Dissertations / Theses on the topic 'Functional algorithms'

Consult the top 50 dissertations / theses for your research on the topic 'Functional algorithms'.

1

King, David Jonathan. "Functional programming and graph algorithms." Thesis, University of Glasgow, 1996. http://theses.gla.ac.uk/1629/.

Abstract:
This thesis is an investigation of graph algorithms in the non-strict purely functional language Haskell. Emphasis is placed on the importance of achieving an asymptotic complexity as good as with conventional languages. This is achieved by using the monadic model for including actions on the state. Work on the monadic model was carried out at Glasgow University by Wadler, Peyton Jones, and Launchbury in the early nineties and has opened up many diverse application areas. One area is the ability to express data structures that require sharing. Although graphs are not presented in this style, the data structures that graph algorithms use are. Several examples of stateful algorithms are given, including union/find for disjoint sets and the linear-time sort binsort. The graph algorithms presented are not new, but are traditional algorithms recast in a functional setting. Examples include strongly connected components, biconnected components, Kruskal's minimum cost spanning tree, and Dijkstra's shortest paths. The presentation is lucid, giving more insight than usual. The functional setting allows for complete calculational-style correctness proofs, which is demonstrated with many examples. The benefits of using a functional language for expressing graph algorithms are quantified by looking at the issues of execution times, asymptotic complexity, correctness, and clarity, in comparison with traditional approaches. The intention is to be as objective as possible, pointing out both the weaknesses and the strengths of using a functional language.
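The monadic techniques the thesis builds on are easy to illustrate. Below is a minimal sketch of our own (not code from the thesis): a depth-first traversal whose visited set is a mutable array inside Haskell's ST monad, so the linear-time imperative core stays hidden behind a pure interface.

```haskell
import Control.Monad.ST (ST, runST)
import Data.Array (Array, bounds, (!))
import Data.Array.ST (STUArray, newArray, readArray, writeArray)

type Graph = Array Int [Int]

-- Depth-first search from a root: the mutable visited array is
-- allocated, updated, and discarded inside runST, so dfs itself
-- is externally pure yet runs in linear time.
dfs :: Graph -> Int -> [Int]
dfs g root = runST $ do
  visited <- newArray (bounds g) False :: ST s (STUArray s Int Bool)
  let go v = do
        seen <- readArray visited v
        if seen
          then return []
          else do
            writeArray visited v True
            rest <- mapM go (g ! v)
            return (v : concat rest)
  go root
```

For example, with `g = listArray (0, 3) [[1], [2], [0, 3], []]` (using `listArray` from Data.Array), the call `dfs g 0` returns `[0,1,2,3]`.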
2

Karmakar, Saurav. "Statistical Stability and Biological Validity of Clustering Algorithms for Analyzing Microarray Data." Digital Archive @ GSU, 2005. http://digitalarchive.gsu.edu/math_theses/3.

Abstract:
Simultaneous measurement of the expression levels of thousands to tens of thousands of genes across multiple tissue types is a result of advances in microarray technology. These expression levels provide clues about gene function and have enabled better diagnosis and treatment of serious diseases like cancer. To solve the mystery of unknown gene functions, a biological-to-statistical mapping is needed in terms of classifying the genes. Here we introduce a novel approach combining the statistical consistency and the biological relevance of the clusters produced by a clustering method. We employ two performance measures in combination, measuring the statistical stability and the functional similarity of the cluster members on a set of gene expressions with known biological functions. Through this analysis we construct a platform for predicting unknown gene functions using the outperforming clustering algorithm.
3

Danilenko, Nikita [Verfasser]. "Designing Functional Implementations of Graph Algorithms / Nikita Danilenko." Kiel : Universitätsbibliothek Kiel, 2016. http://d-nb.info/1102933031/34.

4

Klingner, John. "Distributed and Decentralized Algorithms for Functional Programmable Matter." Thesis, University of Colorado at Boulder, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10980638.

Abstract:
Programmable matter is made up of large quantities of particles that can sense, actuate, communicate, and compute. Motivated to imbue these materials with functionality, this thesis presents algorithmic and hardware developments that meet the unique challenges posed by large-scale robot collectives. The quantity of robots involved necessitates algorithms and processes whose requirements for communication, computation, and memory scale sub-linearly in the number of robots, where scaling cannot be avoided altogether. Included are methods for communication, movement, synchronization, and localization. To encourage application to a variety of hardware platforms, the theoretical underpinnings of these contributions are made as abstract as possible. These methods are tested experimentally on real hardware, using the Droplet swarm robotics platform I have developed. I also present abstractions that relate global performance properties of a functional object composed of programmable matter to local properties of the hardware platform from which the object is composed. This thesis is further supported by example implementations of functional objects on the Droplets: a TV remote control, a pong game, and a keyboard with mouse.

5

Hu, Jialu [Verfasser]. "Algorithms to Identify Functional Orthologs And Functional Modules from High-Throughput Data / Jialu Hu." Berlin : Freie Universität Berlin, 2015. http://d-nb.info/1064869807/34.

6

Ilberg, Peter. "Floyd : a functional programming language with distributed scope." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8187.

7

Demir, Sumeyra Ummuhan. "Image Processing Algorithms for Diagnostic Analysis of Microcirculation." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/137.

Abstract:
Microcirculation has become a key factor for the study and assessment of tissue perfusion and oxygenation. Detection and assessment of the microvasculature using videomicroscopy from the oral mucosa provides a metric on the density of blood vessels in each single frame. Information pertaining to the density of these microvessels within a field of view can be used to quantitatively monitor and assess the changes occurring in tissue oxygenation and perfusion over time. Automated analysis of this information can be used for real-time diagnostic and therapeutic planning of a number of clinical applications, including resuscitation. The objective of this study is to design an automated image processing system to segment microvessels, estimate the density of blood vessels in video recordings, and identify the distribution of blood flow. The proposed algorithm consists of two main stages: video processing and image segmentation. The first step of video processing is stabilization, in which block matching is applied to the video frames, with similarity measured by cross-correlation coefficients. The main techniques used in the segmentation step are multi-thresholding and pixel verification based on calculated geometric and contrast parameters. Segmentation results and differences between video frames are then used to identify the capillaries with blood flow. After categorizing blood vessels as active or passive, according to the amount of blood flow, quantitative measures identifying microcirculation are calculated. The algorithm is applied to videos obtained using the Microscan Side-stream Dark Field (SDF) imaging technique, captured from healthy and critically ill humans and animals. Segmentation results were compared and validated by a blind, detailed inspection by experts who used a commercial semi-automated image analysis software program, AVA (Automated Vascular Analysis). The algorithm was found to extract approximately 97% of functionally active capillaries and blood vessels in every frame. The aim is to eliminate human interaction, increase accuracy, and reduce computation time. The proposed method is an entirely automated process that can perform stabilization, pre-processing, segmentation, and microvessel identification without human intervention. The method may allow for assessment of microcirculatory abnormalities occurring in critically ill and injured patients, including close to real-time determination of the adequacy of resuscitation.
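For reference, block matching of the kind described here typically scores each candidate displacement (u, v) with the normalized cross-correlation between a frame f and a template t; a standard form (not necessarily the exact variant used in this work) is

```latex
\mathrm{NCC}(u,v)=
\frac{\sum_{x,y}\bigl(f(x,y)-\bar f\bigr)\bigl(t(x-u,y-v)-\bar t\bigr)}
     {\sqrt{\sum_{x,y}\bigl(f(x,y)-\bar f\bigr)^{2}\,
            \sum_{x,y}\bigl(t(x-u,y-v)-\bar t\bigr)^{2}}},
```

and the displacement maximizing this coefficient aligns the frames.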
8

Belešová, Michaela. "Aplikace evolučního algoritmu při tvorbě regresních testů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236142.

Abstract:
This master's thesis deals with the application of an evolutionary algorithm to the creation of regression tests. The first section describes functional verification, verification methodology, regression tests, and evolutionary algorithms. In the following section, an evolutionary algorithm whose purpose is to reduce the number of test vectors obtained in the process of functional verification is proposed. The proposed algorithm is then implemented, a set of experiments is evaluated, and the results are discussed.
9

Shafi, Muhammmad Imran, and Muhammad Akram. "Functional Approach towards Approximation Problem." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1245.

Abstract:
Approximation algorithms are widely used for problems related to computational geometry, complex optimization problems, discrete min-max problems, and NP-hard and space-hard problems. Due to the complex nature of such problems, imperative languages are perhaps not the best-suited solution when it comes to their actual implementation. Functional languages like Haskell could be a good candidate for the aforementioned issues. Haskell is used in industry as well as in commercial applications, e.g., concurrent applications, statistics, symbolic math, and financial analysis. Several approximation algorithms have been proposed for different problems that naturally arise in DNA clone classification. In this thesis, we have performed an initial and explorative study on applying functional languages to approximation algorithms. Specifically, we have implemented a well-known approximate clustering algorithm both in Haskell and in Java, and we discuss the suitability of functional languages for the implementation of approximation algorithms, in particular for graph-theoretical approximate clustering problems with applications in DNA clone classification. We also further explore the characteristics of Haskell that make it suitable for solving certain classes of problems that are hard to implement using imperative languages.
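As a hypothetical illustration of the concision at stake (not the algorithm implemented in the thesis), a single assignment step of a centre-based clustering pass is a few lines of Haskell:

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Label each one-dimensional point with the index of its nearest
-- centre; one assignment step of a centre-based clustering pass.
assign :: [Double] -> [Double] -> [(Double, Int)]
assign centres points = [ (p, nearest p) | p <- points ]
  where
    nearest p = snd (minimumBy (comparing fst)
                       [ (abs (p - c), i) | (c, i) <- zip centres [0 ..] ])
```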
10

Toh, Justin Sieu-Sung. "Iterative diagonalisation algorithms in plane wave density functional theory with applications to surfaces." Thesis, University of Cambridge, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615143.

11

Brown, Michael Scott. "A Species-Conserving Genetic Algorithm for Multimodal Optimization." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/104.

Abstract:
The problem of multimodal function optimization has been addressed by much research, producing many different search techniques. Niche Genetic Algorithms are one area that has attempted to solve this problem. Many Niche Genetic Algorithms use some type of radius; when multiple optima occur within the radius, these algorithms have a difficult time locating them, and problems that have arbitrarily close optima create a greater challenge. This paper presents a new Niche Genetic Algorithm framework called the Dynamic-radius Species-conserving Genetic Algorithm, which extends existing Genetic Algorithm research. The framework enhances an existing Niche Genetic Algorithm in two ways. As the name implies, the radius of the algorithm varies during execution: a uniform radius can cause issues if it is not set correctly during initialization, and a dynamic radius compensates for these issues. The framework also does not attempt to locate all of the optima in a single pass; it attempts to find some optima and then uses a tabu list to exclude those areas of the domain from future iterations. To exclude previously located optima, the framework uses a fitness-sharing approach and a seed-exclusion approach. This new framework addresses many areas of difficulty in current multimodal function optimization research. This research used the experimental research methodology: a series of classic benchmark function optimization problems was used to compare the framework to other algorithms representing classic and current Niche Genetic Algorithms. Results show that the new framework does very well in locating optima in a variety of benchmark functions. In functions that have arbitrarily close optima, the framework outperforms other algorithms; compared to other Niche Genetic Algorithms, it does equally well in locating optima that are not arbitrarily close. Results indicate that varying the radius during execution and the use of a tabu list assist in solving function optimization problems for continuous functions that have arbitrarily close optima.
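For context, the fitness-sharing mechanism mentioned above is usually stated as in Goldberg and Richardson's formulation: the raw fitness of individual i is derated by the niche count within a sharing radius (the dissertation's dynamic-radius variant differs in how the radius is set):

```latex
f'_i=\frac{f_i}{\sum_j \mathrm{sh}(d_{ij})},\qquad
\mathrm{sh}(d)=
\begin{cases}
1-\left(d/\sigma_{\mathrm{share}}\right)^{\alpha} & \text{if } d<\sigma_{\mathrm{share}},\\
0 & \text{otherwise,}
\end{cases}
```

where d_ij is the distance between individuals i and j and α controls the shape of the sharing function.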
12

Vroon, Daron. "Automatically Proving the Termination of Functional Programs." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19734.

Abstract:
Establishing the termination of programs is a fundamental problem in the field of software verification. For transformational programs, termination is used to extend partial correctness to total correctness. For reactive systems, termination reasoning is used to establish liveness properties. In the context of theorem proving, termination is used to establish the consistency of definitional axioms and to automate proofs by induction. Of course, termination is an undecidable problem, as Turing himself proved. However, the question remains: how automatic can a general termination analysis be in practice? In this dissertation, we develop two new general frameworks for reasoning about termination and demonstrate their effectiveness in automating the task of proving termination in the domain of applicative first-order functional languages. The foundation of the first framework is the development of the first known complete set of algorithms for ordinal arithmetic over an ordinal notation. We provide algorithms for ordinal ordering (<), addition, subtraction, multiplication, and exponentiation on the ordinals up to epsilon-naught, and we prove correctness and complexity results for each algorithm. We also create a library for automating arithmetic reasoning over epsilon-naught in the ACL2 theorem proving system. This ordinal library enables termination proofs that were not possible in previous versions of ACL2. The foundation of the second framework is an algorithm for fully automating termination reasoning with no user assistance. This algorithm uses a combination of theorem proving and static analysis to create a Calling Context Graph (CCG), a novel abstraction that captures the looping behavior of the program. Calling Context Measures (CCMs) are then used to prove that no infinite path through the CCG can be an actual computation of the program. We implement this algorithm in ACL2 and empirically evaluate its effectiveness on the regression suite, a collection of over 11,000 user-defined functions from a wide variety of applications.
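To give a flavour of the first framework: ordinals below epsilon-naught admit a Cantor normal form on which ordering is a short lexicographic recursion. The sketch below is ours, not Vroon's ACL2 code, and assumes exponents are kept strictly decreasing with positive coefficients.

```haskell
-- An ordinal below epsilon_0 in Cantor normal form:
-- CNF [(a1,c1),...,(ak,ck)] denotes w^a1 * c1 + ... + w^ak * ck
-- with exponents a1 > a2 > ... and coefficients ci > 0.
newtype CNF = CNF [(CNF, Integer)]
  deriving (Eq)

instance Ord CNF where
  compare (CNF xs) (CNF ys) = go xs ys
    where
      go [] [] = EQ
      go [] _  = LT   -- fewer terms: strictly smaller
      go _  [] = GT
      go ((a, c) : as) ((b, d) : bs) =
        compare a b <> compare c d <> go as bs  -- lexicographic
```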
13

譚晓慧 and Xiaohui Tan. "Optimization and stability analysis on light-weight multi-functional smart structures using genetic algorithms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B41290707.

14

Scott, Daniel. "The discovery of new functional oxides using combinatorial techniques and advanced data mining algorithms." Thesis, University College London (University of London), 2008. http://discovery.ucl.ac.uk/15214/.

Abstract:
Electroceramic materials research is a wide ranging field driven by device applications. For many years, the demand for new materials was addressed largely through serial processing and analysis of samples often similar in composition to those already characterised. The Functional Oxide Discovery project (FOXD) is a combinatorial materials discovery project combining high-throughput synthesis and characterisation with advanced data mining to develop novel materials. Dielectric ceramics are of interest for use in telecommunications equipment; oxygen ion conductors are examined for use in fuel cell cathodes. Both applications are subject to ever increasing industry demands and materials designs capable of meeting the stringent requirements are urgently required. The London University Search Instrument (LUSI) is a combinatorial robot employed for materials synthesis. Ceramic samples are produced automatically using an ink-jet printer which mixes and prints inks onto alumina slides. The slides are transferred to a furnace for sintering and transported to other locations for analysis. Production and analysis data are stored in the project database. The database forms a valuable resource detailing the progress of the project and forming a basis for data mining. Materials design is a two stage process. The first stage, forward prediction, is accomplished using an artificial neural network, a Baconian, inductive technique. In a second stage, the artificial neural network is inverted using a genetic algorithm. The artificial neural network prediction, stoichiometry and prediction reliability form objectives for the genetic algorithm which results in a selection of materials designs. The full potential of this approach is realised through the manufacture and characterisation of the materials. The resulting data improves the prediction algorithms, permitting iterative improvement to the designs and the discovery of completely new materials.
15

Tan, Xiaohui. "Optimization and stability analysis on light-weight multi-functional smart structures using genetic algorithms." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B41290707.

16

Delgado, Reyes Lourdes Marielle. "Evaluating motion processing algorithms for use with fNIRS data from young children." Thesis, University of Iowa, 2015. https://ir.uiowa.edu/etd/5929.

Abstract:
Motion artifacts are often a significant component of the measured signal in functional near-infrared spectroscopy (fNIRS) experiments. A variety of methods have been proposed to address this issue, including principal component analyses (PCA), Kalman filtering, correlation-based signal improvement (CBSI), wavelet filtering, spline interpolation, and autoregressive algorithms. The efficacy of these techniques has been compared using simulated data; however, our understanding of how these techniques fare when dealing with task-based cognitive data is limited. Recently, Brigadoi et al. (2014) quantitatively compared 6 motion correction techniques in a sample of adult data measured during a simple cognitive task. Wavelet filtering showed the most promise as an optimal technique for motion correction. Because fNIRS is often used with infants and young children, it is critical to evaluate the effectiveness of motion correction techniques directly with data from these age groups. Here we examined which techniques are most effective with data from young children. The efficacy of each technique was compared quantitatively using objective metrics related to the physiological properties of the hemodynamic response using two different sets of parameters to ensure maximum retention of included trials. Results showed that targeted PCA (tPCA) and CBSI retained a higher number of trials. These techniques also performed well in direct head-to-head comparisons with the other approaches using both quantitative metrics and a qualitative assessment. The CBSI technique corrected many of the artifacts present in our data; however, this technique was highly influenced by the parameters used to detect motion. The tPCA technique, by contrast, was robust across changes in parameters while also performing well across all comparison metrics. We conclude, therefore, that tPCA is an effective technique for the correction of motion artifacts in fNIRS data from young children.
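For reference, the CBSI technique discussed here (Cui et al., 2010) exploits the typical negative correlation between oxy- and deoxyhaemoglobin; in its usual statement the corrected signals are

```latex
x_{\mathrm{HbO}}^{*}=\tfrac{1}{2}\bigl(x_{\mathrm{HbO}}-\alpha\,x_{\mathrm{HbR}}\bigr),
\qquad
x_{\mathrm{HbR}}^{*}=-\,x_{\mathrm{HbO}}^{*}/\alpha,
\qquad
\alpha=\sigma(x_{\mathrm{HbO}})/\sigma(x_{\mathrm{HbR}}),
```

so the correction depends directly on the estimated standard deviations of the two chromophores, consistent with the parameter sensitivity reported above.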
17

Acharya, Vineeth Vadiraj. "Branch Guided Metrics for Functional and Gate-level Testing." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/51661.

Abstract:
With the increasing complexity of modern day processors and system-on-a-chip (SOCs), designers invest a lot of time and resources into testing and validating these designs. To reduce the time-to-market and cost, the techniques used to validate these designs have to constantly improve. Since most of the design activity has moved to the register transfer level (RTL), test methodologies at the RTL have been gaining momentum. We present a novel functional test generation framework for functional test generation at RTL. A popular software-based metric for measuring the effectiveness of an RTL test suite is branch coverage. But exercising hard-to-reach branches is still a challenge and requires good understanding of the design semantics. The proposed framework uses static analysis to extract certain semantics of the circuit and uses several data structures to model these semantics. Using these data structures, we assist the branch-guided search to exercise these hard-to-reach branches. Since the correlation between high branch coverage and detecting defects and bugs is not clear, we present a new metric at the RTL which augments the RTL branch coverage with state values. Vectors which have higher scores on the new metric achieve higher branch and state coverages, and therefore can be applied at different levels of abstraction such as post-silicon validation. Experimental results show that use of the new metric in our test generation framework can achieve a high level of branch and fault coverage for several benchmark circuits, while reducing the length of the vector sequence. This work was supported in part by the NSF grant 1016675.
18

Leroy, Arthur. "Multi-task learning models for functional data and application to the prediction of sports performances." Thesis, Université de Paris (2019-....), 2020. http://www.theses.fr/2020UNIP7089.

Abstract:
The present document is dedicated to the analysis of functional data and the definition of multi-task models for regression and clustering. The purpose of this work is twofold and finds its origins in the problem of talent identification in elite sports. This context provides a leading illustrative example for the methods and algorithms introduced subsequently, while also raising the problem of studying multiple time series that are assumed to share information and are generally observed on irregular grids. The central method and the associated algorithm developed in this thesis focus on functional regression using multi-task Gaussian process (GP) models. This non-parametric probabilistic framework defines a prior distribution on the functions generating the data of several individuals. Sharing information across individuals, through a mean process, offers enhanced modelling compared to a single-task GP, along with a thorough quantification of uncertainty. An extension of this model is then proposed through the definition of a multi-task GP mixture. Such an approach extends the assumption of a unique underlying mean process to multiple ones, each associated with a cluster of individuals. These two methods, respectively called Magma and MagmaClust, provide new insights on GP modelling as well as state-of-the-art performance on both prediction and clustering. From the applicative point of view, the analyses focus on the performance curves of young swimmers, and a preliminary exploration of the real datasets highlights the existence of different progression patterns over a career. After training on a dataset, the algorithm Magma provides a probabilistic prediction of the future performances of each young swimmer, offering a valuable forecasting tool for talent identification. Finally, the extension proposed by MagmaClust allows the automatic construction of clusters of swimmers according to their similarities in terms of progression patterns, leading once more to enhanced predictions. The methods proposed in this thesis have been fully implemented and are freely available.
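In outline, the Magma model assumes each individual curve is the sum of a common mean process and an individual-specific deviation, both Gaussian-process distributed (notation ours, simplified from the thesis):

```latex
y_i(t)=\mu_0(t)+f_i(t)+\varepsilon_i(t),\qquad
\mu_0\sim\mathcal{GP}\bigl(m_0,k_\theta\bigr),\quad
f_i\sim\mathcal{GP}\bigl(0,c_{\theta_i}\bigr),\quad
\varepsilon_i\sim\mathcal{N}\bigl(0,\sigma_i^2\bigr),
```

and MagmaClust replaces the single mean process with one per cluster of individuals.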
19

Godichon-Baggioni, Antoine. "Algorithmes stochastiques pour la statistique robuste en grande dimension." Thesis, Dijon, 2016. http://www.theses.fr/2016DIJOS053/document.

Abstract:
This thesis focuses on stochastic algorithms in high dimension and their application to robust statistics. In what follows, high dimension may mean that the size of the studied samples is large or that the variables under consideration take values in high-dimensional (not necessarily finite-dimensional) spaces. To analyze such data, it is useful to consider algorithms that are fast, that do not require storing all the data, and that allow the estimates to be updated easily. In large samples of high-dimensional data, outlier detection is often complicated; nevertheless these outliers, even if they are few, can strongly disturb simple indicators such as the mean and the covariance. We therefore focus on robust estimates, which are not overly sensitive to outliers. In a first part, we are interested in the recursive estimation of the geometric median, a robust indicator of location which can be preferred to the mean when part of the studied data is contaminated. For this purpose, we introduce a Robbins-Monro algorithm as well as its averaged version, before building non-asymptotic confidence balls for these estimates and exhibiting their L^p and almost sure rates of convergence. In a second part, we focus on the estimation of the Median Covariation Matrix (MCM), a robust dispersion indicator linked to the geometric median; furthermore, if the studied variable has a symmetric law, this indicator has the same eigenvectors as the covariance matrix. This last property makes the MCM particularly interesting for robust principal component analysis. We introduce a recursive algorithm that estimates simultaneously the geometric median, the MCM, and its q main eigenvectors. We first give the strong consistency of the estimators of the MCM, before exhibiting their rates of convergence in quadratic mean. In a third part, in the light of the work on the estimates of the median and of the Median Covariation Matrix, we exhibit the almost sure and L^p rates of convergence of averaged stochastic gradient algorithms in Hilbert spaces, under less restrictive assumptions than in the literature. Two applications to robust statistics are then given: estimation of geometric quantiles and robust logistic regression. In the last part, we aim to fit a sphere to a noisy point cloud spread around a complete or truncated sphere. More precisely, we consider a random variable with a truncated spherical distribution, and we want to estimate its center as well as its radius. To this end, we introduce a projected stochastic gradient algorithm and its averaged version. We establish the strong consistency of these estimators as well as their rates of convergence in quadratic mean, and the asymptotic normality of the averaged algorithm.
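Concretely, the averaged Robbins-Monro recursion for the geometric median has a simple form (following Cardot, Cénac and Zitt, on which this part builds; the step size γ_n is proportional to n^(-α) for some α in (1/2, 1)):

```latex
Z_{n+1}=Z_n+\gamma_n\,\frac{X_{n+1}-Z_n}{\lVert X_{n+1}-Z_n\rVert},
\qquad
\bar Z_{n+1}=\bar Z_n+\frac{1}{n+1}\bigl(Z_{n+1}-\bar Z_n\bigr),
```

so each new observation X_{n+1} moves the estimate a bounded step toward itself, which is what gives the procedure its robustness.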
20

Robertson, Calum Stewart. "Parallel data mining on cycle stealing networks." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15970/1/Calum_Robertson_Thesis.pdf.

Abstract:
In a world where electronic databases are used to store ever-increasing quantities of data, it is becoming harder to mine useful information from them. There is therefore a need for a highly scalable parallel architecture capable of handling the ever-increasing complexity of data mining problems. A cycle stealing network is one possible scalable solution to this problem: it allows users to donate their idle cycles to form a virtual supercomputer by connecting multiple machines via a network. This research aims to establish whether cycle stealing networks, specifically the G2 system developed at the Queensland University of Technology, are viable for large scale data mining problems. The computationally intensive sequence mining, feature selection, and functional dependency mining problems are deliberately chosen to test the usefulness and scalability of G2. Tests have shown that G2 is highly scalable where the ratio of computation to communication is approximately known. However, for combinatorial problems where computation times are difficult or impossible to predict, and communication costs can be unpredictable, G2 often provides little or no speedup. This research demonstrates that existing sequence mining and functional dependency mining techniques are not suited to a client-server style cycle stealing network like G2. However, the feature selection problem is well suited to G2, and a new sequence mining algorithm offers performance comparable to other existing, non-cycle-stealing, parallel sequence mining algorithms. Furthermore, new functional dependency mining algorithms offer substantial benefits over existing serial algorithms.
21

Robertson, Calum Stewart. "Parallel Data Mining On Cycle Stealing Networks." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15970/.

22

Rosić, Bojana [Verfasser], and Hermann [Akademischer Betreuer] Matthies. "Variational Formulations and Functional Approximation Algorithms in Stochastic Plasticity of Materials / Bojana Rosic ; Betreuer: Hermann Matthies." Braunschweig : Technische Universität Braunschweig, 2012. http://d-nb.info/1175822434/34.

23

Klipstein, Richard Henry. "Algorithms and mathematical methods for extraction of functional information from magnetic resonance images of the heart." Thesis, Imperial College London, 1988. http://hdl.handle.net/10044/1/47139.

24

Liénard, Jean. "Models of the Basal Ganglia : Study of the Functional Anatomy and Pathophysiology using Multiobjective Evolutionary Algorithms." Paris 6, 2013. http://www.theses.fr/2013PA066125.

Abstract:
Our work contributes to the field of computational models of the basal ganglia (BG) through the use of multi-objective evolutionary algorithms. We first characterized what underlies the hypothesized selection capability of the BG structure with two models: the CBG (Girard et al. 2008) and the GPR (Gurney et al. 2001). The direct and indirect pathways were found to be important; the thalamic loop, indifferent; and the GPe → GPi/SNr projection, antagonistic to selection. Two pathways explain the better selectivity of the CBG: the GPe → MSN connection and the diffuse pattern of GPe → GPi/SNr. We also built plausible BG models that respect a collection of constraints drawn from a review of the anatomical and electrophysiological primate literature. Our first result is that the anatomical and electrophysiological data are consistent if we suppose a GPe → GPi/SNr projection that is weakly inhibitory. Our second result is that these plausible models perform selection among their inputs, with electrophysiological activities that are themselves plausible. Studying the pattern of the GPe → GPi/SNr projection, we found that a diffuse pattern is more efficient for selection. Finally, we studied with the plausible models the origin of the oscillations occurring in Parkinson's disease. We first established delays matching the timing data from stimulation experiments. Modelling dopamine depletion by a moderate, plausible increase in single-spike efficiency in the STN, GPe, and GPi/SNr is enough to trigger oscillatory regimes in the β-band. The oscillation frequency is highly dependent on the GPe ↔ STN delays, which could not plausibly support θ-band oscillations in our models.
25

Rizzo, Gaia. "Development of novel computational algorithms for quantitative voxel-wise functional brain imaging with positron emission tomography." Doctoral thesis, Università degli studi di Padova, 2012. http://hdl.handle.net/11577/3422179.

Abstract:
Positron Emission Tomography (PET) allows the study, in vivo, of important biological processes in both animals and humans. In particular, it is widely used for receptor studies, where it allows quantitative functional imaging of physiological parameters in terms of receptor binding, volume of distribution, and/or receptor occupancy. This thesis presents an overview of kinetic modeling in PET, as well as novel Bayesian methodologies for voxel-wise quantification of PET data, applied to various datasets. The proposed methods are a good alternative for the generation of reliable parametric maps and, applied to clinical data, are expected to simplify the detection of small, specific pathological areas. As additional results, new compartmental models were developed for [11C]SCH442416 and [11C]MDL100907 data, and a new clustering approach that allows segmenting the brain volume even for PET data with a high level of noise was implemented. This new method was also applied to the selection of the optimal reference region for [11C]MDL100907 data.
26

Stojkovic, Ivan. "Functional Norm Regularization for Margin-Based Ranking on Temporal Data." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/522550.

Abstract:
Quantifying properties of interest is an important problem in many domains, e.g., assessing the condition of a patient, estimating the risk of an investment, or the relevance of a search result. However, the properties of interest are often latent and hard to assess directly, making it difficult to obtain the classification or regression labels needed to learn a predictive model from observable features. In such cases, it is typically much easier to obtain a relative comparison of two instances, i.e., to assess which one is more intense with respect to the property of interest. One framework able to learn from this kind of supervision is the ranking SVM, and it forms the basis of our approach. Applications to biomedical datasets bring specific additional challenges. The first, and the major one, is the limited number of data examples, due to expensive measuring technology and/or the infrequency of the conditions of interest; this makes both the identification of patterns/models and their validation less reliable. Repeated samples from the same subject are collected on multiple occasions over time, which breaks the IID sample assumption and introduces a dependency structure that needs to be taken into account appropriately. Also, feature vectors are high-dimensional, typically of much higher cardinality than the number of samples, making models less useful and their learning less efficient. The hypothesis of this dissertation is that functional norm regularization can help alleviate these challenges by improving the generalization ability and/or learning efficiency of predictive models, specifically of approaches based on the ranking SVM framework. The temporal nature of the data was addressed with a loss that fosters temporal smoothness of the functional mapping, accounting for the assumption that temporally proximate samples are more correlated. The large number of feature variables was handled using the sparsity-inducing L1 norm, so that most features have zero effect in the learned functional mapping. The proposed sparse (temporal) ranking objective is convex but non-differentiable; a smooth dual form is therefore derived, taking the form of a quadratic function with box constraints, which allows efficient optimization. For the case of multiple similar tasks, a joint learning approach based on matrix norm regularization, using the trace norm L* and the sparse row norm L21, is also proposed. An alternating minimization with proximal optimization algorithm was developed to solve this multi-task objective. The generalization potential of the proposed high-dimensional and multi-task ranking formulations was assessed in a series of evaluations on synthetically generated and real datasets. The high-dimensional approach was applied to disease severity score learning from gene expression data in human influenza cases and compared against several alternative approaches; it resulted in a scoring function with improved predictive performance, as measured by the fraction of correctly ordered testing pairs, and a set of selected features of high robustness according to three similarity measures. The multi-task approach was applied to three human viral infection problems and to learning exam scores in Math and English. The proposed formulation with a mixed matrix norm was overall more accurate than formulations with single-norm regularization.
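As a point of reference, the sparse ranking objective described above extends the standard ranking SVM; stripped of the temporal-smoothness term, an L1-regularized pairwise hinge loss of this type reads (notation ours):

```latex
\min_{w}\;\lambda\lVert w\rVert_{1}
+\sum_{(i,j):\,y_i\succ y_j}\max\bigl(0,\,1-w^{\top}(x_i-x_j)\bigr),
```

where the sum runs over supervised pairs in which instance i is known to be more intense than instance j.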
27

More, Joshua N. "Algorithms and computer code for ab initio path integral molecular dynamics simulations." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:b8ca7471-21e3-4240-95b1-8775e5d6c08f.

Abstract:
This thesis presents i-PI, a new path integral molecular dynamics code designed to capture nuclear quantum effects in ab initio electronic structure calculations of condensed phase systems. The software implements estimators used to calculate a wide range of static and dynamical properties, as well as state-of-the-art techniques used to increase the computational efficiency of path integral simulations. i-PI has been designed in a highly modular fashion, to ensure that it is as simple as possible to develop and implement new algorithms to keep up with the research frontier, and so that users can take maximum advantage of the numerous freely available electronic structure programs without needing to rewrite large amounts of code. Among the functionality of the i-PI code are a novel integrator for constant pressure dynamics, which is used to investigate the properties of liquid water at 750 K and 10 GPa, and efficient estimators for the calculation of single particle momentum distributions, which are used to study the properties of solid and liquid ammonia. These show, respectively, that i-PI can be used to make predictions about systems which are both difficult to study experimentally and highly non-classical in nature, and that it can illustrate the relative advantages and disadvantages of different theoretical methods and their ability to reproduce experimental data.
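For orientation, path integral molecular dynamics of the kind i-PI drives samples the standard ring-polymer Hamiltonian for N beads (textbook form, not an i-PI-specific expression):

```latex
H_N(\mathbf p,\mathbf q)=\sum_{j=1}^{N}
\left[\frac{p_j^{2}}{2m}
+\frac{1}{2}m\,\omega_N^{2}\,(q_j-q_{j+1})^{2}
+V(q_j)\right],
\qquad
\omega_N=\frac{N}{\beta\hbar},\quad q_{N+1}\equiv q_1,
```

whose classical statistics reproduce the quantum Boltzmann statistics of the physical particle as N grows.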
28

Wu, Jiann-Yuarn. "A study of a moving contact algorithm." Thesis, Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/80074.

Abstract:
A nonlinear moving contact algorithm has been implemented to model the sticking-sliding inelastic behavior in the interlocks of steel sheet pile sections subjected to axial tension. Previously, numerical instabilities were encountered during the solution process while conducting a series of verification problems for the algorithm by the Newton-Raphson method. In an attempt to identify the cause of these instabilities, an in-depth study of the effect of the fineness of the finite element mesh on the convergence of the solution has been undertaken. The solution process credited to Riks and Wempner has been used to find the postbuckling equilibrium path of shallow reticulated domes. This algorithm, with some modifications, is used to move from one load step to another in this study. As in most nonlinear problems, the size of the load step influences the rate of convergence. In addition, in the moving contact problem nodes can move along the sides of an element on the contact surface, so the mesh refinement also affects the rate of convergence. To study the effects of both of these parameters, a series of test problems was run with variable load steps and mesh refinements. The modified Riks-Wempner algorithm, which automatically adjusts the load step size as the solution process advances, successfully solved all the inelastic and large displacement problems attempted. From the mesh refinement studies two conclusions were reached: for curved boundaries use curved elements, and avoid the use of irregularly shaped elements. Finally, the improved solution algorithm is applied to sheet pile interlocks loaded in axial tension. Results for progressively increasing load show the spread of yielding in the thumbs and fingers of the interlocks and the sliding of one past another as the deformations increase.
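For context, the Riks-Wempner (arc-length) method referred to above augments the equilibrium equations with a constraint that ties the displacement increment Δu and the load-factor increment Δλ to a prescribed arc length Δℓ; in the common spherical form (a general statement, not necessarily the thesis's exact variant):

```latex
\Delta\mathbf{u}^{\mathsf T}\Delta\mathbf{u}
+\psi^{2}\,\Delta\lambda^{2}\,\mathbf{f}^{\mathsf T}\mathbf{f}
=\Delta\ell^{2},
```

where f is the reference load vector and ψ a scaling parameter; adjusting Δℓ between steps is what allows the load step size to adapt automatically.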
29

Castiel, Eyal. "Study of QB-CSMA algorithms." Thesis, Toulouse, ISAE, 2019. http://www.theses.fr/2019ESAE0038.

Abstract:
The performance of wireless networks, in which users share the air as the medium for their communications, is strongly limited by electromagnetic interference: two nearby users transmitting on the same frequency will experience interference between their messages, possibly leading to the loss of the transmitted information. It is therefore crucial to develop medium access protocols that limit interference by choosing, in an effective and distributed manner, which stations are allowed to transmit at each instant. From a scientific point of view, this is a difficult problem which has attracted the attention of the computer science and applied probability communities for more than 30 years. Recently, a new class of medium access protocols, called adaptive CSMA, has emerged and appears very promising: for example, these protocols have been shown to exhibit a desirable property, throughput optimality (maximum stability). The goal of this project is to deepen our knowledge of queue-based adaptive CSMA protocols (CSMA-QB), which is to this day still quite limited, and in particular to prove theoretical results clarifying the achievable trade-off between throughput and delay. In the underlying probabilistic model, each user of the network is represented by a node of a graph G, called the interference graph, such that two neighbors in the graph cannot be active simultaneously. Packets to be transmitted arrive at each node over time, and the question is which nodes should be active at a given moment. CSMA-QB answers it as follows: an active node deactivates at a constant rate, while an inactive node that is not blocked by any of its neighbors activates at a rate that depends on the number of packets awaiting transmission through a function ψ called the activation function. The general aim of the thesis is to understand the influence of the topology of G and of the choice of ψ on the performance of the protocol. To this end, we study the mixing time of the Glauber dynamics as well as a classical phenomenon in probability theory called stochastic averaging, which together give a fine understanding of the dynamic behavior of the network.
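In symbols, and consistent with the description above: an inactive, unblocked node v with queue length q_v activates at rate ψ(q_v), while an active node deactivates at a constant rate. A representative choice from the queue-based CSMA literature (not necessarily the one analysed in the thesis) is

```latex
\psi(q)=\log(1+q),
```

which grows with the backlog slowly enough to keep the dynamics tractable.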
30

Cannon, Jordan. "Statistical analysis and algorithms for online change detection in real-time psychophysiological data." Thesis, University of Iowa, 2009. https://ir.uiowa.edu/etd/342.

Full text
Abstract:
Modern systems produce a great amount of information and cues on which human operators must act. On one hand, these complex systems can place a high demand on an operator's cognitive load, potentially overwhelming them and causing poor performance. On the other hand, some systems utilize extensive automation to accommodate their complexity; this can cause an operator to become complacent and inattentive, which again leads to deteriorated performance (Wilson, Russell, 2003a; Wilson, Russell, 2003b). An ideal human-machine interface would be one that optimizes the functional state of the operator, preventing overload while not permitting complacency, thus resulting in improved system performance. An operator's functional state (OFS) is the momentary ability of an operator to meet task demands with their cognitive resources. A high OFS indicates that an operator is vigilant and aware, with ample cognitive resources to achieve satisfactory performance. A low OFS, however, indicates a non-optimal cognitive load, either too much or too little, resulting in sub-par system performance (Wilson, Russell, 1999). With the ability to measure and detect changes in OFS in real time, a closed-loop system between the operator and machine could optimize OFS through the dynamic allocation of tasks. For instance, if the system detects that the operator is in cognitive overload, it can automate certain tasks, allowing the operator to better focus on salient information. Conversely, if the system detects under-vigilance, it can allocate tasks back to the manual control of the operator. In essence, this system operates to "dynamically match task demands to [an] operator's momentary cognitive state", thereby achieving optimal OFS (Wilson, Russell, 2007). This concept is termed adaptive aiding and has been the subject of much research, with recent emphasis on accurately assessing OFS in real time. OFS is commonly measured indirectly, for example through overt performance metrics on tasks; if performance is declining, a low OFS is assumed. Another indirect measure is the subjective estimate of mental workload, where an operator narrates his/her perceived functional state while performing tasks (Wilson, Russell, 2007). Unfortunately, indirect measures of OFS are often infeasible in operational settings; performance metrics are difficult to construct for highly automated complex systems, and subjective workload estimates are often inaccurate and intrusive (Wilson, Russell, 2007; Prinzel et al., 2000; Smith et al., 2001). OFS can be more directly measured via psychophysiological signals such as the electroencephalogram (EEG) and electrooculography (EOG). Current research has demonstrated these signals' ability to respond to changing cognitive load and to measure OFS (Wilson, Fisher, 1991; Wilson, Fisher, 1995; Gevins et al., 1997; Gevins et al., 1998; Byrne, Parasuraman, 1996). Moreover, psychophysiological signals are continuously available and can be obtained in a non-intrusive manner, a prerequisite for their use in operational environments. The objective of this study is to advance schemes which detect change in OFS by monitoring psychophysiological signals in real time. Reviews of similar methods can be found in, e.g., Wilson and Russell (2003a) and Wilson and Russell (2007). Many of these methods employ pattern recognition to classify mental workload into one of several discrete categories.
For instance, given an experiment with easy, medium and hard tasks, and assuming the tasks induce varying degrees of mental workload on a subject, these methods classify which task is being performed for each epoch of psychophysiological data. The most common classifiers are artificial neural networks (ANN) and multivariate statistical techniques such as stepwise discriminant analysis (SWDA). ANNs have proved especially effective at classifying OFS as they account for the non-linear and higher order relationships often present in EEG/EOG data; they routinely achieve classification accuracy greater than 80%. However, the discrete output of these classification schemes is not conducive to real-time change detection. They accurately classify OFS, but they do not indicate when OFS has changed; the change points remain ambiguous and left to subjective interpretation. Thus, the present study introduces several online algorithms which objectively determine change in OFS via real-time psychophysiological signals. The following chapters describe the dataset evaluated, discuss the statistical properties of psychophysiological signals, and detail various algorithms which utilize these signals to detect real-time changes in OFS. The results of the algorithms are presented along with a discussion. Finally, the study is concluded with a comparison of each method and recommendations for future application.
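One classical online scheme of the kind the later chapters build on is the CUSUM statistic, which flags a sustained shift in the mean of a monitored feature such as an EEG band-power index. The sketch below is a generic two-sided CUSUM on synthetic data, shown only to make "online change detection" concrete; it is not one of the thesis's algorithms, and the drift and threshold values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic feature stream: the mean shifts from 0 to 1.5 at sample 300
# (a stand-in for a real-time psychophysiological index).
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.5, 1.0, 300)])

k, h = 0.5, 5.0            # reference value (drift) and decision threshold
g_pos = g_neg = 0.0
for t, xt in enumerate(x):
    g_pos = max(0.0, g_pos + xt - k)   # accumulates upward mean shifts
    g_neg = max(0.0, g_neg - xt - k)   # accumulates downward mean shifts
    if g_pos > h or g_neg > h:
        print(f"change detected at sample {t}")   # expected shortly after 300
        break
```

The threshold h trades detection delay against false alarms, which is exactly the kind of trade-off an online OFS monitor must tune.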
APA, Harvard, Vancouver, ISO, and other styles
31

Mabaso, Bongani Andy. "Robots are not ethical like people : an exemplarist framework for functional ethics in everyday robots in ordinary contexts." Thesis, University of Pretoria, 2020. http://hdl.handle.net/2263/76011.

Full text
Abstract:
As increasingly intelligent and autonomous robots continue to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that this will have on ordinary everyday contexts. One of the most urgent societal expectations for these robots is the need for them to behave in a manner that respects human moral values. In response to this challenge, the field of machine ethics began with the goal of developing robots capable of making moral decisions. This work addresses the challenge by proposing that Exemplarist Virtue Ethics (or simply exemplarism), an ethical theory based on virtue ethics, is a viable and suitable alternative framework for building ethical robots. Exemplarism is a moral theory that grounds key moral concepts (e.g. virtue, right act, etc.) by direct reference to exemplars of moral goodness. Essentially, it proposes that agents can develop their moral character by following the example of morally admirable agents in society. This work will demonstrate how an exemplarist machine ethics framework presents several advantages for building ethical robots over traditional approaches based on consequentialism and deontology. Specifically, exemplarism not only helps us formalise the concept of artificial moral agency more coherently, but it also lends itself to a technically feasible approach for building ethical robots. This thesis will, therefore, also demonstrate the technical feasibility of actually building an exemplarist AMA and suggest ways in which it could be further improved. Since exemplarism has scarcely been applied to this area in prior literature, this thesis provides an alternative perspective on the machine ethics project, which, in some small way, can help to advance the field.
Thesis (PhD)--University of Pretoria, 2020.
Philosophy
PhD
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
32

Pendurkar, Rajesh. "Design for testability techniques and optimization algorithms for performance and functional testing of multi-chip module interconnections." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/16635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Arkoudas, Kostas. "On the termination of recursive algorithms in pure first-order functional languages with monomorphic inductive data types." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/39074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Shinde, Swapnil Sadashiv. "Radio Access Network Function Placement Algorithms in an Edge Computing Enabled C-RAN with Heterogeneous Slices Demands." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20063/.

Full text
Abstract:
Network slicing provides a scalable and flexible solution for resource allocation with performance guarantees and isolation from other services in the 5G architecture. 5G has to handle several active use cases with very different requirements, and a single solution satisfying all the extreme requirements would demand an overspecified and high-cost network architecture. Moreover, to fulfill these diverse requirements, each service requires different resources from the radio access network (RAN), the edge, and the central offices of the 5G architecture, and hence different deployment options. Network function virtualization allows RAN functions to be allocated to different nodes. URLLC services require function placement close to the RAN to fulfill their low-latency requirement, while eMBB services can rely on cloud deployment. An arbitrary allocation of network functions across services is therefore not possible. We aim to develop algorithms that find a service-aware placement of RAN functions in a multi-tenant environment with heterogeneous demands. We considered three generic classes of slices: eMBB, URLLC, and mMTC. Every slice is characterized by specific requirements, while the nodes and the links are resource-constrained. The function placement problem amounts to minimizing the overall cost of allocating the different functions to the different nodes, organized in layers, while respecting the requirements of the given slices. Specifically, we proposed three algorithms based on the normalized preference associated with each slice on the different layers of the RAN architecture. The maximum preference algorithm places each function at the most preferred position defined in the preference matrix. On the other hand, the proposed modified preference algorithm provides solutions by keeping track of the availability of computational resources and the latency requirements of the different services. We also used the Exhaustive Search Method for solving the function allocation problem.
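As a rough illustration of the maximum-preference idea, with a capacity check in the spirit of the modified variant, the sketch below places each slice's function at its most preferred feasible layer. The layer names, preference matrix, capacities and demands are invented for the example and are not data from the thesis.

```python
# Layers of a RAN architecture, ordered from cell site to central cloud.
layers = ["RU", "DU", "CU", "cloud"]
capacity = {"RU": 2, "DU": 4, "CU": 6, "cloud": 100}  # assumed CPU units

# Normalized preference of each slice for each layer (higher = preferred);
# URLLC prefers low layers (latency), eMBB prefers the cloud (throughput).
pref = {
    "URLLC": {"RU": 0.9, "DU": 0.7, "CU": 0.3, "cloud": 0.1},
    "eMBB":  {"RU": 0.1, "DU": 0.3, "CU": 0.6, "cloud": 0.9},
    "mMTC":  {"RU": 0.2, "DU": 0.5, "CU": 0.7, "cloud": 0.6},
}
demand = {"URLLC": 2, "eMBB": 3, "mMTC": 1}  # CPU units per slice's function

used = {l: 0 for l in layers}
placement = {}
for slice_name, d in demand.items():
    # Try layers from most to least preferred, keeping the placement feasible.
    for layer in sorted(layers, key=lambda l: -pref[slice_name][l]):
        if used[layer] + d <= capacity[layer]:
            placement[slice_name] = layer
            used[layer] += d
            break

print(placement)   # e.g. {'URLLC': 'RU', 'eMBB': 'cloud', 'mMTC': 'CU'}
```

A full solution would also check per-link latency budgets before accepting a layer, which is what distinguishes the modified preference algorithm from the plain maximum-preference one.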
APA, Harvard, Vancouver, ISO, and other styles
35

Franco, Ricardo Augusto Pereira. "Verificação funcional de sistemas digitais utilizando algoritmos genéticos na geração de dados aplicada a metodologia veriSC." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/5028.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
The process of creating an Intellectual Property Core (IP-core) has become more complex with the advent of modern electronic circuit technology, encouraging the development of new techniques and methodologies to assist this process. A fundamental and critical stage of a hardware design is the verification phase, in which it is checked that the IP-core was implemented according to its specification, ensuring that prototyping and subsequent large-scale production (System on Chip) are feasible. The verification phase is the biggest bottleneck in a hardware design (BERGERON, 2006). VeriSC is a methodology for performing hardware verification through simulation, that is, by means of functional verification. This work complements the VeriSC methodology with an algorithm based on the concept of Genetic Algorithms (GAs). The proposed algorithm modifies the data generation of the methodology with the goal of reducing verification time and improving the generated data, changing the generation from purely pseudorandom to random-guided and thereby increasing the reliability of the verification performed by VeriSC. The algorithm has a generic part (templates) that eases the implementation of new functional verification environments for new DUVs, and it can be incorporated into other functional verification methodologies. Finally, three case studies are presented in which the stimuli created using the GA are compared with the previous implementation of the VeriSC methodology.
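To make the random-guided generation concrete, here is a minimal genetic-algorithm loop that evolves input stimuli toward higher functional coverage. The fitness function (the number of distinct value bins a stimulus sequence hits) is a stand-in invented for the example; the thesis's actual coverage metric and templates are not reproduced here.

```python
import random

BINS = 16          # assumed coverage bins over an 8-bit input space
POP, GENS = 20, 30

def fitness(stimuli):
    # Coverage stand-in: how many distinct bins the stimulus sequence hits.
    return len({s * BINS // 256 for s in stimuli})

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(s, rate=0.1):
    return [random.randrange(256) if random.random() < rate else x for x in s]

pop = [[random.randrange(256) for _ in range(12)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                    # truncation selection
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print("best coverage:", fitness(best), "of", BINS, "bins")
```

In a verification flow, fitness would instead come from the simulator's coverage report for the DUV, which is what turns pseudorandom generation into random-guided generation.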
APA, Harvard, Vancouver, ISO, and other styles
36

Castellanos, Lucia. "Statistical Models and Algorithms for Studying Hand and Finger Kinematics and their Neural Mechanisms." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/273.

Full text
Abstract:
The primate hand, a biomechanical structure with over twenty kinematic degrees of freedom, has an elaborate anatomical architecture. Although the hand requires complex, coordinated neural control, it endows its owner with an astonishing range of dexterous finger movements. Despite a century of research, however, the neural mechanisms that enable finger and grasping movements in primates are largely unknown. In this thesis, we investigate statistical models of finger movement that can provide insights into the mechanics of the hand, and that can have applications in neural-motor prostheses, enabling people with limb loss to regain natural function of the hands. There are many challenges associated with (1) the understanding and modeling of the kinematics of fingers, and (2) the mapping of intracortical neural recordings into motor commands that can be used to control a Brain-Machine Interface. These challenges include: potential nonlinearities; confounded sources of variation in experimental datasets; and dealing with high degrees of kinematic freedom. In this work we analyze kinematic and neural datasets from repeated-trial experiments of hand motion, with the following contributions: (1) we identified static, nonlinear, low-dimensional representations of grasping finger motion, with accompanying evidence that these nonlinear representations are better than linear representations at predicting the type of object being grasped over the course of a reach-to-grasp movement; in addition, we show evidence of better encoding of these nonlinear (versus linear) representations in the firing of some neurons collected from the primary motor cortex of rhesus monkeys. (2) A functional alignment of grasping trajectories, based on total kinetic energy, as a strategy to account for temporal variation and to exploit a repeated-trial experiment structure. (3) An interpretable model for extracting dynamic synergies of finger motion, based on Gaussian Processes, that decomposes and reduces the dimensionality of variance in the dataset; we derive efficient algorithms for parameter estimation, show accurate reconstruction of grasping trajectories, and illustrate the interpretation of the model parameters. (4) Sound evidence of single-neuron decoding of interpretable grasping events, plus insights about the amount of grasping information extractable from just a single neuron. (5) The Laplace Gaussian Filter (LGF), a deterministic approximation to the posterior mean that is more accurate than Monte Carlo approximations for the same computational cost, and that in an off-line decoding task is more accurate than the standard Population Vector Algorithm.
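As a toy illustration of the Laplace idea behind the LGF (not the full filter developed in the thesis), one can approximate a posterior mean by the posterior mode found with a few Newton steps. Here a Gaussian prior is combined with a Poisson spike-count likelihood, a standard pairing in neural decoding; all numbers are invented.

```python
import math

# Assumed model: x ~ N(0, 1) prior, spike count y ~ Poisson(exp(a*x + b)).
a, b, y = 1.2, 0.5, 3

def grad_log_post(x):   # d/dx of [log prior + log likelihood]
    return -x + y * a - a * math.exp(a * x + b)

def hess_log_post(x):
    return -1.0 - a * a * math.exp(a * x + b)

x = 0.0
for _ in range(20):     # Newton's method converges quickly here
    x -= grad_log_post(x) / hess_log_post(x)

# Laplace approximation: posterior ~ N(mode, -1 / hessian at the mode).
var = -1.0 / hess_log_post(x)
print(f"posterior mean ~ {x:.4f}, variance ~ {var:.4f}")
```

The appeal in a decoding loop is that this deterministic step replaces a particle-filter Monte Carlo step at a fraction of the cost.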
APA, Harvard, Vancouver, ISO, and other styles
37

Vieira, Milreu Paulo. "Enumerating functional substructures of genome-scale metabolic networks : stories, precursors and organisations." Phd thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00850704.

Full text
Abstract:
In this thesis, we present three different methods for enumerating special subnetworks contained in a metabolic network: metabolic stories, minimal precursor sets and chemical organisations. For each of the three methods, we give theoretical results, and for the first two, we further provide an illustration of how to apply them in order to study the metabolic behaviour of living organisms. Metabolic stories are defined as maximal directed acyclic graphs whose sets of sources and targets are restricted to a subset of the nodes. The initial motivation of this definition was to analyse metabolomics experimental data, but the method was also explored in a different context. Metabolic precursor sets are minimal sets of nutrients that are able to produce metabolites of interest. We present three different methods for enumerating minimal precursor sets and we illustrate the application in a study of the metabolic exchanges in a symbiotic system. Chemical organisations are sets of metabolites that are simultaneously closed and self-maintaining, which captures some stability feature in the
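To illustrate the precursor-set notion, here is a brute-force sketch: a nutrient set is a precursor set for a target metabolite if the forward closure of the reactions fired from the nutrients produces the target, and one keeps the inclusion-minimal such sets. The toy network is invented, and the thesis's enumeration methods are far more efficient than this subset scan.

```python
from itertools import combinations

# Toy reaction network (assumed): substrates -> products.
reactions = [({"A"}, {"B"}), ({"B", "C"}, {"D"}), ({"A", "D"}, {"T"}),
             ({"E"}, {"C"})]
nutrients, target = ["A", "C", "E"], "T"

def closure(avail):
    # Repeatedly fire every reaction whose substrates are all available.
    avail = set(avail)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions:
            if subs <= avail and not prods <= avail:
                avail |= prods
                changed = True
    return avail

# Enumerate inclusion-minimal nutrient subsets that produce the target.
minimal = []
for k in range(1, len(nutrients) + 1):
    for cand in combinations(nutrients, k):
        if target in closure(cand) and \
           not any(set(m) <= set(cand) for m in minimal):
            minimal.append(cand)
print(minimal)   # [('A', 'C'), ('A', 'E')] for this toy network
```

Scanning subsets by increasing size guarantees that any superset of an already-found precursor set is skipped, which is what makes the output inclusion-minimal.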
APA, Harvard, Vancouver, ISO, and other styles
38

Guan, Wei. "New support vector machine formulations and algorithms with application to biomedical data analysis." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41126.

Full text
Abstract:
The Support Vector Machine (SVM) classifier seeks to find the separating hyperplane w·x = r that maximizes the margin distance 1/||w||₂². It can be formalized as an optimization problem that minimizes the hinge loss Σᵢ (1 − yᵢ f(xᵢ))₊ plus the L₂-norm of the weight vector. SVM is now a mainstay method of machine learning. The goal of this dissertation work is to solve different biomedical data analysis problems efficiently using extensions of SVM, in which we augment the standard SVM formulation based on the application requirements. The biomedical applications we explore in this thesis include: cancer diagnosis, biomarker discovery, and energy function learning for protein structure prediction. Ovarian cancer diagnosis is problematic because the disease is typically asymptomatic, especially at early stages of progression and/or recurrence. We investigate a sample set consisting of 44 women diagnosed with serous papillary ovarian cancer and 50 healthy women or women with benign conditions. We profile the relative metabolite levels in the patient sera using a high-throughput ambient ionization mass spectrometry technique, Direct Analysis in Real Time (DART). We then reduce the diagnostic classification on these metabolic profiles to a functional classification problem and solve it with the functional Support Vector Machine (fSVM) method. The assay distinguished between the cancer and control groups with an unprecedented 99% accuracy (100% sensitivity, 98% specificity) under leave-one-out cross-validation. This approach has significant clinical potential as a cancer diagnostic tool. High-throughput technologies provide simultaneous evaluation of thousands of potential biomarkers to distinguish different patient groups. In order to assist biomarker discovery from these low-sample-size, high-dimensional cancer data, we first explore a convex relaxation of the L₀-SVM problem and solve it using mixed-integer programming techniques. We further propose a more efficient L₀-SVM approximation, the fractional norm SVM, by replacing the L₂-penalty with an L_q-penalty (q in (0,1)) in the optimization formulation. We solve it through the Difference of Convex functions (DC) programming technique. Empirical studies on synthetic data sets as well as real-world biomedical data sets support the effectiveness of our proposed L₀-SVM approximation methods over other commonly-used sparse SVM methods such as the L₁-SVM method. A critical open problem in ab initio protein folding is protein energy function design. We reduce the problem of learning an energy function for ab initio folding to a standard machine learning problem, learning-to-rank. Based on the application requirements, we constrain the reduced ranking problem with non-negative weights and develop two efficient algorithms for non-negativity constrained SVM optimization. We conduct an empirical study on an energy data set for random conformations of 171 proteins that fall into the ab initio folding class. We compare our approach with the optimization approach used in the protein structure prediction tool TASSER. Numerical results indicate that our approach was able to learn energy functions with improved rank statistics (evaluated by pairwise agreement) as well as improved correlation between the total energy and structural dissimilarity.
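For reference, the standard primal objective mentioned at the start, (λ/2)||w||₂² plus the average hinge loss Σᵢ (1 − yᵢ(w·xᵢ + b))₊, can be minimized with plain subgradient descent. The sketch below is the ordinary L₂ hinge-loss SVM on invented data, not the L₀/L_q variants or the DC programming procedure developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-class data: two Gaussian blobs, labels in {-1, +1}.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

lam, w, b = 0.01, np.zeros(2), 0.0
for t in range(1, 2001):
    eta = 1.0 / (lam * t)                # Pegasos-style decaying step size
    margins = y * (X @ w + b)
    viol = margins < 1                   # points violating the margin
    # Subgradient of (lam/2)||w||^2 + mean hinge loss over all samples.
    gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
    gb = -y[viol].sum() / len(X)
    w, b = w - eta * gw, b - eta * gb

print("train accuracy:", ((X @ w + b) * y > 0).mean())
```

Swapping the L₂ penalty for an L_q one makes the objective non-convex, which is why the thesis turns to mixed-integer and DC programming instead of this simple descent.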
APA, Harvard, Vancouver, ISO, and other styles
39

Koskinen, M. (Miika). "Automatic assessment of functional suppression of the central nervous system due to propofol anesthetic infusion:from EEG phenomena to a quantitative index." Doctoral thesis, University of Oulu, 2006. http://urn.fi/urn:isbn:9514281756.

Full text
Abstract:
The rationale for automatically monitoring anesthetic drug effects on the central nervous system (CNS) is to improve possibilities to gain objective information on a patient's state and to adjust the medication individually. Although monitors have shown their usefulness in practice, there are still a number of unclear issues, especially with respect to the scientific foundations and validity of CNS monitoring techniques, and in monitoring the light hypnotic levels. Current monitors are, for example, often based on heuristics and ad hoc solutions. However, a quantitative index for anesthetic drug effect should have a sound relationship with observations and with the selected control variable. The research objectives are: (1) to explore propofol anesthetic related neurophysiological phenomena that can be applied in the automatic assessment of CNS suppression; (2) to develop a valid control variable for this purpose; (3) by means of digital signal processing and mathematical modeling, to design and to evaluate the performance of an index that correlates with the control variable. This dissertation introduces potentially useful neurophysiological phenomena, such as changes in phase synchronization between different EEG channels due to anesthesia, and painful stimulus evoked responses during the burst suppression. Furthermore, it refines the progression of the time-frequency patterns during the induction of anesthesia and shows their relation to the instant of unresponsiveness. The presented spontaneous and evoked EEG phenomena provide complementary information about the CNS functional suppression. Most significantly, the dissertation proposes a continuous and observation based control variable (r scale) and the means to predict its values by using EEG data. The definition of the scale provides a basis for anticipating the instant of the loss of consciousness. Additionally, the phase synchronization index as an indicator of drug effect is introduced. The approximate entropy descriptor performance is evaluated and optimised with a non-stationary signal recorded during the induction of anesthesia. The results open up opportunities to improve the preciseness, scientific validity and the interpretation of information on the anesthetic effects on CNS, and therefore, to increase the reliability of the anesthesia monitoring. Further work is needed to extend and verify the results in deep anesthesia.
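A phase synchronization index of the kind mentioned can be illustrated with the phase-locking value (PLV) between two channels, computed from their analytic phases. This is a generic textbook construction on synthetic signals, not the exact estimator of the dissertation.

```python
import numpy as np
from scipy.signal import hilbert

fs, T = 250, 4                       # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(1)
# Two toy "EEG channels": a shared 10 Hz rhythm plus independent noise.
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

# Instantaneous phases via the analytic signal.
phx = np.angle(hilbert(x))
phy = np.angle(hilbert(y))

# Phase-locking value: |mean of exp(i * phase difference)|, in [0, 1].
plv = np.abs(np.mean(np.exp(1j * (phx - phy))))
print(f"PLV = {plv:.3f}")            # near 1 for strongly locked channels
```

A constant phase lag still yields a high PLV; it is the stability of the phase difference over time, not its value, that the index measures.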
APA, Harvard, Vancouver, ISO, and other styles
40

Hnízdilová, Bohdana. "Registrace ultrazvukových sekvencí s využitím evolučních algoritmů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442502.

Full text
Abstract:
This master's thesis deals with the registration of ultrasound sequences using evolutionary algorithms. The theoretical part of the thesis describes the process of image registration and its optimization using genetic and metaheuristic algorithms. The thesis also presents problems that may occur during the registration of ultrasonographic images and various approaches to their registration. In the practical part of the work, several optimization methods for the registration of a number of sequences were implemented and compared.
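The sketch below shows the core loop of intensity-based registration driven by a simple evolution strategy: candidate translations are scored by the mean squared error between the shifted image and the reference, and the population drifts toward the best candidates. The (1+λ) strategy, the pure-translation model and the synthetic images are illustrative simplifications; real ultrasound registration uses richer transforms and similarity metrics.

```python
import numpy as np

rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:64, 0:64]
ref = np.exp(-((yy - 32) ** 2 + (xx - 28) ** 2) / 100.0)  # smooth blob "image"
true_shift = (5, -3)                          # ground truth for the demo
moving = np.roll(ref, true_shift, axis=(0, 1))

def cost(shift):
    # Mean squared error after undoing the candidate shift.
    dy, dx = int(round(shift[0])), int(round(shift[1]))
    return np.mean((np.roll(moving, (-dy, -dx), axis=(0, 1)) - ref) ** 2)

# (1 + lambda) evolution strategy over translations.
parent = np.zeros(2)
sigma, lam = 4.0, 12
for gen in range(60):
    children = parent + sigma * rng.standard_normal((lam, 2))
    best = min(children, key=cost)
    if cost(best) < cost(parent):
        parent = best
    sigma *= 0.95                             # slowly narrow the search

print("estimated shift:", np.round(parent).astype(int), "true:", true_shift)
```

Evolutionary search is attractive here because ultrasound similarity surfaces are noisy and non-differentiable, exactly the setting where gradient-based optimizers struggle.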
APA, Harvard, Vancouver, ISO, and other styles
41

Zhong, Cuncong. "Computational Methods for Comparative Non-coding RNA Analysis: From Structural Motif Identification to Genome-wide Functional Classification." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5894.

Full text
Abstract:
Non-coding RNA (ncRNA) plays critical functional roles, such as regulation, catalysis, and modification, in biological systems. Non-coding RNAs exert their functions based on their specific structures, which makes the thorough understanding of their structures a key step towards their complete functional annotation. In this dissertation, we cover a suite of computational methods for the comparison of ncRNA secondary and 3D structures, and their applications to ncRNA molecular structural annotation and a genome-wide functional survey. Specifically, we have contributed the following five computational methods. First, we have developed an alignment algorithm to compare RNA structural motifs, which are recurrent RNA 3D structural fragments. Second, we have improved upon the previous alignment algorithm by incorporating base-stacking information and devising a new branch-and-bound algorithm. Third, we have developed a clustering pipeline for RNA structural motif classification using the above alignment methods. Fourth, we have generalized the clustering pipeline to a genome-wide analysis of RNA secondary structures. Finally, we have devised an ultra-fast alignment algorithm for RNA secondary structure by using the sparse dynamic programming technique. A large number of novel RNA structural motif instances and ncRNA elements have been discovered throughout these studies. We anticipate that these computational methods will significantly facilitate the analysis of ncRNA structures in the future.
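For a flavor of dynamic programming over RNA secondary structure, the setting of the alignment algorithms above, here is the classic Nussinov base-pair maximization recurrence. It is a textbook baseline included only for orientation, not one of the dissertation's algorithms.

```python
# Nussinov algorithm: maximize the number of complementary base pairs.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # increasing subsequence span
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                  # case: i left unpaired
            if (seq[i], seq[j]) in PAIRS:
                best = max(best, dp[i + 1][j - 1] + 1)   # case: i pairs with j
            for k in range(i + 1, j):            # case: bifurcation at k
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # 3: the stacked pairs of a small hairpin
```

The sparse dynamic programming mentioned in the abstract speeds up alignments of such structures by only touching entries that can actually contribute to the optimum.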
Ph.D.
Doctorate
Computer Science
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
42

Кунцев, С. В. "Функціональні особливості GUI–інтерфейсу бібліотеки алгоритмів Xelopes." Thesis, Харківський національний економічний університет, 2009. http://essuir.sumdu.edu.ua/handle/123456789/62723.

Full text
Abstract:
The Xelopes library of algorithms attracts attention with its features: support for Data Mining standards, platform independence, independence from the source data, and availability. For convenience, a graphical user interface (GUI) is supplied together with the library as a separate application. This work considers the functional features of the interface: loading data; viewing the data as a table; viewing the data attributes; viewing statistical information about the data; building a Data Mining model; visualizing the model; saving the model; and applying the model.
APA, Harvard, Vancouver, ISO, and other styles
43

Ragnehed, Mattias. "Functional Magnetic Resonance Imaging for Clinical Diagnosis : Exploring and Improving the Examination Chain." Doctoral thesis, Linköping : Department of Medical and Health Sciences, Linköping University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-18095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Mounirou, Arouna Lukman Moctar. "Construction automatique d'images de pseudo-âges géologiques à partir d'images sismiques par minimisation d'énergie." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0232/document.

Full text
Abstract:
Starting from a set of interpreted data resulting from a prior analysis by an expert operator (horizons, faults), the objective of the thesis is to propose a segmentation of the underlying seismic image in perfect coherence with the laws of geology. The originality of the approach is to develop techniques for segmenting seismic images, based among others on active-contour approaches, constrained by the interpreted data in addition to intrinsic properties computed by automatic processes from the data, without requiring any supervision, in contrast to existing work. A second axis is to automatically order the interpreted horizons (surfaces) and to finely analyse each interval (the region between two horizons), taking into account its content (amplitude, orientation, etc.). All of this leads to the reconstruction of the geological pseudo-time.
APA, Harvard, Vancouver, ISO, and other styles
45

Gomes, Victor Pereira. "Funções recursivas primitivas: caracterização e alguns resultados para esta classe de funções." Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/8514.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
The class of primitive recursive functions is not a formal counterpart of the class of algorithmic functions; we study this special class of numerical functions because many of the functions known to be algorithmic are primitive recursive. Our treatment of the class of primitive recursive functions aims to explore this special class of functions and, from that, to present solutions to the following problems: (1) given the class of primitive recursive derivations, is there an algorithm, that is, a mechanical procedure, for recognizing primitive recursive derivations? (2) Is there a universal function for the class of primitive recursive functions? If so, is this function primitive recursive? (3) Are all algorithmic functions primitive recursive? To provide solutions to these questions, we follow the hypothetico-deductive method and argue based on the works of Davis (1982), Mendelson (2009), Dias and Weber (2010), Rogers (1987), Soare (1987), and Cooper (2004), among others. We present the theory of Turing machines, which is a formal version of the intuitive notion of algorithm, and then the famous Church-Turing thesis, which identifies the class of algorithmic functions with the class of Turing-computable functions. We exhibit the class of primitive recursive functions and show that it is a subclass of the Turing-computable functions. Having explored the class of primitive recursive functions, we prove as results that there is a recognizing algorithm for the class of primitive recursive derivations; that there is a universal function for the class of primitive recursive functions which does not itself belong to the class; and that not every algorithmic function is primitive recursive.
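The primitive recursive schemes are easy to make concrete. In the sketch below, addition and multiplication are built purely from the base functions (zero, successor, projections) via composition and the primitive recursion scheme h(0, y) = f(y), h(n+1, y) = g(n, h(n, y), y); this encoding is a standard textbook construction, not material from the dissertation.

```python
# Base functions of the primitive recursive hierarchy.
zero = lambda *args: 0
succ = lambda x: x + 1
def proj(i):                       # projection function P_i
    return lambda *args: args[i]

def prim_rec(f, g):
    """Primitive recursion: h(0, y) = f(y); h(n+1, y) = g(n, h(n, y), y)."""
    def h(n, y):
        acc = f(y)
        for k in range(n):         # bounded iteration: always terminates
            acc = g(k, acc, y)
        return acc
    return h

# add(0, y) = y;        add(n+1, y) = succ(add(n, y))
add = prim_rec(proj(0), lambda k, acc, y: succ(acc))
# mul(0, y) = 0;        mul(n+1, y) = add(mul(n, y), y)
mul = prim_rec(zero, lambda k, acc, y: add(acc, y))

print(add(3, 4), mul(3, 4))        # 7 12
```

The bounded for-loop is the whole point: every function built this way terminates, which is also why functions like Ackermann's, which outgrow every such bounded recursion, are algorithmic but not primitive recursive.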
APA, Harvard, Vancouver, ISO, and other styles
46

Stacha, Radek. "Optimalizace kogeneračního systému." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231502.

Full text
Abstract:
This master's thesis focuses on the optimization of a cogeneration system for the purpose of comparing optimization methods and evaluating their properties. Each method is described together with block schemes. The first part of the thesis is devoted to the description of the methods and their comparison. The second part covers the development of a hybrid algorithm, which is used to optimize a model of the cogeneration system. Every compared algorithm, together with the hybrid algorithms, is included in the annexes.
APA, Harvard, Vancouver, ISO, and other styles
47

Heil, Katharina Friedlinde. "Systems biological approach to Parkinson's disease." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31043.

Full text
Abstract:
Parkinson's Disease (PD) is the second most common neurodegenerative disease in the Western world. It shows a high degree of genetic and phenotypic complexity with many implicated factors, various disease manifestations but few clear causal links. Ongoing research has identified a growing number of molecular alterations linked to the disease. Dopaminergic neurons in the substantia nigra, specifically their synapses, are the key affected region in PD. Therefore, this work focuses on understanding the disease effects on the synapse, aiming to identify potential genetic triggers and synaptic PD-associated mechanisms. Currently, one of the main challenges in this area is data quality and accessibility. In order to study PD, publicly available data were systematically retrieved and analysed. 418 PD-associated genes could be identified, based on mutations and curated annotations. I curated an up-to-date and complete synaptic proteome map containing a total of 6,706 proteins. Region-specific datasets describing the presynapse, postsynapse and synaptosome were also delimited. These datasets were analysed, investigating similarities and differences, including reproducibility and functional interpretations. Protein-Protein-Interaction Network (PPIN) analysis was chosen to gain deeper knowledge regarding specific effects of PD on the synapse. Thus I generated a customised, filtered, human-specific Protein-Protein Interaction (PPI) dataset, containing 211,824 direct interactions, from four public databases. Proteomics data and PPI information allowed the construction of PPINs. These were analysed, and a set of low-level statistics, including modularity, clustering coefficient and node degree, explaining the network's topology from a mathematical point of view, was obtained. Apart from low-level network statistics, the high-level topology of the PPINs was studied. To identify functional network subgroups, different clustering algorithms were investigated. In the context of biological networks, the underlying hypothesis is that proteins in a structural community are more likely to share common functions. Therefore I attempted to identify PD-enriched communities of synaptic proteins. Once identified, they were compared amongst each other. Three community clusters could be identified as containing largely overlapping gene sets. These contain 24 PD-associated genes. Apart from the known disease-associated genes in these communities, a total of 322 genes was identified. Each of the three clusters is specifically enriched for specific biological processes and cellular components, which include neurotransmitter secretion, positive regulation of synapse assembly, pre- and post-synaptic membrane, scaffolding proteins, neuromuscular junction development and complement activation (classical pathway), amongst others. The presented approach combined a curated set of PD-associated genes, filtered PPI information and synaptic proteomes. Various small- and large-scale analytical approaches, including PPIN topology analysis, clustering algorithms and enrichment studies, identified highly PD-affected synaptic proteins and subregions. Specific disease-associated functions confirmed known research insights and allowed me to propose a new list of previously unknown potential disease-associated genes. Due to its open design, this approach can be used to answer similar research questions regarding other complex diseases.
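For illustration, the low-level statistics named above are one-liners once a PPIN is loaded as a graph. The sketch below uses networkx on a tiny made-up interaction list (the gene symbols are placeholders, not the curated dataset), with greedy modularity communities standing in for the clustering algorithms investigated in the thesis.

```python
import networkx as nx

# Toy PPI edge list; the gene symbols are placeholders for real interactions.
edges = [("SNCA", "PRKN"), ("PRKN", "PINK1"), ("PINK1", "SNCA"),
         ("SNCA", "LRRK2"), ("LRRK2", "GBA"), ("GBA", "DJ1")]
G = nx.Graph(edges)

print("node degrees:", dict(G.degree()))
print("clustering coefficients:", nx.clustering(G))
print("average clustering:", nx.average_clustering(G))

# Community detection (greedy modularity) as a stand-in for the thesis's
# clustering pipeline; proteins in one community often share function.
communities = nx.algorithms.community.greedy_modularity_communities(G)
print("communities:", [sorted(c) for c in communities])
```

On a real PPIN the interesting step follows: testing each community for enrichment in disease-associated genes, which is how candidate gene lists like the 322 above are produced.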
APA, Harvard, Vancouver, ISO, and other styles
48

Götz, Andreas W. [Verfasser], and Andreas [Akademischer Betreuer] Görling. "The Limited Expansion of Diatomic Overlap Density Functional Theory (LEDO-DFT): Development and Implementation of Algorithms, Optimization of Auxiliary Orbitals and Benchmark Calculations / Andreas Walter Götz. Betreuer: Andreas Görling." Erlangen : Universitätsbibliothek der Universität Erlangen-Nürnberg, 2005. http://d-nb.info/1035574977/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Whitinger, Robert. "An Algorithm for the Machine Calculation of Minimal Paths." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3119.

Full text
Abstract:
Problems involving the minimization of functionals date back to antiquity. The mathematics of the calculus of variations has provided a framework for the analytical solution of a limited class of such problems. This paper describes a numerical approximation technique for obtaining machine solutions to minimal path problems. It is shown that this technique is applicable not only to the common case of finding geodesics on parameterized surfaces in R³, but also to the general case of finding minimal functionals on hypersurfaces in Rⁿ associated with an arbitrary metric.
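A minimal sketch of the numerical idea: represent a path on a parameterized surface by sample points and shrink its discrete length functional by gradient descent while keeping the endpoints fixed. The paraboloid z = u² + v² and all step sizes are invented for the example; the thesis's algorithm is not reproduced here.

```python
import numpy as np

def embed(p):                     # (u, v) -> point on the surface z = u^2 + v^2
    return np.array([p[0], p[1], p[0] ** 2 + p[1] ** 2])

def length(path):                 # discrete length of the embedded polyline
    pts = np.array([embed(p) for p in path])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

# Straight-line initial guess in parameter space between two fixed endpoints.
a, b, m = np.array([-1.0, 0.5]), np.array([1.0, 0.5]), 20
path = np.linspace(a, b, m)

eps, step = 1e-4, 0.01
for _ in range(500):
    grad = np.zeros_like(path)
    for i in range(1, m - 1):     # endpoints stay fixed
        for j in range(2):        # numerical gradient of the length functional
            path[i, j] += eps; up = length(path)
            path[i, j] -= 2 * eps; dn = length(path)
            path[i, j] += eps
            grad[i, j] = (up - dn) / (2 * eps)
    path -= step * grad

print("approximate geodesic length:", round(length(path), 4))
```

At convergence the interior points satisfy a discrete version of the geodesic equation; the same descent works for any metric, which is the generalization the abstract refers to.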
APA, Harvard, Vancouver, ISO, and other styles
50

Corman, Etienne. "Functional representation of deformable surfaces for geometry processing." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX075/document.

Full text
Abstract:
Creating and understanding deformations of surfaces is a recurring theme in geometry processing. As smooth surfaces can be represented in many ways, from point clouds to triangle meshes, one of the challenges is being able to compare or deform discrete shapes consistently, independently of their representation. A possible answer is to choose a flexible representation of deformable surfaces that can easily be transported from one structure to another. Toward this goal, the functional map framework proposes to represent maps between surfaces and, by extension, deformations of surfaces as operators acting on functions. This approach was recently introduced in geometry processing but has been used extensively in other fields such as differential geometry, operator theory and dynamical systems, to name just a few. The major advantage of this point of view is to deflect challenging problems, such as shape matching and deformation transfer, toward functional analysis, whose discretization has been well studied in various cases. This thesis develops further analysis and novel applications in this framework. Two aspects of the functional representation framework are discussed. First, given two surfaces, we analyze the underlying deformation. One way to do so is by finding correspondences that minimize the global distortion. To complete the analysis we identify the least and most reliable parts of the mapping by a learning procedure. Once spotted, the flaws in the map can be repaired in a smooth way using a consistent representation of tangent vector fields. The second development concerns the reverse problem: given a deformation represented as an operator, how can a surface be deformed accordingly? In a first approach, we analyze a coordinate-free encoding of the intrinsic and extrinsic structure of a surface as a functional operator. In this framework a deformed shape can be recovered, up to rigid motion, by solving a set of convex optimization problems. Second, we consider a linearized version of the previous method, which enables us to understand deformation fields as acting on the underlying metric. As a result, challenging problems such as deformation transfer can be solved using simple linear systems of equations.
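The core algebraic step of the functional map framework can be shown in a few lines: given descriptor functions expressed in reduced bases on the source and target shapes, with coefficient matrices A and B, the map matrix C is the least-squares solution of CA ≈ B. The random matrices below stand in for Laplace-Beltrami eigenbasis coefficients of real meshes.

```python
import numpy as np

rng = np.random.default_rng(3)
k, m = 20, 60          # basis size, number of corresponding descriptors

# Coefficients of m corresponding descriptor functions in each shape's
# k-dimensional basis (stand-ins for real Laplace-Beltrami coefficients).
C_true = np.linalg.qr(rng.standard_normal((k, k)))[0]   # hidden ground truth
A = rng.standard_normal((k, m))
B = C_true @ A + 0.01 * rng.standard_normal((k, m))     # noisy correspondence

# Functional map estimation: solve min_C ||C A - B||_F^2.
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T

# C now transports any function's coefficients from source to target.
print("relative recovery error:",
      np.linalg.norm(C - C_true) / np.linalg.norm(C_true))
```

Because the unknown is a small k-by-k matrix rather than a pointwise correspondence, regularizers (commutativity with the Laplacian, orthogonality) can be added while keeping the problem a linear least-squares or convex program, which is the convenience the abstract highlights.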
APA, Harvard, Vancouver, ISO, and other styles