Dissertations / Theses on the topic 'Function approximation and processing control'


Consult the top 38 dissertations / theses for your research on the topic 'Function approximation and processing control.'


1

Skelly, Margaret Mary. "Hierarchical Reinforcement Learning with Function Approximation for Adaptive Control." Case Western Reserve University School of Graduate Studies / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=case1081357818.

2

Swingler, Kevin. "Mixed order hyper-networks for function approximation and optimisation." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/25349.

Abstract:
Many systems take inputs, which can be measured and sometimes controlled, and produce outputs, which can also be measured and which depend on the inputs. Taking numerous measurements from such systems produces data, which may be used either to model the system with the goal of predicting the output associated with a given input (function approximation, or regression) or to find the input settings required to produce a desired output (optimisation, or search). Approximating or optimising a function is central to the field of computational intelligence. There are many existing methods for performing regression and optimisation based on samples of data, but they all have limitations. Multi-layer perceptrons (MLPs) are universal approximators, but they suffer from the black box problem, which means that their structure and the function they implement are opaque to the user. They also have a propensity to become trapped in local minima or large plateaux in the error function during learning. A regression method with a structure that allows models to be compared, human knowledge to be extracted, optimisation searches to be guided and model complexity to be controlled is desirable. This thesis presents such a method: a single framework for both regression and optimisation, the mixed order hyper network (MOHN). A MOHN implements a function f: {-1,1}^n → R to arbitrary precision. The structure of a MOHN makes explicit the ways in which input variables interact to determine the function output, which allows human insight and complexity control that are very difficult in neural networks with hidden units. The explicit structure representation also allows efficient algorithms for searching for an input pattern that leads to a desired output. A number of learning rules for estimating the weights from a sample of data are presented, along with a heuristic method for choosing which connections to include in a model.
Several methods for searching a MOHN for inputs that lead to a desired output are compared. Experiments compare a MOHN to an MLP on regression tasks. The MOHN is found to achieve a level of accuracy comparable to an MLP's, but it suffers less from local minima in the error function and shows less variance across multiple training trials. It is also easier to interpret and easier to combine into an ensemble. The trade-off between the fit of a model to its training data and its fit to an independent set of test data is shown to be easier to control in a MOHN than in an MLP. A MOHN is also compared to a number of existing optimisation methods, including estimation of distribution algorithms, genetic algorithms and simulated annealing. The MOHN is able to find optimal solutions in far fewer function evaluations than these methods on tasks selected from the literature.
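The explicit weighted-connection structure this abstract describes can be illustrated with a small sketch (an illustrative toy, not the thesis code; the target function, orders and fitting rule below are invented for the example). Over inputs in {-1,1}^n, a MOHN-style model is a weighted sum of products over subsets of inputs, and on a full sample each connection weight can be read off as a Walsh-basis correlation:

```python
from itertools import combinations, product

def chi(x, S):
    """Parity (Walsh) basis function: the product of the inputs indexed by S."""
    p = 1
    for i in S:
        p *= x[i]
    return p

def fit_mohn(f, n, max_order):
    """Fit weights w_S = E_x[f(x) * chi_S(x)] over all x in {-1,1}^n,
    keeping connections up to the given interaction order."""
    points = list(product([-1, 1], repeat=n))
    weights = {}
    for order in range(max_order + 1):
        for S in combinations(range(n), order):
            w = sum(f(x) * chi(x, S) for x in points) / len(points)
            if abs(w) > 1e-12:
                weights[S] = w
    return weights

def mohn_eval(x, weights):
    """Evaluate f(x) as the weighted sum of connection products."""
    return sum(w * chi(x, S) for S, w in weights.items())

# Toy target mixing a first- and a second-order interaction:
w = fit_mohn(lambda x: 2 * x[0] - x[1] * x[2], n=3, max_order=2)
# w now maps each connection to its weight: {(0,): 2.0, (1, 2): -1.0}
```

Because each weight is attached to a named subset of inputs, the recovered structure is directly readable, which is the interpretability property the abstract contrasts with hidden-unit networks.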
3

Haro, Antonio. "Example Based Processing For Image And Video Synthesis." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5283.

Abstract:
The example based processing problem can be expressed as: "Given an example of an image or video before and after processing, apply a similar processing to a new image or video". Our thesis is that there are some problems where a single general algorithm can be used to create varieties of outputs, solely by presenting examples of what is desired to the algorithm. This is valuable if the algorithm to produce the output is non-obvious, e.g. an algorithm to emulate an example painting's style. We limit our investigations to example based processing of images, video, and 3D models as these data types are easy to acquire and experiment with. We represent this problem first as a texture synthesis influenced sampling problem, where the idea is to form feature vectors representative of the data and then sample them coherently to synthesize a plausible output for the new image or video. Grounding the problem in this manner is useful as both problems involve learning the structure of training data under some assumptions to sample it properly. We then reduce the problem to a labeling problem to perform example based processing in a more generalized and principled manner than earlier techniques. This allows us to perform a different estimation of what the output should be by approximating the optimal (and possibly not known) solution through a different approach.
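The "apply a similar processing" idea can be caricatured in a few lines (a deliberately simplified 1-D stand-in for the feature-vector sampling the abstract describes; the data below are invented): for each element of the new input, copy the output paired with the most similar example input.

```python
def synthesize(example_in, example_out, new_in):
    """Nearest-neighbour example-based processing: each new value receives
    the output that accompanied the closest 'before' example."""
    pairs = list(zip(example_in, example_out))
    return [min(pairs, key=lambda p: abs(p[0] - x))[1] for x in new_in]

# Before/after example pairs, then a new input to process the same way.
result = synthesize([0, 1, 2, 3], [0, 10, 20, 30], [0.1, 2.9])
```

Real image and video versions match multi-dimensional feature vectors and enforce coherence between neighbouring samples, which is what the labeling formulation in the abstract generalises.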
4

Ebeigbe, Donald Ehima. "CONTROL OF RIGID ROBOTS WITH LARGE UNCERTAINTIES USING THE FUNCTION APPROXIMATION TECHNIQUE." Cleveland State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=csu1568034334694515.

5

Lee, Jong Min. "A Study on Architecture, Algorithms, and Applications of Approximate Dynamic Programming Based Approach to Optimal Control." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5048.

Abstract:
This thesis develops approximate dynamic programming (ADP) strategies suitable for process control problems, aimed at overcoming two limitations of model predictive control (MPC): the potentially exorbitant on-line computational requirement and the inability to consider the future interplay between uncertainty and estimation in the optimal control calculation. The suggested approach solves the DP only for the state points visited by closed-loop simulations with judiciously chosen control policies. The approach helps combat the well-known 'curse of dimensionality' of traditional DP, while it allows the user to derive an improved control policy from the initial ones. The critical issue of the suggested method is a proper choice and design of the function approximator. A local averager with a penalty term is proposed to guarantee a stably learned control policy as well as acceptable on-line performance. The thesis also demonstrates the versatility of the proposed ADP strategy on difficult process control problems. First, a stochastic adaptive control problem is presented; in this application an ADP-based control policy shows an "active" probing property that reduces uncertainties, leading to better control performance. The second example is a dual-mode controller, a supervisory scheme that actively prevents the progression of abnormal situations under a local controller at their onset. Finally, two ADP strategies for controlling nonlinear processes based on input-output data are suggested. They are model-based and model-free approaches, and have the advantage of conveniently incorporating knowledge of the identification data distribution into the control calculation, with performance improvement.
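The "local averager with a penalty term" can be sketched as a distance-weighted nearest-neighbour value estimate whose pessimism grows away from the visited states (an illustrative guess at the idea; the penalty form and the toy cost-to-go data are invented, not the thesis' exact scheme):

```python
def local_averager(query, data, k=3, penalty=1.0):
    """Distance-weighted k-nearest-neighbour value estimate with an
    extrapolation penalty: queries far from the sampled states inflate the
    estimate, discouraging the policy from leaving the explored region."""
    dists = sorted((abs(query - s), v) for s, v in data)[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in dists]
    base = sum(w * v for w, (_, v) in zip(weights, dists)) / sum(weights)
    return base + penalty * dists[0][0]  # pessimism grows with data distance

# Cost-to-go samples J(x) ~ 1.5*x^2 at states visited by closed-loop simulation.
data = [(x / 10.0, 1.5 * (x / 10.0) ** 2) for x in range(-10, 11)]
on_data = local_averager(0.0, data)   # interpolation: stays near J(0) = 0
far_off = local_averager(2.0, data)   # extrapolation: penalised upward
```

Penalising distance to the data is one simple way to keep a greedy policy from exploiting optimistic value estimates in unvisited regions, which is the stability concern the abstract raises.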
6

Grönlund, Christer. "Spatio-temporal processing of surface electromyographic signals : information on neuromuscular function and control." Doctoral thesis, Umeå universitet, Institutionen för strålningsvetenskaper, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-958.

Abstract:
During muscle contraction, electrical signals are generated by the muscle cells. The analysis of those signals is called electromyography (EMG). The EMG signal is mainly determined by physiological factors, including so-called central factors (central nervous system origin) and peripheral factors (muscle tissue origin). In addition, during the acquisition of EMG signals, technical factors are introduced (measurement equipment origin). The aim of this dissertation was to develop and evaluate methods to estimate physiological properties of the muscles using multichannel surface EMG (MCsEMG) signals. In order to obtain accurate physiological estimates, a method for automatic signal quality estimation was developed. The method's performance was evaluated using visually classified signals, and the results demonstrated high classification accuracy. A method for estimation of the muscle fibre conduction velocity (MFCV) and the muscle fibre orientation (MFO) was developed. The method was evaluated with synthetic signals and demonstrated high estimation precision at low contraction levels. In order to discriminate between the estimates of MFCV and MFO belonging to single motor units (MUs) or to populations of MUs, density regions of so-called spatial distributions were examined. This method was applied in a study of the trapezius muscle and demonstrated spatial separation of MFCV (as well as MFO) even at high contraction levels. In addition, a method for quantification of MU synchronisation was developed. Its performance on synthetic sEMG signals showed high sensitivity to MU synchronisation and robustness to changes in MFCV. The method was applied in a study of the biceps brachii muscle and the relation to force tremor during fatigue. The results showed that MU synchronisation accounted for about 40% of the force tremor.
In conclusion, new sEMG methods were developed to study muscle function and motor control in terms of muscle architecture, muscle fibre characteristics, and processes within the central nervous system.
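The MFCV estimation mentioned above rests on a simple physical identity, velocity = electrode distance / propagation delay, where the delay can be read from the cross-correlation peak between two channels placed along the fibre direction. A minimal sketch (synthetic data and integer-sample alignment; practical estimators such as the one in this dissertation work at sub-sample resolution):

```python
import random

def mfcv_estimate(ch1, ch2, electrode_dist_m, fs_hz, max_shift=20):
    """Estimate conduction velocity from two surface EMG channels: find the
    inter-channel delay that maximises the cross-correlation, then
    v = distance / delay."""
    def xcorr(shift):
        n = len(ch1) - abs(shift)
        return sum(ch1[i] * ch2[i + shift] for i in range(n))
    best = max(range(1, max_shift + 1), key=xcorr)
    return electrode_dist_m / (best / fs_hz)

# Demo: a noise-like burst, with channel 2 delayed by exactly 5 samples.
random.seed(0)
ch1 = [random.uniform(-1.0, 1.0) for _ in range(300)]
ch2 = [0.0] * 5 + ch1[:-5]
velocity = mfcv_estimate(ch1, ch2, electrode_dist_m=0.01, fs_hz=2048)
# 5 samples at 2048 Hz over 10 mm gives 0.01 / (5/2048) = 4.096 m/s
```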
7

Grönlund, Christer. "Spatio-temporal processing of surface electromyographic signals : information on neuromuscular function and control /." Umeå : Umeå universitet, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-958.

8

Dadashi, Shirin. "Modeling and Approximation of Nonlinear Dynamics of Flapping Flight." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78224.

Abstract:
The first and most imperative step when designing a biologically inspired robot is to identify the underlying mechanics of the system or animal of interest. Most commonly, this process generates a set of coupled nonlinear ordinary or partial differential equations. For this class of systems, the models derived from the morphology of the skeleton are usually very high dimensional, nonlinear, and complex. This is particularly true if joint and link flexibility are included in the model. In addition to complexities that arise from the morphology of the animal, some of the external forces that influence the dynamics of animal motion are very hard to model. A well-established example of these forces is the unsteady aerodynamic forces applied to the wings and the body of insects, birds, and bats. These forces result from the interaction of the flapping motion of the wing and the surrounding air. These forces generate lift and drag during the flapping flight regime. As a result, they play a significant role in the description of the physics that underlies such systems. In this research we focus on dynamic and kinematic models that govern the motion of ground-based robots that emulate flapping flight. The restriction to ground-based biologically inspired robotic systems is predicated on two observations. First, it has become increasingly popular to design and fabricate bio-inspired robots for wind tunnel studies. Second, by restricting the robotic systems to be anchored in an inertial frame, the robotic equations of motion are well understood, and we can focus attention on flapping wing aerodynamics for such nonlinear systems. We study nonlinear modeling, identification, and control problems that feature the above complexities. This document summarizes research progress and plans focused on two key aspects of modeling, identification, and control of nonlinear dynamics associated with flapping flight.
Ph. D.
9

Kölling, Peter [Verfasser]. "Numerical studies on coherent control of semiconductor quantum dots based on k.p-calculations in envelope function approximation / Peter Kölling." Paderborn : Universitätsbibliothek, 2019. http://d-nb.info/1176019848/34.

10

Herron, Robyn. "Connectivity analysis of brain function in children with foetal alcohol spectrum disorder and control children during number processing." Master's thesis, University of Cape Town, 2008. http://hdl.handle.net/11427/3244.

Abstract:
Maternal drinking during pregnancy is a significant problem in the Western Cape, South Africa, with an accompanying high incidence of children diagnosed with foetal alcohol spectrum disorder (FASD). Little is known about the neural correlates governing the disorder that manifest as behavioural abnormalities and cognitive impairments, particularly in arithmetic calculation, repeatedly reported in affected children. The effect of prenatal alcohol exposure on number processing in children was investigated in a functional magnetic resonance imaging (fMRI) study (Meintjes et al., 2007). The results indicate significant differences in activation between alcohol-exposed and non-exposed control children during Exact Addition and Proximity Judgement tasks. This raised the question of whether the groups of children differ in functional connectivity during the number processing tasks. Therefore, the objective of this study was to analyse connectivity between functionally specialised brain areas in the previously collected fMRI data. The fMRI data of 14 controls and 7 alcohol-exposed children for Exact Addition, and of 15 controls and 9 alcohol-exposed children for Proximity Judgement, were available for analysis. A primary aim was to determine normal functional connectivity in control children during number processing and a secondary aim, to investigate any differences in functional connectivity in children with FASD.
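Functional connectivity analyses of this kind commonly reduce to correlating regional fMRI time series; whether this thesis uses plain Pearson correlation or a model-based measure is not stated in the abstract, so the sketch below is a generic illustration on invented data:

```python
def pearson(a, b):
    """Pearson correlation between two equal-length time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def connectivity_matrix(rois):
    """Pairwise correlations between regional time series: the simplest
    functional-connectivity matrix."""
    k = len(rois)
    return [[pearson(rois[i], rois[j]) for j in range(k)] for i in range(k)]

# Three toy 'regions': the second tracks the first, the third opposes it.
cm = connectivity_matrix([[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]])
```

Group differences would then be assessed by comparing such matrices between the control and alcohol-exposed samples.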
11

Poungponsri, Suranai. "An Approach Based On Wavelet Decomposition And Neural Network For ECG Noise Reduction." DigitalCommons@CalPoly, 2009. https://digitalcommons.calpoly.edu/theses/101.

Abstract:
Electrocardiogram (ECG) signal processing has been the subject of intense research in the past years, due to its strategic place in the detection of several cardiac pathologies. However, the ECG signal is frequently corrupted with different types of noise, such as 60 Hz power line interference, baseline drift, electrode movement, and motion artifact. In this thesis, a hybrid two-stage model based on the combination of wavelet decomposition and an artificial neural network is proposed for ECG noise reduction, pairing the excellent localization features of the wavelet transform with the adaptive learning ability of a neural network. Results from the simulations validate the effectiveness of this proposed method. Simulation results on actual ECG signals from the MIT-BIH arrhythmia database [30] show this approach yields improvement over the un-filtered signal in terms of signal-to-noise ratio (SNR).
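A single level of the wavelet stage can be sketched with the Haar transform and a hard threshold on the detail coefficients (a pure-Python toy on an invented signal; the thesis replaces a fixed threshold with a trained neural network, and practical ECG work uses deeper decompositions and smoother wavelets):

```python
def haar_decompose(signal):
    """One Haar level: pairwise averages (approximation) and differences
    (detail), each scaled by 1/sqrt(2). Signal length must be even."""
    half = len(signal) // 2
    approx = [(signal[2*i] + signal[2*i + 1]) / 2 ** 0.5 for i in range(half)]
    detail = [(signal[2*i] - signal[2*i + 1]) / 2 ** 0.5 for i in range(half)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Exact inverse of haar_decompose."""
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / 2 ** 0.5, (a - d) / 2 ** 0.5]
    return out

def denoise(signal, threshold):
    """Zero the small detail coefficients (treated as noise), keep the rest."""
    approx, detail = haar_decompose(signal)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_reconstruct(approx, detail)

noisy = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 1.0, 0.8]
smoothed = denoise(noisy, threshold=0.5)  # small pair-to-pair jitter removed
```

With the threshold at zero the transform round-trips the signal exactly, which makes the decompose/reconstruct pair easy to verify before adding any learning stage.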
12

Wang, Lu. "Task Load Modelling for LTE Baseband Signal Processing with Artificial Neural Network Approach." Thesis, KTH, Signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160947.

Abstract:
This thesis investigates the development of an automatic or guided-automatic tool to predict the hardware (HW) resource occupation, namely task load, with respect to the software (SW) application algorithm parameters in an LTE base station. For the signal processing in an LTE base station it is important to know how many HW resources will be used when running a SW algorithm on a specific platform. This information is valuable for understanding the system and platform better, which can facilitate a reasonable use of the available resources. The process of developing the tool is considered to be the process of building a mathematical model between HW task load and SW parameters, a process defined as function approximation. According to the universal approximation theorem, the problem can be solved by an intelligent method called artificial neural networks (ANNs). The theorem indicates that any function can be approximated with a two-layered neural network as long as the activation function and the number of hidden neurons are proper. The thesis documents a workflow for building the model with the ANN method, as well as research on data subset selection with mathematical methods, such as Partial Correlation and Sequential Searching, as a data pre-processing step for the ANN approach. In order to make the data selection method suitable for ANNs, a modification has been made to the Sequential Searching method, which gives a better result. The results show that it is possible to develop such a guided-automatic tool for prediction purposes in LTE baseband signal processing under specific precision constraints. Compared to other approaches, this model tool with an intelligent approach has a higher precision level and better adaptivity, meaning that it can be used in any part of the platform even though the transmission channels are different.
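The data-subset-selection step can be illustrated with a greedy forward search (a simplified stand-in for the thesis' modified Sequential Searching; the scoring rule and toy data are invented): at each step, add the candidate feature most correlated with the current residual, regress it out, and repeat.

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sequential_select(features, target, k):
    """Greedy forward selection of k feature vectors for predicting target."""
    residual = list(target)
    remaining = dict(enumerate(features))
    chosen = []
    for _ in range(k):
        def score(idx):
            f = remaining[idx]
            denom = (_dot(f, f) * _dot(residual, residual)) ** 0.5
            return abs(_dot(f, residual)) / denom if denom else 0.0
        best = max(remaining, key=score)
        f = remaining.pop(best)
        beta = _dot(f, residual) / _dot(f, f)  # least-squares step on one feature
        residual = [r - beta * x for r, x in zip(residual, f)]
        chosen.append(best)
    return chosen

# Target built from features 2 and 0; feature 1 is irrelevant.
picked = sequential_select(
    [[1, 1, 1, 1], [1, 2, 3, 4], [1, -1, 1, -1]], [7, -3, 7, -3], k=2)
```

Pruning the input set this way before training keeps the ANN small and is the kind of pre-processing the abstract pairs with partial correlation.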
13

Aburakhis, Mohamed Khalifa I. Dr. "Continuous Time and Discrete Time Fractional Order Adaptive Control for a Class of Nonlinear Systems." University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1565018404845161.

14

Parsons, Colton A. "Variable Precision Tandem Analog-to-Digital Converter (ADC)." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1255.

Abstract:
This paper describes an analog-to-digital signal converter which varies its precision as a function of input slew rate (maximum signal rate of change), in order to best follow the input in real time. It uses Flash and Successive Approximation (SAR) conversion techniques in sequence. As part of the design, the concept of "total real-time optimization" is explored, where any delay at all is treated as an error (Error = Delay * Signal Slew Rate). This error metric is proposed for use in digital control systems. The ADC uses a 4-bit Flash converter in tandem with SAR logic that has variable precision (0 to 11 bits). This allows the Tandem ADC to switch from a fast, imprecise converter to a slow, precise converter. The level of precision is determined by the input’s peak rate of change, optimized for minimum real-time error; a secondary goal is to react quickly to input transient spikes. The implementation of the Tandem ADC is described, along with various issues which arise when designing such a converter and how they may be dealt with. These include Flash ADC inaccuracies, rounding issues, and system timing and synchronization. Most of the design is described down to the level of logic gates and related building blocks (e.g. latches and flip-flops), and various logic optimizations are used in the design to reduce calculation delays. The design also avoids active analog circuitry whenever possible – it can be almost entirely implemented with CMOS logic and passive analog components.
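The flash-then-SAR tandem can be sketched arithmetically (an idealised behavioural model with invented interfaces, not the gate-level design in the thesis): a 4-bit flash stage picks a coarse bin, then a variable number of SAR cycles binary-search the residual inside that bin.

```python
def flash4(v, vref):
    """4-bit flash stage: the coarse code a 15-comparator ladder would give."""
    return min(15, max(0, int(v / vref * 16)))

def sar_refine(v, vref, coarse, bits):
    """Successive-approximation refinement: binary-search the residual
    within the selected coarse bin, one bit per cycle."""
    lo = coarse / 16 * vref
    hi = (coarse + 1) / 16 * vref
    code = 0
    for _ in range(bits):
        mid = (lo + hi) / 2
        code <<= 1
        if v >= mid:
            code |= 1
            lo = mid
        else:
            hi = mid
    return code

def tandem_adc(v, vref, sar_bits):
    """Total resolution = 4 flash bits + 0..11 SAR bits. Fewer SAR bits mean
    less conversion delay; the thesis picks sar_bits from the input's recent
    slew rate, which is not modelled here."""
    coarse = flash4(v, vref)
    return (coarse << sar_bits) | sar_refine(v, vref, coarse, sar_bits)
```

With `sar_bits=0` the converter degenerates to the fast 4-bit flash path; with `sar_bits=11` it behaves as a slow 15-bit converter, which is the speed/precision trade the abstract describes.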
15

Nguyen, Huu Phuc. "Développement d'une commande à modèle partiel appris : analyse théorique et étude pratique." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2323/document.

Abstract:
In classical control theory, the control law is generally built on a theoretical model of the system: the mathematical equations representing the system dynamics are used to guarantee that the associated controller stabilizes the closed loop. In practice, however, the actual system differs from the theory, for example through nonlinearities, varying parameters, and unknown disturbances. The approach proposed in this work draws on knowledge of the plant from an analytical model but also from experimental data, collected off-line or on-line. At each time step, the input value that best drives the system toward a chosen objective, minimizing a cost function (for example the distance between the desired and predicted outputs) or maximizing a reward, is computed by an optimization algorithm. The key idea of this approach is to use a numerical behaviour model of the system, in the form of a tabulated prediction function over the joint state/input or input/output spaces, to find the controller's output. Building on earlier work that showed the viability of the concept on the state space, the prediction model is initialized using the best a priori knowledge of the system and then improved by a simple learning algorithm based on the error between measured and predicted data. Two types of prediction map are employed: the first is based on a state-space model (as in the earlier work, but applied to more complex systems); the second is based on an input-output model. The control value that brings the predicted output, within the set of reachable possibilities, closest to the desired output or state is found by an optimization algorithm. The proposed controller has been validated on several applications: real experiments on a quadcopter and real trajectory-tracking tests on the laboratory's electric vehicle show its ability and efficiency on complex and fast systems, and further simulation results extend the study of its performance.
Within a partnership project, the algorithm has also served as a state estimator, reconstructing the mechanical speed of an induction machine from its electrical signals by treating the mechanical speed as the system input.
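The tabulated prediction map can be sketched for a scalar system (an illustrative reconstruction under invented grids and update rule, not the thesis' algorithm): the table is filled from the a priori model, the control step searches it for the input whose prediction is closest to the target, and the learning step nudges entries toward measured outcomes.

```python
class LearnedMapController:
    """Control law built on a tabulated one-step prediction map over the
    joint (state, input) grid, refined online from measurements."""

    def __init__(self, states, inputs, model, rate=0.5):
        self.inputs = inputs
        self.rate = rate
        # Initialise the table from the a priori analytical model.
        self.table = {(s, u): model(s, u) for s in states for u in inputs}

    def _nearest(self, s):
        return min({k[0] for k in self.table}, key=lambda g: abs(g - s))

    def control(self, state, target):
        """Pick the input whose predicted next state is closest to the target."""
        g = self._nearest(state)
        return min(self.inputs, key=lambda u: abs(self.table[(g, u)] - target))

    def learn(self, state, u, observed_next):
        """Move the stored prediction toward the measured outcome."""
        g = self._nearest(state)
        self.table[(g, u)] += self.rate * (observed_next - self.table[(g, u)])

ctl = LearnedMapController(
    states=[i / 10 for i in range(11)], inputs=[-0.1, 0.0, 0.1],
    model=lambda s, u: s + u)          # a priori model: x' = x + u
u = ctl.control(0.5, target=0.6)       # table predicts 0.6 for u = 0.1
ctl.learn(0.5, u, observed_next=0.7)   # plant moved faster than the model
```

Because the map is an explicit table, the a priori model only seeds it; repeated `learn` calls let the measured plant overwrite the theory, which is the central idea of the abstract.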
16

Djaneye-Boundjou, Ouboti Seydou Eyanaa. "Particle Swarm Optimization Stability Analysis." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1386413941.

17

Doležal, Tomáš. "Rekonstrukce tvaru objektu založená na odezvě max(t,0)-pulsu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413154.

Abstract:
This diploma thesis deals with spatial imaging of targets using time-domain radar responses to the max(t,0) pulse. The problem is formulated for both perfectly electrically conductive and dielectric objects. The main aims of the thesis include a code implementation that calculates the profile functions of an unknown object from the mentioned time responses, and code for the subsequent reconstruction of the object in the MATLAB environment. A graphical user interface was created for testing purposes. The 3D probability function technique was used for the final reconstruction. The implemented technique achieves interesting results, which are presented in the final part of this thesis.
18

Pomportes-Castagnet, Laura. "Influence de stratégies nutritionnelles sur le fonctionnent cognitif au cours d’une sollicitation physiologique." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4057/document.

Abstract:
In sport and exercise activities, successful performance strongly depends on the ability to simultaneously carry out cognitive and physical demands. More precisely, it would seem that performance is frequently influenced by the efficacy of decision-making carried out under strong temporal pressure. The aim of this thesis work is to assess the effect of three nutritional supplements, namely carbohydrate, caffeine and guarana, on cognitive function during acute exercise. Overall, our results suggest that ingestion of these three supplements enhances the speed of information processing during a decision-making task at the end of exercise. Additionally, caffeine mouth rinsing seems worthwhile, since a likely enhancement of inhibition processes has been reported after its use during exercise. Finally, a decrease of perceived exertion has been reported with caffeine and guarana ingestion, along with carbohydrate mouth rinsing. In conclusion, our results indicate a potentiation of exercise effects on cognitive function. Furthermore, they suggest that nutritional supplements could enhance cognitive processes during exercise, which may be a predictive factor of performance enhancement.
19

Goubet, Étienne. "Contrôle non destructif par analyse supervisée d'images 3D ultrasonores." Cachan, Ecole normale supérieure, 1999. http://www.theses.fr/1999DENS0011.

Full text
Abstract:
This thesis develops a processing chain for extracting the useful information from 3D ultrasonic data and characterizing any defects present in the inspected part. Characterization is addressed for cracks inspected with a single transmitter/receiver. The first part reviews the principles of ultrasonic non-destructive testing and the classical representations of ultrasonic data. The second part studies a model for extracting echo information from the data by means of an adapted wavelet basis. Using a single wavelet translated in time is made possible by working on a complex representation of the original real data. A first step detects and positions the echoes of significant amplitude. A second step performs a spatially coherent regularization of the detection times using a Markovian model, eliminating echoes whose detection times do not belong to regular time surfaces. The following parts deal with the localization and sizing of cracks. Characteristics extracted from the ultrasonic beam are used to determine the path of the ultrasonic wave from the sensor to the diffracting object when the echo response is maximal. The detection time obtained for this echo is matched with the travel time along the defined path in order to position an edge point in the part, yielding a set of discretization points for each edge. For 3D data obtained on an isotropic material, extreme edge points are eliminated using a comparison criterion on the echodynamic curves associated with the detection points in the real data and in equivalent simulated data.
Localization is addressed for cracks located in an isotropic material or in steel with an anisotropic cladding.
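The complex-representation step described above (building a complex signal from real ultrasonic data so that echo amplitude and arrival time can be read off an envelope) can be illustrated with an FFT-based analytic signal. This is a minimal sketch, not the thesis's wavelet/Markovian pipeline, and the synthetic A-scan parameters are invented for the example:

```python
import numpy as np

def analytic_signal(x):
    """Complex (analytic) representation of a real signal via the FFT:
    zero the negative frequencies and double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def detect_echoes(x, threshold):
    """Crude echo detector: local maxima of the envelope above a threshold."""
    env = np.abs(analytic_signal(x))
    peaks = [i for i in range(1, len(env) - 1)
             if env[i] > threshold and env[i] >= env[i - 1] and env[i] > env[i + 1]]
    return peaks, env

# Synthetic A-scan: a Gaussian-windowed tone burst centered at sample 300.
t = np.arange(1024)
scan = np.exp(-((t - 300.0) / 20.0) ** 2) * np.cos(0.4 * t)
peaks, env = detect_echoes(scan, threshold=0.5)
```

The envelope peaks near sample 300 regardless of the carrier phase, which is the practical benefit of working on the complex representation rather than the raw oscillating trace.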
APA, Harvard, Vancouver, ISO, and other styles
20

Castano, Antoine. "Methode d'analyse des cotes de fabrication." Paris 6, 1988. http://www.theses.fr/1988PA066123.

Full text
Abstract:
A model defining the distance between two surfaces by a random function. This model makes it possible to analyse the behaviour of the part in its machining fixture. It can exploit data from three-dimensional measuring machines as well as statistical inspections. A general method for determining manufacturing dimensions is deduced, with applications to production engineering, computer-aided manufacturing, flexible manufacturing systems and industrial computing.
APA, Harvard, Vancouver, ISO, and other styles
21

Lunot, Vincent. "Techniques d'approximation rationnelle en synthèse fréquentielle : problème de Zolotarev et algorithme de Schur." Phd thesis, Université de Provence - Aix-Marseille I, 2008. http://tel.archives-ouvertes.fr/tel-00711860.

Full text
Abstract:
This thesis presents optimization and rational-approximation techniques with applications to the synthesis and identification of passive systems. The first part describes a Zolotarev problem: maximizing, over a family of intervals, the infimum of the modulus of a rational function of given degree, while constraining its modulus not to exceed 1 on another family of intervals. We first study the existence and characterization of solutions to such a problem. Two algorithms, of Remes and differential-correction type, are then presented and analyzed. The link with the synthesis of microwave filters is detailed: the theory enables the computation of multiband or single-band filtering functions meeting a prescribed specification template, and it was applied to the design of several multiband microwave filters whose theoretical responses and measurements are given. The second part concerns Schur rational approximation of a Schur function, i.e. a function analytic in the unit disk with modulus bounded by 1. We first study the multipoint Schur algorithm, which provides a parametrization of strictly Schur functions. The link with orthogonal rational functions, obtained through a Geronimus-type theorem, is then presented. This allows certain approximation properties to be established in the little-studied case where the interpolation points tend to the boundary of the disk. In particular, convergence in the Poincaré metric is obtained thanks to an extension of a Szego-type theorem. A numerical study of Schur rational approximation at fixed degree is also carried out.
APA, Harvard, Vancouver, ISO, and other styles
22

Vaiter, Samuel. "Régularisations de Faible Complexité pour les Problèmes Inverses." Phd thesis, Université Paris Dauphine - Paris IX, 2014. http://tel.archives-ouvertes.fr/tel-01026398.

Full text
Abstract:
This thesis is devoted to recovery guarantees and sensitivity analysis of variational regularization for noisy linear inverse problems. The setting is a convex optimization problem combining a data-fidelity term and a regularization term promoting solutions living in a so-called low-complexity space. Our approach, based on the notion of partly smooth functions, covers a large variety of regularizers, such as analysis-type or structured sparsity, anti-sparsity and low-rank structure. We first analyse robustness to noise, both in terms of the distance between the solutions and the original object and of the stability of the promoted model space. We then study the stability of these optimization problems under perturbations of the observations. Finally, from random observations we construct an unbiased estimator of the risk in order to obtain a parameter-selection scheme.
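The "data fidelity plus low-complexity regularizer" problem described above can be illustrated with its simplest instance, ℓ1 (sparsity) regularization solved by iterative soft-thresholding (ISTA). This is a sketch with made-up problem sizes, not the thesis's general partly-smooth framework:

```python
import numpy as np

def ista(Phi, y, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Phi x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = Phi.T @ (Phi @ x - y)            # gradient of the data-fidelity term
        z = x - g / L                         # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Noisy linear observations of a sparse (low-complexity) object.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
y = Phi @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(Phi, y, lam=0.02)
```

With enough measurements relative to the sparsity level, the recovered support matches that of the original object, which is exactly the kind of model-space stability the thesis studies in far greater generality.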
APA, Harvard, Vancouver, ISO, and other styles
23

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. 
For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
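The filtering-versus-smoothing contrast above can be sketched with a causal Kalman filter and a non-causal Rauch-Tung-Striebel smoother on a toy constant-velocity target. All model parameters here are illustrative, not taken from the thesis:

```python
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """Causal (forward) Kalman filter; also returns the one-step
    predictions needed by the smoother's backward pass."""
    xs, Ps, xps, Pps = [], [], [], []
    x, P = x0, P0
    for z in zs:
        xp, Pp = F @ x, F @ P @ F.T + Q                 # predict
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)  # Kalman gain
        x = xp + K @ (z - H @ xp)                        # measurement update
        P = (np.eye(len(x0)) - K @ H) @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    return xs, Ps, xps, Pps

def rts_smoother(xs, Ps, xps, Pps, F):
    """Non-causal Rauch-Tung-Striebel smoother (backward pass)."""
    xs_s, Ps_s = [xs[-1]], [Ps[-1]]
    for k in range(len(xs) - 2, -1, -1):
        G = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        xs_s.insert(0, xs[k] + G @ (xs_s[0] - xps[k + 1]))
        Ps_s.insert(0, Ps[k] + G @ (Ps_s[0] - Pps[k + 1]) @ G.T)
    return xs_s, Ps_s

# Constant-velocity target with noisy position-only measurements.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])
rng = np.random.default_rng(1)
truth = np.array([[k * dt, 1.0] for k in range(100)])      # [pos, vel]
zs = [np.array([p + 0.5 * rng.standard_normal()]) for p, _ in truth]
xs, Ps, xps, Pps = kalman_filter(zs, F, H, Q, R, np.zeros(2), np.eye(2))
xs_s, _ = rts_smoother(xs, Ps, xps, Pps, F)
```

Because the smoother conditions every estimate on the whole recorded sequence, its errors are at most those of the filter, which is what makes smoothed states usable as reference data for validating causal trackers.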
APA, Harvard, Vancouver, ISO, and other styles
24

(20390), Baolin Wu. "Fuzzy modelling and identification with genetic algorithms based learning." Thesis, 1996. https://figshare.com/articles/thesis/Fuzzy_modelling_and_identification_with_genetic_algorithms_based_learning/21345057.

Full text
Abstract:

Modelling is an essential step towards a solution to complex system problems. Traditional mathematical methods become inadequate for describing complex systems as their complexity increases. Fuzzy logic provides an alternative way of dealing with complexity in the real world.

This thesis looks at a practical approach for complex system modelling using fuzzy logic, usually called fuzzy modelling. The main aim of this thesis is to explore the capabilities of fuzzy logic in complex system modelling using available data. The fuzzy model concerned is the Takagi-Sugeno-Kang model (TSK model). A genetic algorithm based learning algorithm (GABL) is proposed for fuzzy modelling. It contains four blocks, namely the partition, GA, tuning and termination blocks. The functioning of each block is described and the proposed algorithm is tested using a number of examples from different applications such as function approximation and processing control.
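A first-order TSK model of the kind mentioned above can be sketched as follows. The rules here are hand-set for illustration rather than learned by the GABL, and the target function is an invented example:

```python
import numpy as np

def tri_mf(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def tsk_eval(x, rules):
    """First-order TSK inference: each rule ((a, b, c), (p, q)) reads
    IF x is tri(a, b, c) THEN y = p*x + q; the output is the
    firing-strength-weighted average of the rule consequents."""
    w = np.array([tri_mf(x, *mf) for mf, _ in rules])
    y = np.array([p * x + q for _, (p, q) in rules])
    return float(w @ y / (w.sum() + 1e-12))

# Two hand-set rules roughly approximating y = x**2 on [0, 2].
rules = [((-1.0, 0.0, 2.0), (0.5, 0.0)),   # near x = 0: shallow line
         ((0.0, 2.0, 3.0), (3.0, -2.0))]   # near x = 2: steep line
```

In the thesis's setting, the membership-function parameters `(a, b, c)` and consequent parameters `(p, q)` are exactly the quantities a GA-based learner would encode and tune against data.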

APA, Harvard, Vancouver, ISO, and other styles
25

Lai, Jiun-Jao, and 賴俊兆. "Bilinear System Control Using Function Approximation Technique." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/25167568604284579800.

Full text
Abstract:
Master's
國立臺灣大學
生物產業機電工程學研究所
97
This study presents a new adaptive controller based on the function approximation technique for uncertain nonhomogeneous bilinear systems containing time-varying uncertainties with unknown bounds. Conventional robust strategies require the variation bounds of at least some of the uncertainties, and the time-varying nature of these uncertainties rules out traditional adaptive schemes. This study solves the stabilization problem by using FAT (Function Approximation Technique) to approximate the time-varying nonlinearity with unknown bounds, and bounded-error performance is obtained. Meanwhile, the singularity problem of the control input can be overcome if the bound of the input gain matrix is available. The proposed approach is based on a Lyapunov-like function with rigorous derivation. A computer simulation result is provided to verify the performance of the proposed method.
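The FAT idea above, representing a time-varying uncertainty in a finite basis and updating the coefficients with a Lyapunov-derived law, can be sketched on a scalar (not bilinear) toy plant. The basis, gains, and "unknown" function are all invented for the example:

```python
import numpy as np

def simulate(T=50.0, dt=1e-3, k=5.0, gamma=10.0):
    """Regulate x' = f(t) + u, where f(t) is 'unknown' but representable in a
    finite time-dependent basis phi(t). With u = -k*x - w_hat . phi and the
    Lyapunov-derived update law w_hat' = gamma * x * phi, the candidate
    V = x**2/2 + |w - w_hat|**2/(2*gamma) satisfies V' = -k*x**2."""
    w_true = np.array([1.0, 0.5, -0.3])     # invented "unknown" coefficients
    x, w_hat = 1.0, np.zeros(3)
    for i in range(int(T / dt)):
        t = i * dt
        phi = np.array([1.0, np.sin(2.0 * t), np.cos(2.0 * t)])
        f = w_true @ phi                    # the time-varying uncertainty
        u = -k * x - w_hat @ phi            # FAT-based adaptive control
        x += dt * (f + u)                   # Euler integration
        w_hat += dt * gamma * x * phi       # coefficient update law
    return x, w_hat

x_final, w_hat = simulate()
```

Note that no bound on `f` is used anywhere; the controller only assumes the uncertainty is representable in the chosen finite basis, which is the essence of the FAT approach.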
APA, Harvard, Vancouver, ISO, and other styles
26

Cheng, Hung-Yeh, and 鄭鴻業. "TCMAC for Function Approximation and Motor Control." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/u58t27.

Full text
Abstract:
Master's
中原大學
電機工程研究所
91
The purpose of this thesis is to investigate the structure of a cerebellar model articulation controller with triangular membership functions (TCMAC), where the TCMAC is used to assist the original proportional control system. The proposed TCMAC adopts regular triangular membership functions: the rectangular functions of the conventional CMAC are replaced with fuzzy membership functions to smooth the network output and obtain a better solution. Because the conventional CMAC distributes its error evenly over every mapped memory cell, it easily converges to a locally optimal solution and its output is not as smooth as the target function. To improve the CMAC's approximation ability and to simplify the TCMAC structure, we adopt the membership functions and employ a single-input structure instead of a two-input structure, which greatly reduces the memory requirement of the conventional CMAC and shortens the design procedure. The proposed algorithm was tested in simulation, and the results show that the TCMAC learns faster than the conventional CMAC. Finally, simulation of a motor system controlled by a P controller together with the TCMAC shows that the proposed structure reduces the transient response time and shortens the settling time, demonstrating its feasibility and the improvement in control performance.
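The triangular-receptive-field idea can be sketched as a one-input TCMAC-like network trained with a normalized LMS rule. Cell count, field width, learning rate, and the target function are illustrative assumptions, not the thesis's design:

```python
import numpy as np

def tri_basis(x, centers, width):
    """Overlapping triangular receptive fields: the TCMAC idea of replacing
    the conventional CMAC's rectangular fields with triangles, so the
    network output varies smoothly with the input."""
    return np.maximum(1.0 - np.abs(x - centers) / width, 0.0)

class TCMAC:
    def __init__(self, n_cells=30, lo=0.0, hi=2 * np.pi, width_cells=3.0):
        self.centers = np.linspace(lo, hi, n_cells)
        self.width = width_cells * (hi - lo) / (n_cells - 1)
        self.w = np.zeros(n_cells)

    def _phi(self, x):
        phi = tri_basis(x, self.centers, self.width)
        return phi / phi.sum()                    # normalized activation

    def predict(self, x):
        return float(self._phi(x) @ self.w)

    def train(self, x, y, lr=0.5):
        phi = self._phi(x)
        e = y - phi @ self.w                      # local output error
        self.w += lr * e * phi / (phi @ phi)      # normalized LMS update

net = TCMAC()
rng = np.random.default_rng(2)
for _ in range(2000):
    x = rng.uniform(0.0, 2 * np.pi)
    net.train(x, np.sin(x))
```

Only the handful of cells whose triangular fields overlap the input are updated on each sample, which is the CMAC-style local learning that makes these networks fast to train.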
APA, Harvard, Vancouver, ISO, and other styles
27

Chie, Ming Chih, and 簡銘志. "Adaptive impedance control of robot manipulators based on function approximation technique." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/64798792048025023144.

Full text
Abstract:
Master's
國立臺灣科技大學
機械工程系
91
This thesis presents an adaptive impedance control scheme for an n-link constrained rigid robot manipulator without using the regressor. In addition, inversion of the estimated inertia matrix is avoided and the new design is free from end-point acceleration measurements. All of the matrices in the robot model are assumed to be unavailable. Since these matrices are time-varying and their variation bounds are not given, traditional adaptive or robust designs do not apply. The function approximation technique is used here to represent the uncertainties as finite linear combinations of an orthogonal basis. The dynamics of the output tracking can thus be proved to be a stable first order filter driven by function approximation errors. Using the Lyapunov stability theory, a set of update laws is derived to give closed loop stability with proper tracking performance. A 2 DOF planar robot with an environment constraint is used in the computer simulations to test the efficacy of the proposed scheme.
APA, Harvard, Vancouver, ISO, and other styles
28

Lee, Chih-Hsuan, and 李志軒. "Adaptive Control for Surge of Centrifugal Compressors - A Function Approximation Approach." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/25384703343056586085.

Full text
Abstract:
Master's
淡江大學
航空太空工程學系碩士班
97
Compressor flow instabilities such as surge are the main instability phenomena in the operation of jet engines. Surge reduces performance through energy loss and causes structural damage through the vibration it induces. In this study, we use drive-torque actuation for active surge control of a centrifugal compressor, with the controller designed by the function approximation approach. The proposed method is simulated on a compressor model of a real compression system.
APA, Harvard, Vancouver, ISO, and other styles
29

Chang, Lee-Shang, and 張力祥. "Differentiable Cerebellar Model Articulation Controller for Function Approximation and Motor Control." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/65829777330659379999.

Full text
Abstract:
Master's
中原大學
電機工程研究所
91
The cerebellar model articulation controller (CMAC) with rectangular functions in the receptive field has the property of evenly weighted access for storing and updating the data of its memory units, but the output of this kind of CMAC is stairwise and not differentiable. To improve this disadvantage, the differentiable CMAC (DCMAC), which uses Gaussian functions instead, effectively makes the output differentiable. This thesis shows that the performance of a DCMAC is better than that of a CMAC. Furthermore, two DCMACs with modified Gaussian weighting functions are proposed: one with a "blunt" Gaussian function, called the blunt DCMAC, and one with an "aculeate" (sharp) Gaussian function, called the aculeate DCMAC. The simulation results show that the blunt DCMAC works best in function approximation, and the aculeate DCMAC in control performance.
APA, Harvard, Vancouver, ISO, and other styles
30

Lee, Shih-Chiang, and 李世強. "An Adaptive Control for Rotating Stall and Surge of Jet Engines - A Function Approximation Approach." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/01437820233718229456.

Full text
Abstract:
Master's
淡江大學
航空太空工程學系
93
Compressor instabilities such as surge and rotating stall are highly unwanted phenomena in the operation of jet engines, because these two instabilities reduce performance and cause damage to aircraft engines. In this study, we design a model reference adaptive controller based on the function approximation technique to stabilize these two instabilities. Under this scheme, the controller parameters are neither restricted to be constant nor required to have bounds available a priori. The functions of the controller parameters are assumed to be piecewise continuous and to satisfy the Dirichlet conditions. Furthermore, by expressing these controller parameters in a finite-term Fourier series, they can be estimated by updating the Fourier coefficients. A Lyapunov stability approach provides the update laws for the estimation of those time-invariant coefficients and guarantees convergence of the output error. Therefore, the adaptive controller requires less model information and maintains consistent performance when some controller parameters are disturbed.
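The finite-term Fourier-series parameterization above rests on the fact that a piecewise-continuous signal satisfying the Dirichlet conditions is well approximated by a few harmonics. A small least-squares check on a triangle wave (an invented example signal, not one of the thesis's controller parameters) illustrates this:

```python
import numpy as np

def fourier_rms_error(f, T, n_terms, n_samples=2000):
    """Least-squares fit of a finite-term Fourier series on [0, T) and the
    RMS residual, as a check of how few harmonics a piecewise-continuous
    parameter actually needs."""
    t = np.linspace(0.0, T, n_samples, endpoint=False)
    cols = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        cols.append(np.sin(2.0 * np.pi * k * t / T))
        cols.append(np.cos(2.0 * np.pi * k * t / T))
    A = np.stack(cols, axis=1)                    # design matrix of harmonics
    coef, *_ = np.linalg.lstsq(A, f(t), rcond=None)
    r = f(t) - A @ coef
    return float(np.sqrt(np.mean(r ** 2)))

# A triangle wave: piecewise continuous and Dirichlet-satisfying.
tri_wave = lambda t: np.abs((t % 1.0) - 0.5)
err_2 = fourier_rms_error(tri_wave, 1.0, 2)
err_8 = fourier_rms_error(tri_wave, 1.0, 8)
```

The residual shrinks rapidly with the number of harmonics, which is why estimating a small, fixed set of Fourier coefficients (as the update laws in the thesis do) can track a time-varying controller parameter.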
APA, Harvard, Vancouver, ISO, and other styles
31

Chen, Po-Chang, and 陳柏璋. "Function Approximation Technique Based Adaptive Control Design for Uncertain Non-autonomous Systems with Applications to Hydraulic Active Suspension Systems." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/58932194922584700159.

Full text
Abstract:
Doctoral
國立臺灣科技大學
機械工程系
93
This dissertation addresses the problem of controlling four classes of non-autonomous systems containing general uncertainties (i.e., unknown time-varying nonlinearities without available variation bounds) with application to the design of hydraulic active suspension systems (ASS). Owing to the presence of the general uncertainties in the system dynamics, traditional adaptive schemes or robust control strategies are not applicable. To deal with this problem, this dissertation proposes several stable adaptive controllers based on function approximation technique (FAT), some of which are then applied to the ASS. Therefore, this dissertation is organized into two parts: Part 1 develops the FAT based adaptive control theory and Part 2 presents the control of ASS using FAT based adaptive controllers. In Part 1, we first derive a FAT based adaptive tracking controller for a class of matched single-input single-output (SISO) non-autonomous systems with general uncertainties. The FAT is used to construct approximation models for uncertainties, so that their update laws can be chosen from the conventional Lyapunov-like design to ensure the closed-loop stability. Different from the approaches using state-dependent approximation models (SDAM), such as neural networks or fuzzy systems, here we employ time-dependent orthogonal basis functions to be the approximator. By utilizing the proposed controller, uniform ultimate boundedness of the closed-loop system is achieved along with guaranteed transient-state performance. Afterwards, we consider a class of uncertain SISO systems in “perturbed chain-of-integrator” form. Since the structure of the system is much more complicated than that in strict-feedback or pure-feedback form, most of SDAM based backstepping designs are infeasible. 
To deal with the problem, this dissertation proposes an adaptive multiple-surface tracking (AMST) controller, where the multiple-surface design is used to cope with the uncertainty mismatch problem and the uncertainties in each error surface are still tackled by the FAT based function estimators (consisting of time-dependent approximators with proper update laws). Because of the state independence of the approximators, the real control input will not appear until the last step of the derivation comes up. Hence, the pure-feedback restriction can be completely removed. Uniformly ultimately bounded performance is still obtained by means of the AMST design. In addition, an explicit upper bound for the error signals of each error surface is acquired with adjustable size to avoid an unwanted peaking phenomenon. The result of the adaptive tracking controller (in the matched SISO case) is then extended to a matched MIMO square system whose subsystems are of different block sizes. By utilizing the extended controller to cope with the matched uncertainties in each and every subsystem, a performance result similar to the SISO counterpart is guaranteed. At the end of Part 1, the AMST method is combined with the extended adaptive tracking controller to form a systematic design procedure for a mismatched MIMO square system composed of several perturbed chain-of-integrator subsystems with full interconnection among each other. This system is beyond the assumption of "block-triangular" form and the combination provides an effective tool to deal with its control problem. The closed-loop system with the extended AMST controller can be shown to be uniformly ultimately bounded and the transient performance can also be ensured. In spite of the above development, an important issue to be considered in implementing the control laws is the "singularity problem". 
To avoid this problem, in the proposed designs, the input-channel uncertainties are suppressed by employing a robust control term with the additional assumption that the bounds of the input-channel uncertainties (or input-channel uncertainty matrix in MIMO cases) are available. Modifications for the term to be feasible in either SISO or MIMO case are made in this dissertation with statements of the required conditions. In Part 2, we proceed to investigate the control of a non-autonomous quarter-car ASS with uncertain passive components, unknown car-body loads, and external disturbances on the car-body part. Since most of these uncertainties are of unknown bounds and some of them possess a time-varying nature, the ASS designs based on traditional adaptive schemes or robust strategies are infeasible. The FAT based adaptive tracking controller is thus applied to deal with the uncertain car-body dynamics, so that the car-body motion can converge to prescribed desired trajectories. Afterwards, a nonlinear filter is introduced into the control loop to generate the car-body desired trajectory, which is able to switch the objective between ride comfort and suspension travel according to the current suspension deflection. To realize the control force, a hydraulic actuator is employed with consideration of its uncertain dynamics. The FAT based adaptive tracking controller is again used to achieve the force tracking of the actuator. Then, the actuator model is combined with the uncertain ASS to form a full-blown model of the quarter-car hydraulic ASS. In order to cope with both the matched (in the actuator part) and the mismatched (in the suspension part) uncertainties, an AMST controller is derived. The objective is the same as in the case without the actuator: to accomplish the tracking control of the car-body motion with incorporation of the nonlinear filter. 
The closed-loop stability (including the internal dynamics) is ensured via the Lyapunov analysis and computer simulations are performed to verify the effectiveness of the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
32

Tsai, Chun-Fu, and 蔡竣富. "The Research of Function Processing Select Program to Improve the Efficiency by Designing and Developing of ASP.NET Chart Control Technology." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/26952210223839584974.

Full text
Abstract:
Master's
國立高雄應用科技大學
電機工程系博碩士班
101
Taiwan Power Company classifies users into ten contract categories and thirty-four industry categories according to the electricity-use characteristics of each user. The load synthesis of users affects how Taiwan Power Company formulates tariff rates, and the load synthesis of user groups can be analyzed through the load-characteristics investigation procedure for each group. The load-survey data are very large, and their management and efficient use affect the load-composition analysis. Besides, in order to reduce operational unfamiliarity caused by personnel changes at Taiwan Power Company, it is clearly important to design the user-group load-composition analysis platform through the establishment of a standard program. This study uses object-oriented concepts to design this analysis platform and integrates web-based database techniques to make the platform user-friendly. The study illustrates how to establish a function processing select program using object-oriented technology, how to establish the platform, and how to integrate the related databases. With this platform, users can perform load-composition analysis quickly, which promotes the credibility and reliability of Taiwan Power Company's tariff rates. Keywords: function processing select program, load survey, load synthesis, network database
APA, Harvard, Vancouver, ISO, and other styles
33

Zaspel, Joachim C. "Automating pilot function performance assessment using fuzzy systems and a genetic algorithm." Thesis, 1997. http://hdl.handle.net/1957/33694.

Full text
Abstract:
Modern civil commercial transport aircraft provide the means for the safest of all forms of transportation. While advanced computer technology ranging from flight management computers to warning and alerting devices contributed to flight safety significantly, it is undisputed that the flightcrew represents the most frequent primary cause factor in airline accidents. From a system perspective, machine actors such as the autopilot and human actors (the flightcrew) try to achieve goals (desired states of the aircraft). The set of activities to achieve a goal is called a function. In modern flightdecks both machine actors and human actors perform functions. Recent accident studies suggest that deficiencies in the flightcrew's ability to monitor how well either machines or themselves perform a function are a factor in many accidents and incidents. As humans are inherently bad monitors, this study proposes a method to automatically assess the status of a function in order to increase flight safety as part of an intelligent pilot aid, called the AgendaManager. The method was implemented for the capture altitude function: seeking to attain and maintain a target altitude. Fuzzy systems were used to compute outputs indicating how well the capture altitude function was performed from inputs describing the state of the aircraft. In order to conform to human expert assessments, the fuzzy systems were trained using a genetic algorithm (GA) whose objective was to minimize the discrepancy between system outputs and human expert assessments based on 72 scenarios. The resulting systems were validated by analyzing how well they conformed to new data drawn from another 32 scenarios. The results of the study indicated that even though the training procedure facilitated by the GA was able to improve conformance to human expert assessments, overall the systems performed too poorly to be deployed in a real environment. 
Nevertheless, experience and insights gained from the study will be valuable in the development of future automated systems to perform function assessment.
Graduation date: 1998
APA, Harvard, Vancouver, ISO, and other styles
34

"Investigating the Influence of Top-Down Mechanisms on Hemispheric Asymmetries in Verbal Memory." Doctoral diss., 2013. http://hdl.handle.net/2286/R.I.18703.

Full text
Abstract:
It is commonly known that the left hemisphere of the brain is more efficient in the processing of verbal information, compared to the right hemisphere. One proposal suggests that hemispheric asymmetries in verbal processing are due in part to the efficient use of top-down mechanisms by the left hemisphere. Most evidence for this comes from hemispheric semantic priming, though fewer studies have investigated verbal memory in the cerebral hemispheres. The goal of the current investigations is to examine how top-down mechanisms influence hemispheric asymmetries in verbal memory, and determine the specific nature of hypothesized top-down mechanisms. Five experiments were conducted to explore the influence of top-down mechanisms on hemispheric asymmetries in verbal memory. Experiments 1 and 2 used item-method directed forgetting to examine maintenance and inhibition mechanisms. In Experiment 1, participants were cued to remember or forget certain words, and cues were presented simultaneously or after the presentation of target words. In Experiment 2, participants were cued again to remember or forget words, but each word was repeated once or four times. Experiments 3 and 4 examined the influence of cognitive load on hemispheric asymmetries in true and false memory. In Experiment 3, cognitive load was imposed during memory encoding, while in Experiment 4, cognitive load was imposed during memory retrieval. Finally, Experiment 5 investigated the association between controlled processing in hemispheric semantic priming, and top-down mechanisms used for hemispheric verbal memory. Across all experiments, divided visual field presentation was used to probe verbal memory in the cerebral hemispheres. Results from all experiments revealed several important findings. First, top-down mechanisms used by the LH primarily facilitate verbal processing, but also operate in a domain-general manner in the face of increasing processing demands. 
Second, evidence indicates that the RH uses top-down mechanisms minimally, and processes verbal information in a more bottom-up manner. These data help clarify the nature of top-down mechanisms used in hemispheric memory and language processing, and build upon current theories that attempt to explain hemispheric asymmetries in language processing.
Dissertation/Thesis
Ph.D. Speech and Hearing Science 2013
APA, Harvard, Vancouver, ISO, and other styles
35

Krishnanand, K. N. "Glowworm Swarm Optimization : A Multimodal Function Optimization Paradigm With Applications To Multiple Signal Source Localization Tasks." Thesis, 2007. http://hdl.handle.net/2005/480.

Full text
Abstract:
Multimodal function optimization generally focuses on algorithms to find either a local optimum or the global optimum while avoiding local optima. However, there is another class of optimization problems which have the objective of finding multiple optima with either equal or unequal function values. The knowledge of multiple local and global optima has several advantages such as obtaining an insight into the function landscape and selecting an alternative solution when dynamic nature of constraints in the search space makes a previous optimum solution infeasible to implement. Applications include identification of multiple signal sources like sound, heat, light and leaks in pressurized systems, hazardous plumes/aerosols resulting from nuclear/ chemical spills, fire-origins in forest fires and hazardous chemical discharge in water bodies, oil spills, deep-sea hydrothermal vent plumes, etc. Signals such as sound, light, and other electromagnetic radiations propagate in the form of a wave. Therefore, the nominal source profile that spreads in the environment can be represented as a multimodal function and hence, the problem of localizing their respective origins can be modeled as optimization of multimodal functions. Multimodality in a search and optimization problem gives rise to several attractors and thereby presents a challenge to any optimization algorithm in terms of finding global optimum solutions. However, the problem is compounded when multiple (global and local) optima are sought. This thesis develops a novel glowworm swarm optimization (GSO) algorithm for simultaneous capture of multiple optima of multimodal functions. The algorithm shares some features with the ant-colony optimization (ACO) and particle swarm optimization (PSO) algorithms, but with several significant differences. The agents in the GSO algorithm are thought of as glowworms that carry a luminescence quantity called luciferin along with them. 
The glowworms encode the function-profile values at their current locations into a luciferin value and broadcast it to other agents in their neighborhood. Each glowworm depends on a variable local decision domain, which is bounded above by a circular sensor range, to identify its neighbors and compute its movements. Each glowworm selects, using a probabilistic mechanism, a neighbor that has a luciferin value higher than its own, and moves toward it. That is, glowworms are attracted to neighbors that glow brighter. These movements, based only on local information, enable the swarm of glowworms to partition into disjoint subgroups, exhibit simultaneous taxis-behavior towards, and rendezvous at, multiple optima (not necessarily equal) of a given multimodal function. Natural glowworms primarily use bioluminescent light to signal other individuals of the same species for reproduction and to attract prey. The general idea in the GSO algorithm is similar in these respects, in the sense that glowworm agents are assumed to be attracted to move toward other glowworm agents that have brighter luminescence (higher luciferin value). We present the development of the GSO algorithm in terms of its working principle, various algorithmic phases, and the evolution of the algorithm from its first version to its present form. Two major phases, splitting of the agent swarm into disjoint subgroups and local convergence of agents in each subgroup to peak locations, are identified at the group level of the algorithm, and theoretical performance results related to the latter phase are obtained for a simplified GSO model. Performance of the GSO algorithm against a large class of benchmark multimodal functions is demonstrated through simulation experiments. We categorize the various constants of the algorithm into algorithmic constants and parameters. 
We show in simulations that fixed values of the algorithmic constants work well for a large class of problems and that only two parameters have some influence on algorithmic performance. We also study the performance of the algorithm in the presence of noise. Simulations show that the algorithm performs well in the presence of fairly high noise levels, and degrades gracefully only with a significant increase in measurement noise. A comparison with a gradient-based algorithm reveals the superiority of the GSO algorithm in coping with uncertainty. We conduct embodied robot simulations, using a multi-robot simulator called Player/Stage that provides realistic sensor and actuator models, in order to assess the GSO algorithm's suitability for multiple source localization tasks. Next, we extend this work to collective robotics experiments. For this purpose, we use a set of four wheeled robots that are endowed with the capabilities required to implement the various behavioral primitives of the GSO algorithm. We present an experiment where two robots use the GSO algorithm to localize a light source. We discuss an application of GSO to ubiquitous computing based environments. In particular, we propose a hazard-sensing environment using a heterogeneous swarm that consists of stationary agents and mobile agents. The agents deployed in the environment implement a modification of the GSO algorithm. In a graph of the minimum number of mobile agents required for 100% source-capture as a function of the number of stationary agents, we show that deployment of the stationary agents in a grid configuration leads to multiple phase-transitions in the heterogeneous swarm behavior. Finally, we use the GSO algorithm to address the problem of pursuit of multiple mobile signal sources. 
For the case where the positions of the pursuers and the moving source are collinear, we present a theoretical result that provides an upper bound on the relative speed of the mobile source below which the agents succeed in pursuing the source. We use several simulation scenarios to demonstrate the efficacy of the algorithm in pursuing mobile signal sources. For the case where the positions of the pursuers and the moving source are non-collinear, we use numerical experiments to determine an upper bound on the relative speed of the mobile source below which the pursuers succeed in pursuing the source.
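The local interaction rules described in the abstract above (luciferin decay and reinforcement, probabilistic selection of a brighter neighbour, fixed-size moves, and an adaptive local decision range) can be illustrated in code. This is a minimal sketch: the function name `gso`, the constants, and the update schedules are illustrative stand-ins, not the thesis's exact formulation.

```python
import numpy as np

def gso(J, bounds, n_agents=50, iters=200, rho=0.4, gamma=0.6,
        s=0.03, beta=0.08, nt=5, rs=3.0, ell0=5.0, seed=0):
    """Sketch of glowworm swarm optimization for maximizing J over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    X = rng.uniform(lo, hi, size=(n_agents, lo.size))   # agent positions
    ell = np.full(n_agents, ell0)                       # luciferin levels
    rd = np.full(n_agents, rs)                          # local decision ranges
    for _ in range(iters):
        # luciferin update: decay plus a term encoding the fitness profile
        ell = (1.0 - rho) * ell + gamma * np.array([J(x) for x in X])
        Xn = X.copy()
        for i in range(n_agents):
            d = np.linalg.norm(X - X[i], axis=1)
            nbrs = np.where((d < rd[i]) & (ell > ell[i]))[0]
            if nbrs.size:
                # probabilistic selection of a brighter neighbour
                w = ell[nbrs] - ell[i]
                j = rng.choice(nbrs, p=w / w.sum())
                step = X[j] - X[i]
                nrm = np.linalg.norm(step)
                if nrm > 0:
                    Xn[i] = X[i] + s * step / nrm       # fixed-size move
            # adapt the decision range toward a desired neighbour count nt
            rd[i] = min(rs, max(0.0, rd[i] + beta * (nt - nbrs.size)))
        X = np.clip(Xn, lo, hi)
    return X, ell
```

On a function with two separated Gaussian peaks, the swarm typically partitions into subgroups that cluster near the two maxima, mirroring the splitting and local-convergence phases described above.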
APA, Harvard, Vancouver, ISO, and other styles
36

Abdulla, Mohammed Shahid. "Simulation Based Algorithms For Markov Decision Process And Stochastic Optimization." Thesis, 2008. http://hdl.handle.net/2005/812.

Full text
Abstract:
In Chapter 2, we propose several two-timescale simulation-based actor-critic algorithms for the solution of infinite-horizon Markov Decision Processes (MDPs) with finite state-space under the average cost criterion. On the slower timescale, all the algorithms perform a gradient search over the corresponding policy spaces using two different Simultaneous Perturbation Stochastic Approximation (SPSA) gradient estimates. On the faster timescale, the differential cost function corresponding to a given stationary policy is updated and averaged for enhanced performance. A proof of convergence to a locally optimal policy is presented. Next, a memory-efficient implementation using a feature-vector representation of the state-space and TD(0) learning along the faster timescale is discussed. A three-timescale simulation-based algorithm for the solution of infinite-horizon discounted-cost MDPs via the Value Iteration approach is also proposed. An approximation of the Dynamic Programming operator T is applied to the value function iterates. A sketch of convergence explaining the dynamics of the algorithm using associated ODEs is presented. Numerical experiments on rate-based flow control at a bottleneck node using a continuous-time queueing model are presented using the proposed algorithms. Next, in Chapter 3, we develop three simulation-based algorithms for finite-horizon MDPs (FH-MDPs). The first algorithm is developed for finite state and compact action spaces while the other two are for finite state and finite action spaces. Convergence analysis is briefly sketched. We then concentrate on methods to mitigate the curse of dimensionality that affects FH-MDPs severely, as there is one probability transition matrix per stage. Two parametrized actor-critic algorithms for FH-MDPs with compact action sets are proposed, the ‘critic’ in both algorithms learning the policy gradient. We show w.p. 1 convergence to a set satisfying the necessary conditions for constrained optima. 
Further, a third algorithm for stochastic control of stopping-time processes is presented. Numerical experiments with the proposed finite-horizon algorithms are shown for a problem of flow control in communication networks. Towards stochastic optimization, in Chapter 4, we propose five algorithms which are variants of SPSA. The original one-measurement SPSA uses an estimate of the gradient of the objective function L containing an additional bias term not seen in two-measurement SPSA. We propose a one-measurement algorithm that eliminates this bias and has asymptotic convergence properties that make for easier comparison with two-measurement SPSA. The algorithm, under certain conditions, outperforms both forms of SPSA, with the only overhead being the storage of a single measurement. We also propose a similar algorithm that uses perturbations obtained from normalized Hadamard matrices. The convergence w.p. 1 of both algorithms is established. We extend measurement reuse to design three second-order SPSA algorithms, sketch the convergence analysis, and present simulation results on an illustrative minimization problem. We then propose several stochastic approximation implementations for related algorithms in flow control of communication networks, beginning with a discrete-time implementation of Kelly’s primal flow-control algorithm. Convergence with probability 1 is shown, even in the presence of communication delays and stochastic effects seen in link congestion indications. Two relevant enhancements are then pursued: a) an implementation of the primal algorithm using second-order information, and b) an implementation where edge-routers rectify misbehaving flows. Also, discrete-time implementations of Kelly’s dual algorithm and primal-dual algorithm are proposed. Simulation results are presented a) verifying the proposed algorithms and b) comparing stability properties with an algorithm from the literature.
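For context, the classical two-measurement SPSA scheme that these variants build on can be sketched as follows. The gain schedules and constants are illustrative (they follow Spall's commonly cited guidelines, not necessarily the thesis's choices); the thesis's one-measurement and Hadamard-perturbation variants replace the gradient estimate computed below.

```python
import numpy as np

def spsa_minimize(L, theta0, iters=2000, a=0.1, A=100.0, alpha=0.602,
                  c=0.1, gamma=0.101, seed=0):
    """Classical two-measurement SPSA minimization of a noisy objective L."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for k in range(iters):
        ak = a / (k + 1 + A) ** alpha          # step-size schedule
        ck = c / (k + 1) ** gamma              # perturbation-size schedule
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Bernoulli +/-1
        # one simultaneous perturbation yields the full gradient estimate,
        # using only two measurements of L regardless of the dimension
        g = (L(theta + ck * delta) - L(theta - ck * delta)) / (2.0 * ck * delta)
        theta = theta - ak * g
    return theta
```

The appeal of SPSA in the settings above is that each gradient estimate costs a fixed number of function evaluations (two here, one in the biased variant the thesis improves upon), independent of the parameter dimension.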
APA, Harvard, Vancouver, ISO, and other styles
37

(9741149), Lintao Ye. "Algorithmic and Graph-Theoretic Approaches for Optimal Sensor Selection in Large-Scale Systems." Thesis, 2020.

Find full text
Abstract:
Using sensor measurements to estimate the states and parameters of a system is a fundamental task in understanding the behavior of the system. Moreover, as modern systems grow rapidly in scale and complexity, it is not always possible to deploy sensors to measure all of the states and parameters of the system, due to cost and physical constraints. Therefore, selecting an optimal subset of all the candidate sensors to deploy and gather measurements of the system is an important and challenging problem. In addition, the systems may be targeted by external attackers who attempt to remove or destroy the deployed sensors. This further motivates the formulation of resilient sensor selection strategies. In this thesis, we address the sensor selection problem under different settings as follows.

First, we consider the optimal sensor selection problem for linear dynamical systems with stochastic inputs, where the Kalman filter is applied based on the sensor measurements to give an estimate of the system states. The goal is to select a subset of sensors under certain budget constraints such that the trace of the steady-state error covariance of the Kalman filter with the selected sensors is minimized. We characterize the complexity of this problem by showing that the Kalman filtering sensor selection problem is NP-hard and cannot be approximated within any constant factor in polynomial time for general systems. We then consider the optimal sensor attack problem for Kalman filtering. The Kalman filtering sensor attack problem is to attack a subset of selected sensors under certain budget constraints in order to maximize the trace of the steady-state error covariance of the Kalman filter with sensors after the attack. We show that the same results as the Kalman filtering sensor selection problem also hold for the Kalman filtering sensor attack problem. Having shown that the general sensor selection and sensor attack problems for Kalman filtering are hard to solve, our next step is to consider special classes of the general problems. Specifically, we consider the underlying directed network corresponding to a linear dynamical system and investigate the case when there is a single node of the network that is affected by a stochastic input. In this setting, we show that the corresponding sensor selection and sensor attack problems for Kalman filtering can be solved in polynomial time. We further study the resilient sensor selection problem for Kalman filtering, where the problem is to find a sensor selection strategy under sensor selection budget constraints such that the trace of the steady-state error covariance of the Kalman filter is minimized after an adversary removes some of the deployed sensors. 
We show that the resilient sensor selection problem for Kalman filtering is NP-hard, and provide a pseudo-polynomial-time algorithm to solve it optimally.
Next, we consider the sensor selection problem for binary hypothesis testing. The problem is to select a subset of sensors under certain budget constraints such that a certain metric of the Neyman-Pearson (resp., Bayesian) detector corresponding to the selected sensors is optimized. We show that this problem is NP-hard if the objective is to minimize the miss probability (resp., error probability) of the Neyman-Pearson (resp., Bayesian) detector. We then consider three optimization objectives based on the Kullback-Leibler distance, J-Divergence and Bhattacharyya distance, respectively, in the hypothesis testing sensor selection problem, and provide performance bounds on greedy algorithms when applied to the sensor selection problem associated with these optimization objectives.
Moving beyond the binary hypothesis setting, we also consider the setting where the true state of the world comes from a set that can have cardinality greater than two. A Bayesian approach is then used to learn the true state of the world based on the data streams provided by the data sources. We formulate the Bayesian learning data source selection problem under this setting, where the goal is to minimize the cost spent on the data sources such that the learning error is within a certain range. We show that the Bayesian learning data source selection is also NP-hard, and provide greedy algorithms with performance guarantees.
Finally, in light of the COVID-19 pandemic, we study the parameter estimation measurement selection problem for epidemics spreading in networks. Here, the measurements (with certain costs) are collected by conducting virus and antibody tests on the individuals in the epidemic spread network. The goal of the problem is then to optimally estimate the parameters (i.e., the infection rate and the recovery rate of the virus) in the epidemic spread network, while satisfying the budget constraint on collecting the measurements. Again, we show that the measurement selection problem is NP-hard, and provide approximation algorithms with performance guarantees.
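A greedy pass of the kind analyzed above can be sketched for the Kalman filtering sensor selection objective, computing the trace of the steady-state error covariance from the dual discrete Riccati equation. Since the thesis shows the exact problem is NP-hard and inapproximable in general, this is only a heuristic; the function names and parameters below are illustrative, not the thesis's algorithms.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def steady_state_cost(A, C, Q, R):
    """Trace of the steady-state a priori Kalman error covariance for the
    sensor rows in C, obtained via the dual discrete Riccati equation."""
    P = solve_discrete_are(A.T, C.T, Q, R)
    return float(np.trace(P))

def greedy_select(A, Q, sensors, noises, budget):
    """Greedy heuristic: repeatedly add the sensor whose inclusion most
    reduces the steady-state error trace, until the budget is spent."""
    chosen, remaining = [], list(range(len(sensors)))
    while len(chosen) < budget and remaining:
        def cost_with(i):
            idx = chosen + [i]
            C = np.vstack([sensors[j] for j in idx])
            R = np.diag([noises[j] for j in idx])
            return steady_state_cost(A, C, Q, R)
        best = min(remaining, key=cost_with)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Note that adding sensors can never increase the steady-state error covariance, which is what makes greedy selection a natural baseline even when, as shown above, no constant-factor guarantee is possible in general.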
APA, Harvard, Vancouver, ISO, and other styles
38

Schomburg, Helen. "New Algorithms for Local and Global Fiber Tractography in Diffusion-Weighted Magnetic Resonance Imaging." Doctoral thesis, 2017. http://hdl.handle.net/11858/00-1735-0000-0023-3F8B-F.

Full text
APA, Harvard, Vancouver, ISO, and other styles