Theses / dissertations on the topic "Stochastic algorithms parameters identification"

Follow this link to see other types of publications on the topic: Stochastic algorithms parameters identification.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the 28 best theses / dissertations for your research on the topic "Stochastic algorithms parameters identification".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, if it is present in the metadata.

Browse theses / dissertations from many different scientific fields and compile an accurate bibliography.

1

Larsson, Erik. "Identification of stochastic continuous-time systems : algorithms, irregular sampling and Cramér-Rao bounds /". Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3944.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
2

Koenig, Guillaume. "Par vagues et marées : étude de la circulation hydrodynamique d’un lagon étroit de Nouvelle-Calédonie et identification des conditions aux bords à l’aide d’un algorithme stochastique". Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0533.

Full text of the source
Abstract:
In this thesis, I studied the hydrodynamics of the Ouano coral lagoon in New Caledonia and implemented a novel parameter identification algorithm to do so. Wave-breaking on the coral barrier and tides dominate the hydrodynamics of the Ouano lagoon, and I wanted to evaluate their relative impact on water exchange with the ocean. Several studies had been carried out in the lagoon before; I rely on both their findings on the circulation and their modeling tools, namely the CROCO (Coastal Regional Ocean COmmunity) model of C. Chevalier, and I also used data collected in the lagoon in 2016. Despite this prior work, uncertainties remained on the amount of water brought into the lagoon by the tides and by wave-breaking. In addition, the parametrization of wave-breaking, of the friction on the reef, and of the tidal boundary conditions in the numerical model was uncertain. To improve these parametrizations, or other model parameters, I implemented and tested a new tool: a stochastic parameter identification algorithm, the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm. We first tested different variants of the algorithm in controlled environments, notably with a one-dimensional turbulence model. I then used the algorithm to identify boundary conditions with a linear tidal model of the Ouano lagoon. Finally, I used the algorithm to study the impact of wave-breaking on the currents measured as tidal currents in the Ouano lagoon.
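The appeal of SPSA, named in the abstract above, is that the gradient is approximated from only two loss evaluations per iteration, regardless of the number of parameters. A minimal sketch follows; the quadratic misfit and "true" parameters are hypothetical stand-ins for the lagoon model's data misfit, not the thesis's actual setup.

```python
import random

def spsa(loss, theta, n_iter=500, a=0.2, c=0.2, seed=0):
    """Minimal SPSA: two loss evaluations per iteration, any dimension."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, n_iter + 1):
        ak = a / k                 # decaying step-size gain (simplified schedule)
        ck = c / k ** 0.25         # decaying perturbation gain
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]   # Rademacher directions
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        ghat = (loss(plus) - loss(minus)) / (2.0 * ck)     # common finite difference
        theta = [t - ak * ghat / d for t, d in zip(theta, delta)]
    return theta

target = [1.5, -0.5]  # hypothetical "true" parameters
mis = lambda p: sum((x - t) ** 2 for x, t in zip(p, target))
est = spsa(mis, [0.0, 0.0])
```

Because both perturbed evaluations share the same random direction, the cost per iteration stays constant as the parameter count grows, which is what makes SPSA attractive for expensive ocean-model calibrations.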
ABNT, Harvard, Vancouver, APA, etc. styles
3

Jenča, Pavol. "Identifikace parametrů elektrických motorů metodou podprostorů". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219678.

Full text of the source
Abstract:
The identification of electrical motor parameters is addressed in this master's thesis using subspace-based methods. Electrical motors, specifically a permanent-magnet DC motor and a permanent-magnet synchronous motor, are simulated in the Matlab/Simulink environment, and the identification is developed in Matlab. Different types of subspace algorithms are used for parameter estimation, and their results are compared with least-squares parameter estimation. The thesis describes the subspace method, the types of subspace algorithms, the motors used, a nonlinear identification approach, and a comparison of the identified parameters.
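The least-squares baseline mentioned in the abstract can be sketched on a first-order discrete motor model; the motor parameters and square-wave input below are hypothetical, and Python stands in for the Matlab environment used in the thesis.

```python
# Least-squares fit of a first-order discrete motor model
# w[k+1] = a*w[k] + b*u[k], the baseline that subspace methods are compared to.
def fit_arx(w, u):
    # Normal equations for (a, b): minimize sum (w[k+1] - a*w[k] - b*u[k])^2
    s_ww = sum(x * x for x in w[:-1])
    s_uu = sum(x * x for x in u[:len(w) - 1])
    s_wu = sum(x * y for x, y in zip(w[:-1], u))
    s_wy = sum(x * y for x, y in zip(w[:-1], w[1:]))
    s_uy = sum(x * y for x, y in zip(u, w[1:]))
    det = s_ww * s_uu - s_wu * s_wu
    a = (s_wy * s_uu - s_uy * s_wu) / det
    b = (s_uy * s_ww - s_wy * s_wu) / det
    return a, b

# Simulate a hypothetical motor (a = 0.9, b = 0.2) and recover its parameters.
a_true, b_true = 0.9, 0.2
u = [1.0 if (k // 20) % 2 == 0 else -1.0 for k in range(200)]  # square-wave input
w = [0.0]
for k in range(199):
    w.append(a_true * w[-1] + b_true * u[k])
a_hat, b_hat = fit_arx(w, u)
```

On noise-free data the fit is exact; with measurement noise the same normal equations give the best least-squares estimate, which is the benchmark the subspace estimates are judged against.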
ABNT, Harvard, Vancouver, APA, etc. styles
4

Debonos, Andreas A. "Estimation of non-linear ship roll parameters using stochastic identification techniques". Thesis, University of Sussex, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295784.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
5

Alamyal, Mohamoud Omran A. "Evaluation of stochastic optimisation algorithms for induction machine winding fault identification". Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/1937.

Full text of the source
Abstract:
This thesis is concerned with parameter identification and winding fault detection in induction motors using three different stochastic optimisation algorithms, namely the genetic algorithm (GA), tabu search (TS) and simulated annealing (SA). Although induction motors are highly reliable, require little maintenance and have relatively high efficiency, they are subject to many electrical and mechanical faults, and undetected faults can lead to serious machine failures. Fault identification is therefore essential for detecting and diagnosing potential failures in electrical motors. Conventional fault detection methods usually involve embedding sensors in the machines, which is very expensive. The condition monitoring technique proposed in this thesis flags the presence of a winding fault and provides information about its nature and location by using a stochastic optimisation algorithm in conjunction with measured time-domain voltage, stator current and rotor speed data. The technique requires a mathematical ABCabc model of the three-phase induction motor. The performance of the three stochastic search methods in identifying open-circuit faults in the stator and rotor windings of a three-phase induction motor is evaluated, and the proposed fault detection technique is validated on experimental data collected under steady-state operating conditions. Time-domain terminal voltages and the rotor speed are used as inputs to the induction motor model, while the outputs are the calculated stator currents. These calculated currents are compared with the measured currents to produce a set of current errors that are integrated and summed to give an overall error function. Fault identification is achieved by adjusting the model parameters off-line with the stochastic search method to minimise this error function.
The estimated values of the winding parameters give the best possible match between the behaviour of the faulty experimental machine and its mathematical ABCabc model, and are used to detect developing faults by identifying both the location and the nature of the winding fault. The effectiveness of the three stochastic methods in identifying stator and rotor winding faults is compared in terms of the required computational resources and their success rates in converging to a solution.
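The off-line loop described above (adjust parameters, simulate, compare currents, minimise the error function) can be sketched with simulated annealing, one of the three algorithms the thesis compares. The linear stand-in "model", its two parameters and the generated data are hypothetical, not the ABCabc motor model.

```python
import math, random

# Integrated current-error function for a hypothetical linear model
# predicted_current = p1*voltage + p2*speed.
def error_fn(p, data):
    return sum((i - (p[0] * v + p[1] * s)) ** 2 for v, s, i in data)

def anneal(f, x0, t0=1.0, cooling=0.995, steps=2000, seed=1):
    """Minimal simulated annealing: accept worse moves with decaying probability."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(steps):
        cand = [xi + rng.gauss(0.0, 0.1) for xi in x]    # random neighbour
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx                # keep best-so-far
        t *= cooling                                     # geometric cooling
    return best

# Hypothetical measured data generated with "true" parameters (0.5, -0.2).
rng = random.Random(0)
data = [(v, s, 0.5 * v - 0.2 * s)
        for v, s in ((rng.random(), rng.random()) for _ in range(50))]
p_hat = anneal(lambda p: error_fn(p, data), [0.0, 0.0])
```

The acceptance rule is what distinguishes SA from plain hill climbing: early on, high temperature lets the search escape local minima of the error surface; late on, it behaves greedily.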
ABNT, Harvard, Vancouver, APA, etc. styles
6

Zhou, Haiyan. "Stochastic Inverse Methods to Identify non-Gaussian Model Parameters in Heterogeneous Aquifers". Doctoral thesis, Universitat Politècnica de València, 2011. http://hdl.handle.net/10251/12267.

Full text of the source
Abstract:
Numerical modeling of groundwater flow and mass transport is nowadays becoming a benchmark for water-resource assessment and environmental protection. For model predictions to be reliable, the model must be as close to reality as possible; this closeness is achieved with inverse methods, which seek to integrate measured parameters and observed system states into the characterization of the aquifer. Several methods proposed in recent decades to solve the inverse problem are discussed in the thesis. The main contribution of this thesis is to propose two stochastic inverse methods for estimating model parameters when they cannot be described by a Gaussian distribution (for example, hydraulic conductivities), by integrating observations of the system state that are, in general, nonlinearly related to the parameters (for example, piezometric heads). The first method is the normal-score ensemble Kalman filter (NS-EnKF), built on the standard ensemble Kalman filter (EnKF). The EnKF is widely used as a real-time data assimilation technique owing to its advantages, such as computational efficiency and the ability to assess model uncertainty. However, this filter is known to work optimally only when the model parameters and state variables follow multi-Gaussian distributions. To extend the application of the EnKF to non-Gaussian state vectors, such as those of aquifers in fluvio-deltaic formations, the NS-EnKF applies a univariate Gaussian transformation: the augmented state vector formed by the model parameters and the state variables is transformed into variables with Gaussian marginal distributions.
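The univariate Gaussian transformation at the heart of NS-EnKF can be sketched as a rank-based normal-score transform: each ensemble member is mapped to the standard-normal quantile of its rank, so a skewed ensemble becomes marginally Gaussian before the EnKF update. The conductivity values below are hypothetical.

```python
from statistics import NormalDist

def normal_score(values):
    """Map each value to the standard-normal quantile of its rank."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n                  # plotting position in (0, 1)
        scores[i] = NormalDist().inv_cdf(p)   # standard-normal quantile
    return scores

# A skewed, log-normal-like ensemble of hydraulic conductivities.
ensemble = [0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8]
z = normal_score(ensemble)
```

The transform is monotone, so it preserves the ranking of the ensemble while making its marginal distribution Gaussian; after the Kalman update, the inverse mapping returns the analysis to the original (non-Gaussian) space.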
Zhou, H. (2011). Stochastic Inverse Methods to Identify non-Gaussian Model Parameters in Heterogeneous Aquifers [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/12267
ABNT, Harvard, Vancouver, APA, etc. styles
7

Dong, Wei. "Identification of Electrical Parameters in A Power Network Using Genetic Algorithms and Transient Measurements". Thesis, University of Nottingham, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.523043.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
8

van, Wyk Hans-Werner. "A Variational Approach to Estimating Uncertain Parameters in Elliptic Systems". Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/27635.

Full text of the source
Abstract:
As simulation plays an increasingly central role in modern science and engineering research, by supplementing experiments, aiding in the prototyping of engineering systems and informing decisions on safety and reliability, the need to quantify uncertainty in model outputs due to uncertainties in the model parameters becomes critical. However, the statistical characterization of the model parameters is rarely known. In this thesis, we propose a variational approach to solve the stochastic inverse problem of obtaining a statistical description of the diffusion coefficient in an elliptic partial differential equation, based on noisy measurements of the model output. We formulate the parameter identification problem as an infinite-dimensional constrained optimization problem for which we establish the existence of minimizers as well as first-order necessary conditions. A spectral approximation of the uncertain observations (via a truncated Karhunen-Loève expansion) allows us to approximate the infinite-dimensional problem by a smooth, albeit high-dimensional, deterministic optimization problem, the so-called 'finite noise' problem, in the space of functions with bounded mixed derivatives. We prove convergence of 'finite noise' minimizers to the appropriate infinite-dimensional ones, and devise both a gradient-based and a sampling-based strategy for locating these numerically. Lastly, we illustrate our methods by means of numerical examples.
Ph. D.
ABNT, Harvard, Vancouver, APA, etc. styles
9

Harth, Tobias [Verfasser]. "Identification of Material Parameters for Inelastic Constitutive Models : Stochastic Simulation and Design of Experiments / Tobias Harth". Aachen : Shaker, 2003. http://d-nb.info/1179036204/34.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
10

Wong, King-fung, and 黃景峰. "Non-coding RNA identification along genome". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B4581949X.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
11

Karavelić, Emir. "Stochastic Galerkin finite element method in application to identification problems for failure models parameters in heterogeneous materials". Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2501.

Full text of the source
Abstract:
This thesis deals with localized failure of structures built of heterogeneous composite material, such as concrete, at two different scales. The two scales are later connected through stochastic upscaling, where information obtained at the meso-scale is used as prior knowledge at the macro-scale. At the meso-scale, a lattice model is used to represent the multi-phase structure of concrete, namely cement and aggregates. The beam element, a 3D Timoshenko beam embedded with strong discontinuities, ensures complete mesh independence of crack propagation. The aggregate size distribution is taken in agreement with the EMPA and Fuller curves, while a Poisson distribution is used for the spatial distribution. The material properties of each phase are drawn from a Gaussian distribution, which accounts for the interface transition zone (ITZ) through a weakening of the concrete. At the macro-scale, a multisurface plasticity model is chosen that includes both a strain-hardening contribution with a non-associative flow rule and strain-softening components for a full set of different 3D failure modes. The plasticity model uses the Drucker-Prager yield criterion, with a similar plastic potential function governing the hardening behavior, while the strain-softening behavior is represented by the St. Venant criterion. The identification procedure for the macro-scale model is performed sequentially: since all ingredients of the macro-scale model have a physical interpretation, the material parameters are calibrated at the stage to which they are relevant.
This approach is then used for model reduction from the meso-scale model to the macro-scale model, where all scales are considered uncertain and a probabilistic computation is performed. When modeling a homogeneous material, each unknown parameter of the reduced model is modeled as a random variable, while for a heterogeneous material these material parameters are described as random fields. To obtain appropriate discretizations, p-refinement is chosen over the probability domain and h-refinement over the spatial domain. The forward-model outputs are constructed using the stochastic Galerkin method, providing outputs more quickly than the full forward model. The probabilistic identification is performed with two different methods based on Bayes's theorem, which allows new observations generated in a particular loading program to be incorporated: the first, Markov chain Monte Carlo (MCMC), updates the measure, whereas the second, the polynomial chaos Kalman filter (PceKF), updates the measurable function. The implementation aspects of the presented models are given in full detail, together with their validation through numerical examples against experimental results or benchmarks available in the literature.
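The measure-updating method named above (MCMC) can be sketched as a random-walk Metropolis chain sampling one uncertain parameter. The linear forward model, the N(0, 1) prior and the noise level are all hypothetical stand-ins for the thesis's stochastic-Galerkin surrogate of the failure model.

```python
import math, random

def log_post(theta, obs, sigma=0.1):
    log_prior = -0.5 * theta ** 2                  # N(0, 1) prior
    prediction = 2.0 * theta                       # hypothetical forward model
    log_lik = -0.5 * ((obs - prediction) / sigma) ** 2
    return log_prior + log_lik

def metropolis(log_p, x0, n=5000, step=0.3, seed=4):
    """Random-walk Metropolis: propose a Gaussian step, accept/reject."""
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    chain = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lc = log_p(cand)
        if lc >= lp or rng.random() < math.exp(lc - lp):   # Metropolis rule
            x, lp = cand, lc
        chain.append(x)
    return chain

# With one observation y = 1.0, the posterior concentrates near theta = 0.5.
chain = metropolis(lambda t: log_post(t, obs=1.0), x0=0.5)
```

Each new observation from a loading program would simply enter the likelihood, and the chain re-targets the updated posterior; the surrogate keeps the many forward evaluations affordable.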
ABNT, Harvard, Vancouver, APA, etc. styles
12

Querry, Stephane. "Stochastic optimization by evolutionary methods applied to autonomous aircraft flight control". Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD031.

Full text of the source
Abstract:
The object of this PhD was to develop evolutionary computing algorithms, principally evolutionary algorithms and genetic programming, to find good solutions to important problems in several domains of automation science applied to the conduct of autonomous aircraft missions, and to understand the advantages of such approaches over the state of the art in terms of efficiency, robustness, and implementation effort. New algorithms were developed for identification, path planning, navigation, and control, and were tested in simulation and on real-world platforms (AR.Drone 3.0 UAV (Parrot), Oktokopter UAV, Twin Otter, and the F-16 military fighter (NASA LaRC)) to assess the performance improvements provided by the proposed approaches. Most of these new approaches gave very good results compared with the state of the art, notably in identification and control, and further work on control by evolutionary algorithms, identification by genetic programming, and relative navigation should be undertaken to develop the application potential of these algorithms in real-world technologies.
ABNT, Harvard, Vancouver, APA, etc. styles
13

Souflas, Ioannis. "Qualitative Adaptive Identification for Powertrain Systems. Powertrain Dynamic Modelling and Adaptive Identification Algorithms with Identifiability Analysis for Real-Time Monitoring and Detectability Assessment of Physical and Semi-Physical System Parameters". Thesis, University of Bradford, 2015. http://hdl.handle.net/10454/14427.

Full text of the source
Abstract:
A complete chain of analysis and synthesis system-identification tools is presented for detectability assessment and adaptive identification of parameters with a physical interpretation that are commonly found in control-oriented powertrain models. This research is motivated by the fact that future powertrain control and monitoring systems will depend increasingly on physically oriented system models to reduce the complexity of existing control strategies and open the road to new environmentally friendly technologies. At the outset of this study, a physics-based control-oriented dynamic model of a complete transient engine testing facility, consisting of a single-cylinder engine, an alternating-current dynamometer and a coupling-shaft unit, is developed to investigate the functional relationships between the inputs, outputs and parameters of the system. Having understood these, algorithms for identifiability analysis and adaptive identification of physically interpretable parameters are proposed. The efficacy of the proposed algorithms is illustrated with three novel practical applications: an on-line health monitoring system for engine-dynamometer coupling shafts based on recursive estimation of the shaft's physical parameters; sensitivity analysis and adaptive identification of engine friction parameters; and non-linear recursive parameter estimation, with parameter estimability analysis, of physical and semi-physical cyclic engine torque model parameters. The findings of this research suggest that combining physics-based control-oriented models with adaptive identification algorithms can lead to component-based diagnosis and control strategies. Ultimately, this work contributes to the area of on-line fault diagnosis, fault-tolerant and adaptive control for vehicular systems.
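The recursive estimation underlying such on-line monitoring can be sketched with scalar recursive least squares (RLS) and a forgetting factor, so the estimate can track a slowly drifting physical parameter. The "shaft stiffness", the regressor, and the noise-free measurements below are hypothetical, not the thesis's coupling-shaft model.

```python
# Scalar recursive least squares with forgetting factor lam.
def rls_update(theta, P, phi, y, lam=0.99):
    # phi: regressor, y: new measurement, theta: parameter estimate
    k = P * phi / (lam + phi * P * phi)      # gain
    theta = theta + k * (y - phi * theta)    # correct estimate with innovation
    P = (P - k * phi * P) / lam              # covariance update
    return theta, P

theta, P = 0.0, 1000.0                       # vague initial estimate
stiffness = 2.5                              # hypothetical true shaft parameter
for t in range(100):
    phi = 1.0 + 0.5 * ((t % 7) - 3)          # varying twist-angle regressor
    y = stiffness * phi                      # noise-free torque measurement
    theta, P = rls_update(theta, P, phi, y)
```

The forgetting factor discounts old data, which is what lets the same recursion flag a developing fault: a sustained shift in the measurements pulls the estimate to the new parameter value instead of averaging it away.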
ABNT, Harvard, Vancouver, APA, etc. styles
14

Degenne, Rémy. "Impact of structure on the design and analysis of bandit algorithms". Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7179.

Full text of the source
Abstract:
In this thesis, we study sequential learning problems called stochastic multi-armed bandits. First, a new bandit algorithm is presented. The analysis of this algorithm, like most regret proofs for bandit algorithms, uses confidence intervals on the means of the arms' reward distributions. In a parametric setting, we derive concentration inequalities that quantify the deviation between the mean parameter of a distribution and its empirical estimate in order to obtain such intervals; these inequalities are expressed in terms of the Kullback-Leibler divergence. Three extensions of the stochastic multi-armed bandit problem are then studied. First, we consider the so-called combinatorial semi-bandit problem, in which an algorithm chooses a set of arms and the reward of each of these arms is observed; the minimal attainable regret then depends on the correlation between the arm distributions. We then consider a setting in which the observation mechanism changes. One source of difficulty of the bandit problem is the scarcity of information: only the arm pulled is observed. We show how to make efficient use of supplementary free observations that do not contribute to the regret. Finally, a new family of algorithms is introduced to obtain both regret-minimization and best-arm-identification guarantees; each algorithm in the family realizes a trade-off between regret and the time needed to identify the best arm. In a second part, we study the so-called pure exploration problem, in which an algorithm is evaluated not by its regret but by the probability that it returns a wrong answer to a question about the arm distributions. We determine the complexity of such problems and design algorithms approaching that complexity.
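A KL-based confidence interval of the kind described above can be sketched for a Bernoulli arm: the upper confidence bound is the largest mean q with n·kl(p_hat, q) below an exploration level, found by bisection since the divergence is increasing in q above p_hat. The level log(1000) below is an arbitrary illustrative choice.

```python
import math

def kl_bernoulli(p, q):
    """Kullback-Leibler divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb(p_hat, n, level):
    """ucb = max { q >= p_hat : n * kl(p_hat, q) <= level }, by bisection."""
    lo, hi = p_hat, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if n * kl_bernoulli(p_hat, mid) <= level:
            lo = mid
        else:
            hi = mid
    return lo

# Arm with empirical mean 0.4 after 100 pulls.
ucb = kl_ucb(0.4, 100, math.log(1000.0))
```

Because the interval is shaped by the divergence rather than by a fixed-width Hoeffding term, it tightens automatically near the boundaries of [0, 1], which is what yields asymptotically optimal regret bounds for algorithms built on it.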
ABNT, Harvard, Vancouver, APA, etc. styles
15

Anglade, Célimène. "Contribution à l'identification des paramètres rhéologiques des suspensions cimentaires à l'état frais". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30031/document.

Full text of the source
Abstract:
This thesis work is part of the numerical modeling of the flow of cementitious materials in the fresh state coupled with a parameter identification procedure, and deals in particular with setting up the identification by inverse analysis. First, a literature review reveals the existence of rheometric tools dedicated to cementitious suspensions; the passage from macroscopic quantities to local ones is made either through conventional geometries or by means of calibration methods. Nevertheless, these tools do not yield a unique rheological signature for a given suspension. In addition, few studies report strategies for identifying the constitutive parameters of fresh cement-based materials, and those are limited to local data. A strategy was therefore developed and validated in 2D that identifies the parameters of an assumed law directly from simulated macroscopic measurements (torques and rotational speeds imposed on the shearing tool), discussing in particular the efficiency of the optimization algorithms tested (simplex method and genetic algorithms) according to the degree of knowledge the user has of the material. Finally, the method was applied in 3D to model fluids assumed to be homogeneous. It proves effective for pseudo-plastic fluids, in particular by combining the two optimization algorithms, but obstacles remain for visco-plastic fluids, probably related to the experimental tools rather than to the identification procedure itself.
16

Giesbrecht, Mateus 1984. "Propostas imuno-inspiradas para identificação de sistemas e realização de séries temporais multivariáveis no espaço de estado". [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260671.

Full text of the source
Abstract:
Advisor: Celso Pascoli Bottura
Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Made available in DSpace on 2018-08-22T08:49:57Z (GMT). No. of bitstreams: 1 Giesbrecht_Mateus_D.pdf: 4188992 bytes, checksum: a2d91ff20132430d1389b8cd758b80bc (MD5) Previous issue date: 2012
Abstract: This thesis describes how some problems related to multivariable system identification, multivariable time series realization, and multivariable time series modeling can be formulated as optimization problems. It also presents immune-inspired methods to solve each of these problems, along with the results and conclusions drawn from applying the proposed methods. The methods proposed here perform better and produce better results than known methods for the studied problems, and they can be applied even when the problem conditions are not suitable for the methods presented in the literature
Doctorate
Automation
Doctor of Electrical Engineering
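The immune-inspired optimization referred to above can be illustrated with a minimal clonal-selection loop applied to a toy first-order identification problem; the model, bounds, and tuning constants below are invented for the sketch and do not come from the thesis:

```python
import numpy as np

def objective(p):
    """Hypothetical identification cost: squared error between the impulse
    response of a 'true' first-order system y_t = a*y_{t-1} + b*u_t and a
    candidate (a, b). Stands in for the optimization problems of the thesis."""
    a_true, b_true = 0.8, 0.5
    a, b = p
    err = 0.0
    y_true = y_cand = 0.0
    for t in range(50):
        u = 1.0 if t == 0 else 0.0             # impulse input
        y_true = a_true * y_true + b_true * u
        y_cand = a * y_cand + b * u
        err += (y_true - y_cand) ** 2
    return err

def clonalg(pop_size=20, n_clones=5, generations=60, seed=0):
    """Minimal clonal-selection loop: clone each candidate, hypermutate the
    clones with a rate that grows with (worse) rank, keep the best of each
    family, and randomly replace the lowest-ranked candidates."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, 2))
    for _ in range(generations):
        fit = np.array([objective(p) for p in pop])
        pop = pop[np.argsort(fit)]             # rank: best first
        new_pop = []
        for rank, p in enumerate(pop):
            scale = 0.02 + 0.2 * rank / pop_size   # worse rank -> larger mutation
            clones = p + rng.normal(scale=scale, size=(n_clones, 2))
            family = np.vstack([p[None, :], clones])
            ffit = np.array([objective(c) for c in family])
            new_pop.append(family[np.argmin(ffit)])
        pop = np.array(new_pop)
        pop[-2:] = rng.uniform(-1.0, 1.0, size=(2, 2))  # diversity injection
    fit = np.array([objective(p) for p in pop])
    return pop[np.argmin(fit)]

a_hat, b_hat = clonalg()
```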
17

Santos, Rodríguez Cristian de. "Backanalysis methodology based on multiple optimization techniques for geotechnical problems". Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/334179.

Full text of the source
Abstract:
Nowadays, thanks to the increase of computers' capability to solve huge and complex problems, and also thanks to the endless effort of the geotechnical community to define better and more sophisticated constitutive models, the challenge to predict and simulate soil behavior has been eased. However, due to the increase in that sophistication, the number of parameters that define the problem has also increased. Moreover, some of those parameters frequently do not have a real geotechnical meaning, as they come from purely mathematical expressions, which makes them difficult to identify. As a consequence, more effort has to be placed on parameter identification in order to fully define the problem. This thesis aims to provide a methodology to facilitate the identification of parameters of soil constitutive models by backanalysis. The best parameters are defined as those that minimize an objective function based on the differences between measurements and computed values. Different optimization techniques have been used in this study, from the most traditional ones, such as gradient-based methods, to the newest ones, such as adaptive genetic algorithms and hybrid methods. From this study, several recommendations have been put forward in order to take the most advantage of each type of optimization technique. Along with that, an extensive analysis has been carried out to determine the influence on soil parameter identification of what to measure, where to measure and when to measure in the context of tunneling. The Finite Element code Plaxis has been used as a tool for the direct analysis. A FORTRAN code has been developed to automate the entire backanalysis procedure. The Hardening Soil Model (HSM) has been adopted to simulate the soil behavior. Several soil parameters of the HSM implemented in Plaxis, such as E_50^ref, E_ur^ref, c and φ, have been identified for different geotechnical scenarios.
First, a synthetic tunnel case study has been used to analyze all the different approaches that have been proposed in this thesis. Then, two complex real cases of a tunnel construction (Barcelona Metro Line 9) and a large excavation (Girona High-Speed Railway Station) have been presented to illustrate the potential of the methodology. Special focus has been placed on the influence of construction procedures and instrument error structure for the tunnel backanalysis, whereas in the station backanalysis more effort has been devoted to the potential of the concept of adaptive design by backanalysis. Moreover, another real case involving a less conventional geotechnical problem, Mars surface exploratory rovers, has also been presented to test the backanalysis methodology and the reliability of the Wong & Reece wheel-terrain model, widely adopted by the terramechanics community but still not fully accepted when analyzing lightweight rovers such as the ones used in recent Mars exploratory missions.
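The core backanalysis loop — minimizing a weighted least-squares objective between measurements and computed values with a gradient-based method — can be sketched as follows. The closed-form Gaussian settlement trough below is a hypothetical stand-in for the Plaxis forward run, and every numerical value is invented for the illustration:

```python
import numpy as np

def simulate_settlement(params, x):
    """Toy stand-in for the direct (forward) analysis; in the thesis this
    role is played by a Plaxis finite-element run. Hypothetical Gaussian
    settlement trough with parameters (s_max, width)."""
    s_max, width = params
    return s_max * np.exp(-x**2 / (2.0 * width**2))

def objective(params, x, meas, sigma):
    """Weighted least-squares misfit between measurements and computed values."""
    r = (meas - simulate_settlement(params, x)) / sigma
    return np.sum(r**2)

def fd_gradient(params, x, meas, sigma, h=1e-6):
    """Central finite-difference gradient: the forward model is treated as a
    black box, as it would be with an external FE code."""
    g = np.zeros_like(params)
    for k in range(len(params)):
        dp = np.zeros_like(params)
        dp[k] = h
        g[k] = (objective(params + dp, x, meas, sigma)
                - objective(params - dp, x, meas, sigma)) / (2.0 * h)
    return g

# Synthetic noise-free "measurements" from true parameters (12 mm, 8 m).
x = np.linspace(-20.0, 20.0, 21)
meas = simulate_settlement(np.array([12.0, 8.0]), x)
sigma = 0.5                        # assumed instrument standard deviation

p = np.array([5.0, 5.0])           # initial guess
step = 0.005                       # fixed step size for the sketch
for _ in range(3000):
    p -= step * fd_gradient(p, x, meas, sigma)
```

A genetic or hybrid search, as compared in the thesis, would replace the fixed-step descent while reusing the same objective.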
18

Clausner, André. "Möglichkeiten zur Steuerung von Trust-Region Verfahren im Rahmen der Parameteridentifikation". Thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-114847.

Full text of the source
Abstract:
For the simulation of technical processes, a sufficiently accurate description of the material behavior is necessary. The phenomenological approaches frequently used for this purpose, such as the Hill yield criterion in the present case, contain material-specific parameters that are not directly measurable. These material parameters are usually identified by minimizing a least-squares functional containing the differences between measured values and the corresponding numerically computed values. In this context, trust-region methods have proven well suited for solving this minimization problem. The task is to investigate the various options for controlling a trust-region method with regard to their suitability for the identification problem at hand. To this end, least-squares problems and their solution methods are reviewed. Trust-region methods are then examined in more detail, with attention restricted to methods using positive-definite approximations of the Hessian matrix, i.e. Levenberg-Marquardt methods. Such a Levenberg-Marquardt algorithm is then implemented in several variants and tested on the identification problem at hand. The result is a combination of sub-algorithms of the Levenberg-Marquardt algorithm with a high convergence rate, which is well suited for the present problem.
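The Levenberg-Marquardt scheme discussed in the abstract can be sketched as follows; the two-parameter exponential test model and all constants are illustrative assumptions, not the thesis's identification problem:

```python
import numpy as np

def residuals(p, t, y):
    """Residual vector for a hypothetical model y = a*exp(b*t); stands in
    for the measured-vs-computed differences of the identification."""
    a, b = p
    return y - a * np.exp(b * t)

def jacobian(p, t):
    """Analytic Jacobian of the residuals with respect to (a, b)."""
    a, b = p
    J = np.empty((t.size, 2))
    J[:, 0] = -np.exp(b * t)
    J[:, 1] = -a * t * np.exp(b * t)
    return J

def levenberg_marquardt(p0, t, y, lam=1e-2, max_iter=100, tol=1e-10):
    """Basic Levenberg-Marquardt loop: solve (J^T J + lam*I) dp = -J^T r and
    accept the step only if the squared residual norm decreases; the damping
    parameter lam plays the role of the trust-region control."""
    p = p0.astype(float)
    cost = np.sum(residuals(p, t, y) ** 2)
    for _ in range(max_iter):
        r = residuals(p, t, y)
        J = jacobian(p, t)
        dp = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        new_cost = np.sum(residuals(p + dp, t, y) ** 2)
        if new_cost < cost:            # step accepted: relax the damping
            p, cost, lam = p + dp, new_cost, lam * 0.5
        else:                          # step rejected: increase the damping
            lam *= 4.0
        if cost < tol:
            break
    return p

t = np.linspace(0.0, 2.0, 30)
y = 2.5 * np.exp(-1.3 * t)                      # synthetic noise-free data
p_hat = levenberg_marquardt(np.array([1.0, -0.5]), t, y)
```

The halve-on-accept / quadruple-on-reject damping update is one simple control strategy of the kind the thesis compares; other schedules plug into the same loop.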
19

Li, Zhao Xing, and 李昭興. "Parameters identification and controller design using genetic algorithms". Thesis, 1996. http://ndltd.ncl.edu.tw/handle/42971349483967805005.

Full text of the source
20

Jian-Jhih Wang, and 王建智. "Identification of Modal Parameters of System by Data-driven Stochastic Subspace Identification". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/d6qv48.

Full text of the source
21

Min-Hsuan Chung, and 鍾旻軒. "Identification of Modal Parameters under Nonstationary Ambient Vibration by Data-driven Stochastic Subspace Identification". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/k4ge58.

Full text of the source
Abstract:
Master's thesis
National Cheng Kung University
Department of Aeronautics and Astronautics
Academic year 107 (ROC)
In most modal analyses of ambient vibrations, it is usually assumed that the excitation is stationary white noise because of the randomness of ambient excitation. However, most realistic ambient vibrations are nonstationary signals whose statistics change over time. In this thesis, the Data-driven Stochastic Subspace Identification (SSI-Data) method is employed to study nonstationary ambient vibration problems. This identification method is based on the state subspace model, which is a stationary time series model. In order to identify nonstationary vibrations, this thesis improves the curve-fitting method proposed by previous studies, which converts the output responses into approximately stationary responses. In the SSI-Data method, the order of a system is determined by singular value decomposition, and the modal parameters are then calculated. However, for the identification of realistic nonstationary vibrations, there is no obvious jump point in the singular value decomposition diagram. Therefore, this thesis adopts the stabilization diagram to determine the order of the system, and then formulates the criteria and procedures to extract the modal parameters.
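The subspace step at the heart of such methods can be illustrated with a simplified covariance-driven sketch (the thesis uses the data-driven variant, which works with projections of the raw data rather than a Hankel matrix of correlations, but the order selection by SVD and the extraction of modal parameters are analogous). All system values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate one lightly damped mode (2 Hz, 2% damping) as a discrete
# stochastic state-space model driven by white noise (ambient excitation).
f_true, zeta, dt = 2.0, 0.02, 0.01
wn = 2.0 * np.pi * f_true
wd = wn * np.sqrt(1.0 - zeta**2)
rho = np.exp(-zeta * wn * dt)
A = rho * np.array([[np.cos(wd * dt), -np.sin(wd * dt)],
                    [np.sin(wd * dt),  np.cos(wd * dt)]])
C = np.array([1.0, 0.0])

N = 60000
x = np.zeros(2)
y = np.empty(N)
for k in range(N):
    y[k] = C @ x                        # measured output
    x = A @ x + rng.normal(size=2)      # white-noise process excitation

# Subspace step: Hankel matrix of output correlations, SVD to pick the
# order, shift invariance of the observability matrix to recover A.
lags = 30
R = np.array([np.mean(y[i:] * y[:N - i]) for i in range(1, 2 * lags + 1)])
H = np.array([[R[i + j] for j in range(lags)] for i in range(lags)])
U, s, Vt = np.linalg.svd(H)

n = 2                                    # order chosen from the singular values
O = U[:, :n] * np.sqrt(s[:n])            # observability matrix estimate
A_hat = np.linalg.pinv(O[:-1]) @ O[1:]   # exploit the shift structure of O
lam = np.linalg.eigvals(A_hat)
f_hat = np.abs(np.log(lam[0])) / (2.0 * np.pi * dt)  # identified frequency (Hz)
```

Inspecting the singular values `s` is the "jump point" criterion the abstract refers to; in noisy real data that jump blurs, which is why a stabilization diagram over candidate orders is used instead.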
22

Hanzely, Filip. "Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters". Diss., 2020. http://hdl.handle.net/10754/664789.

Full text of the source
Abstract:
Many key problems in machine learning and data science are routinely modeled as optimization problems and solved via optimization algorithms. With the increase of the volume of data and the size and complexity of the statistical models used to formulate these often ill-conditioned optimization tasks, there is a need for new efficient algorithms able to cope with these challenges. In this thesis, we deal with each of these sources of difficulty in a different way. To efficiently address the big data issue, we develop new methods which in each iteration examine a small random subset of the training data only. To handle the big model issue, we develop methods which in each iteration update a random subset of the model parameters only. Finally, to deal with ill-conditioned problems, we devise methods that incorporate either higher-order information or Nesterov’s acceleration/momentum. In all cases, randomness is viewed as a powerful algorithmic tool that we tune, both in theory and in experiments, to achieve the best results. Our algorithms have their primary application in training supervised machine learning models via regularized empirical risk minimization, which is the dominant paradigm for training such models. However, due to their generality, our methods can be applied in many other fields, including but not limited to data science, engineering, scientific computing, and statistics.
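The two randomization axes described above — sampling the data versus sampling the model parameters — can be illustrated on a small ridge-regression problem; the problem sizes and step sizes below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ridge-regularized least squares: min_w 0.5*||Xw - y||^2/n + 0.5*lam*||w||^2
n, d, lam = 1000, 5, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Closed-form solution, used only as a reference for the sketch.
w_star = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

def sgd(batch=32, steps=4000, lr=0.05):
    """Randomize over DATA: each step uses a small random subset of examples."""
    w = np.zeros(d)
    for _ in range(steps):
        idx = rng.integers(n, size=batch)
        g = X[idx].T @ (X[idx] @ w - y[idx]) / batch + lam * w
        w -= lr * g
    return w

def coordinate_descent(steps=4000):
    """Randomize over PARAMETERS: each step exactly minimizes one random
    coordinate of the quadratic objective."""
    w = np.zeros(d)
    A = X.T @ X / n + lam * np.eye(d)    # quadratic coefficients
    b = X.T @ y / n
    for _ in range(steps):
        j = rng.integers(d)
        w[j] = (b[j] - A[j] @ w + A[j, j] * w[j]) / A[j, j]
    return w

w_sgd = sgd()
w_cd = coordinate_descent()
```

Coordinate descent converges to the exact minimizer here because each coordinate update is exact; SGD with a constant step settles in a noise ball around it, which is the trade-off the thesis's variance-reduction and acceleration techniques address.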
23

Jyun-Fa Jhang, and 張竣發. "Identification of Modal Parameters of Systems under Ambient Vibration by the Data-driven Stochastic Subspace Identification". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/84t8h3.

Full text of the source
Abstract:
Master's thesis
National Cheng Kung University
Department of Aeronautics and Astronautics
Academic year 106 (ROC)
In operational modal analysis, we usually assume that the excitation is stationary white noise because of ambient randomness. According to previous studies, a non-white response can be obtained through a hypothetical system driven by stationary white noise. Furthermore, that response can be applied to the system to simulate its dynamic behavior under non-white ambient vibration. In this research, we solve the modal identification problem for non-white ambient vibration by Data-driven Stochastic Subspace Identification (SSI-DATA). SSI-DATA determines the order of the system by singular value decomposition and calculates the modal parameters. For the modal parameter identification of a system under non-white ambient vibration, the numerical analysis shows that both the order of the identified system and the order of the hypothetical system must be taken into consideration when determining the system order, so that correct identifications with sufficient dynamic information are obtained.
24

Lu, Yi Feng, e 盧宜豐. "Stochastic Approaches for the Pressure Fluctuation and the Hydraulic Parameters Identification in Phreatic Aquifer". Thesis, 1996. http://ndltd.ncl.edu.tw/handle/41979535722738968649.

Full text of the source
25

Χίος, Ιωάννης. "Identification of multivariate stochastic functional models with applications in damage detection of structures". Thesis, 2012. http://hdl.handle.net/10889/5562.

Full text of the source
Abstract:
This thesis addresses the identification of stochastic systems operating under different conditions, based on data records corresponding to a sample of such operating conditions. This topic is very important, as systems operating under different, though constant conditions at different occasions (time intervals) are often encountered in practice. Typical examples include mechanical, aerospace or civil structures that operate under different environmental conditions (temperature or humidity, for instance) on different occasions (period of day, and so on). Such different operating conditions may affect the system characteristics, and therefore its dynamics. Given a set of data records corresponding to distinct operating conditions, it is most desirable to establish a single global model capable of describing the system throughout the entire range of admissible operating conditions. In the present thesis this problem is treated via a novel stochastic Functional Pooling (FP) identification framework which introduces functional dependencies (in terms of the operating condition) in the postulated model structure. The FP framework offers significant advantages over other methods providing global models by interpolating a set of conventional models (one for each operating condition), as it: (i) treats data records corresponding to different operating conditions simultaneously, and fully takes cross-dependencies into account thus yielding models with optimal statistical accuracy, (ii) uses a highly parsimonious representation which provides precise information about the system dynamics at any specified operating condition without resorting to customary interpolation schemes, (iii) allows for the determination of modeling uncertainty at any specified operating condition via formal interval estimates. To date, all research efforts on the FP framework have concentrated in identifying univariate (single excitation-single response) stochastic models. 
The present thesis aims at (i) properly formulating and extending the FP framework to the case of multivariate stochastic systems operating under multiple operating conditions, and (ii) introducing an approach based on multivariate FP modeling and statistical hypothesis testing for damage detection under different operating conditions. The case of multivariate modeling is more challenging compared to its univariate counterpart as the couplings between the corresponding signals lead to more complicated model structures, whereas their nontrivial parametrization raises issues on model identifiability. The main focus of this thesis is on models of the Functionally Pooled Vector AutoRegressive with eXogenous excitation (FP-VARX) form, and Vector AutoRegressive Moving Average (FP-VARMA) form. These models may be thought of as generalizations of their conventional VARX/VARMA counterparts with the important distinction being that the model parameters are explicit functions of the operating condition. Initially, the identification of FP-VARX models is addressed. Least Squares (LS) and conditional Maximum Likelihood (ML) type estimators are formulated, and their consistency along with their asymptotic normality is established. Conditions ensuring FP-VARX identifiability are postulated, whereas model structure specification is based upon proper forms of information criteria. The performance characteristics of the identification approach are assessed via Monte Carlo studies, which also demonstrate the effectiveness of the proposed framework and its advantages over conventional identification approaches based on VARX modeling. Subsequently, an experimental study aiming at identifying the temperature effects on the dynamics of a smart composite beam via conventional model and novel global model approaches is presented. 
The conventional model approaches are based on non-parametric and parametric VARX representations, whereas the global model approaches are based on parametric Constant Coefficient Pooled (CCP) and Functionally Pooled (FP) VARX representations. Although the obtained conventional model and global representations are in rough overall agreement, the latter simultaneously use all available data records and offer improved accuracy and compactness. The CCP-VARX representations provide an "averaged" description of the structural dynamics over temperature, whereas their FP-VARX counterparts allow for the explicit, analytical modeling of temperature dependence, and attain improved estimation accuracy. In addition, the identification of FP-VARMA models is addressed. Two-Stage Least Squares (2SLS) and conditional ML type estimators are formulated, and their consistency and asymptotic normality are established. Furthermore, an effective method for 2SLS model estimation featuring a simplified procedure for obtaining residuals in the first stage is introduced. Conditions ensuring FP-VARMA model identifiability are also postulated. Model structure specification is based upon a novel two-step approach using Canonical Correlation Analysis (CCA) and proper forms of information criteria, thus avoiding the use of exhaustive search procedures. The performance characteristics of the identification approach are assessed via a Monte Carlo study, which also demonstrates the effectiveness of the proposed framework over conventional identification approaches based on VARMA modeling. An approach based on the novel FP models and statistical hypothesis testing for damage detection under different operating conditions is also proposed. It includes two versions: the first version is based upon the obtained modal parameters, whereas the second version is based upon the discrete-time model parameters. 
In an effort to streamline damage detection, procedures for compressing the information carried by the modal or the discrete-time model parameters via Principal Component Analysis (PCA) are also employed. The effectiveness of the proposed damage detection approach is assessed on a smart composite beam with hundreds of experiments corresponding to different temperatures. In its present form, the approach relies upon response (output-only) vibration data, although excitation-response data may also be used. FP-VAR modeling is used to identify the temperature-dependent structural dynamics, whereas a new scheme for model structure selection is introduced which avoids the use of exhaustive search procedures. The experimental results verify the capability of both versions of the approach to infer reliable damage detection under different temperatures. Furthermore, alternative methods attempting removal of the temperature effects from the damage-sensitive features are also employed, allowing for a detailed and concise comparison. Finally, some special topics on global VARX modeling are treated. The focus is on the identification of the Pooled (P) and Constant Coefficient Pooled (CCP) VARX model classes. Although both model classes are of limited scope, they are useful tools for global model identification. In analogy to the FP-VARX/VARMA model case, the LS and conditional ML type estimators are studied for both model classes, whereas conditions ensuring model identifiability are also postulated. The relationships interconnecting the P-VARX and CCP-VARX models to the FP-VARX models in terms of compactness and achievable accuracy are studied, whereas their association to the conventional VARX models is also addressed. The effectiveness and performance characteristics of the novel global modeling approaches are finally assessed via Monte Carlo studies.
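The functional-pooling idea — model parameters as explicit functions of the operating condition, estimated from all records in a single pooled least-squares problem — can be sketched with a scalar FP-AR(1) model whose coefficient depends affinely on the condition. The thesis treats the vector ARX/ARMA case; the scalar model and all values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Operating conditions (e.g. normalized temperatures) and an assumed "true"
# functional dependence a(T) = 0.30 + 0.40*T of the AR(1) coefficient.
conditions = np.linspace(0.0, 1.0, 6)

def a_of_T(T):
    return 0.30 + 0.40 * T

# One response record per operating condition.
N = 5000
records = {}
for T in conditions:
    y = np.zeros(N)
    e = rng.normal(size=N)
    for t in range(1, N):
        y[t] = a_of_T(T) * y[t - 1] + e[t]
    records[T] = y

# Functionally pooled least squares: y_t = (c0 + c1*T) * y_{t-1} + e_t,
# so the regressors [y_{t-1}, T*y_{t-1}] are stacked over ALL conditions
# and the functional coefficients (c0, c1) are estimated in one shot.
Phi, target = [], []
for T, y in records.items():
    Phi.append(np.column_stack([y[:-1], T * y[:-1]]))
    target.append(y[1:])
Phi = np.vstack(Phi)
target = np.concatenate(target)

c0, c1 = np.linalg.lstsq(Phi, target, rcond=None)[0]
```

With the functional coefficients in hand, the model at any intermediate condition follows by evaluating c0 + c1*T, with no interpolation between per-condition models — the key advantage the abstract highlights.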
Η παρούσα διατριβή πραγματεύεται την αναγνώριση πολυμεταβλητών στοχαστικών συστημάτων που παρουσιάζουν πολλαπλές συνθήκες λειτουργίας, βασιζόμενοι σε δεδομένα που αντιστοιχούν σε ένα δείγμα ενδεικτικών συνθηκών λειτουργίας. Η σπουδαιότητα του προβλήματος είναι μεγάλη, καθώς στην πράξη συναντώνται πολύ συχνά συστήματα όπου οι επιμέρους συνθήκες λειτουργίας παραμένουν σταθερές ανά χρονικά διαστήματα. Τυπικά παραδείγματα περιλαμβάνουν μηχανολογικές, αεροναυτικές και δομικές κατασκευές που λειτουργούν κάτω από διαφορετικές συνθήκες (π.χ. θερμοκρασίας και/ή υγρασίας) σε διαφορετικές συνθήκες (π.χ. περίοδος της ημέρας). Οι διαφορετικές συνθήκες λειτουργίας ενδέχεται να επηρεάσουν ένα σύστημα και ως εκ τούτου τα δυναμικά χαρακτηριστικά του. Λαμβάνοντας υπόψη ένα σύνολο δεδομένων που αντιστοιχούν σε διαφορετικές συνθήκες λειτουργίας, είναι επιθυμητή η εύρεση ενός "γενικευμένου" μοντέλου ικανού να περιγράψει το σύστημα σε όλο το φάσμα των αποδεκτών συνθηκών λειτουργίας. Στην παρούσα διατριβή το πρόβλημα αυτό αντιμετωπίζεται μέσω ενός καινοτόμου πλαισίου αναγνώρισης στοχαστικών μοντέλων Συναρτησιακής Σώρευσης (stochastic Functional Pooling Framework), το οποίο εισάγει συναρτησιακές εξαρτήσεις (αναφορικά με την κατάσταση λειτουργίας) στην δομή του μοντέλου. 
Το συγκεκριμένο πλαίσιο Συναρτησιακής Σώρευσης προσφέρει σημαντικά πλεονεκτήματα σε σχέση με άλλες μεθόδους εύρεσης γενικευμένων μοντέλων που χρησιμοποιούν μεθόδους παρεμβολής (interpolation) σε ένα σύνολο συμβατικών μοντέλων (ένα για κάθε συνθήκη λειτουργίας), όπως: (i) Η ταυτόχρονη διαχείριση δεδομένων που αντιστοιχούν σε διαφορετικές συνθήκες λειτουργίας, καθώς και η διευθέτηση των αλληλοεξαρτήσεων μεταξύ δεδομένων που ανήκουν σε διαφορετικές συνθήκες λειτουργίας παρέχοντας με τον τρόπο αυτό μοντέλα με βέλτιστη στατιστική ακρίβεια, (ii) η χρήση συμπτυγμένων μοντέλων τα οποία περιγράφουν με ακρίβεια τα δυναμικά χαρακτηριστικά του συστήματος σε κάθε κατάσταση λειτουργίας, αποφεύγοντας έτσι την χρήση συμβατικών μεθόδων παρεμβολής, (iii) ο προσδιορισμός των αβεβαιοτήτων στη μοντελοποίηση κάθε κατάστασης λειτουργίας μέσω εκτίμησης κατάλληλων διαστημάτων εμπιστοσύνης. Μέχρι στιγμής, η έρευνα πάνω στο πλαίσιο Συναρτησιακής Σώρευσης έχει επικεντρωθεί στα βαθμωτά στοχαστικά μοντέλα. Η παρούσα διατριβή σαν στόχο έχει (i) την κατάλληλη διαμόρφωση και επέκταση του πλαισίου Συναρτησιακής Σώρευσης για την περίπτωση πολυμεταβλητών στοχαστικών συστημάτων που λειτουργούν με πολλαπλές συνθήκες λειτουργίας , και (ii) την εισαγωγή μιας καινοτόμου μεθοδολογίας ανίχνευσης βλαβών για συστήματα που παρουσιάζουν πολλαπλές συνθήκες λειτουργίας βασιζόμενη σε πολυμεταβλητά μοντέλα Συναρτησιακής Σώρευσης και στον στατιστικό έλεγχο υποθέσεων. Η περίπτωση των πολυμεταβλητών μοντέλων παρουσιάζει τεχνικές δυσκολίες που δεν συναντώνται στα βαθμωτά μοντέλα, καθώς η δομή των μοντέλων είναι πιο περίπλοκη ενώ η παραμετροποίησή τους είναι μη-τετριμμένη θέτοντας έτσι ζητήματα αναγνωρισιμότητας (model identifiability). 
Η παρούσα διατριβή εστιάζει σε Συναρτησιακά Σωρευμένα Διανυσματικά μοντέλα ΑυτοΠαλινδρόμησης με εΞωγενή είσοδο (Functionally Pooled Vector AutoRegressive with eXogenous excitation; FP-VARX), και σε Διανυσματικά μοντέλα ΑυτοΠαλινδρόμησης με Κινητό Μέσο Όρο (Functionally Pooled AutoRegressive with Moving Average; FP-VARMA). Τα μοντέλα αυτά μπορεί να θεωρηθούν ως γενικεύσεις των συμβατικών μοντέλων VARX/VARMA με την σημαντική διαφοροποίηση ότι οι παράμετροι του μοντέλου είναι συναρτήσεις της συνθήκης λειτουργίας. Το πρώτο κεφάλαιο της διατριβής επικεντρώνεται στην αναγνώριση μοντέλων FP-VARX. Αναπτύσσονται εκτιμήτριες βασισμένες στις μεθόδους των Ελαχίστων Τετραγώνων (Least Squares; LS) και της Μέγιστης Πιθανοφάνειας (Maximum Likelihood; ML), ενώ στη συνέχεια μελετώνται η συνέπεια (consistency) και η ασυμπτωτική κατανομή (asymptotic distribution)τους. Επιπλέον, καθορίζονται συνθήκες που εξασφαλίζουν την αναγνωρισιμότητα (identifiability) των FP-VARX μοντέλων, ενώ ο προσδιορισμός της δομής τους βασίζεται σε κατάλληλα τροποποιημένα κριτήρια πληροφορίας (information criteria). Η αποτίμηση της μοντελοποίησης με FP-VARX, καθώς επίσης και η αποτελεσματικότητά τους έναντι των συμβατικών μοντέλων VARX εξακριβώνεται μέσω προσομοιώσεων Monte Carlo. Στο δεύτερο κεφάλαιο διερευνάται η αναγνώριση των θερμοκρασιακών επιρροών στα δυναμικά χαρακτηριστικά μιας ευφυούς δοκού από σύνθετο υλικό. Το πρόβλημα μελετάται χρησιμοποιώντας συμβατικά μοντέλα καθώς και "γενικευμένα" μοντέλα. Η συμβατική μοντελοποίηση περιλαμβάνει μη-παραμετρικές παραστάσεις που βασίζονται στην μέθοδο Welch (ανάλυση στο πεδίο συχνοτήτων), καθώς και παραμετρικές παραστάσεις βασισμένες στα μοντέλα VARX (ανάλυση στο πεδίο χρόνου). H "γενικευμένη" μοντελοποίηση περιλαμβάνει παραστάσεις Σώρευσης με Σταθερές Παραμέτρους (Constant Coefficient Pooled VARX; CCP-VARX), καθώς και VARX παραστάσεις Συναρτησιακής Σώρευσης (Functionally Pooled VARX; FP-VARX). 
Η ανάλυση υποδεικνύει ότι τα χαρακτηριστικά των "γενικευμένων" και των συμβατικών μοντέλων βρίσκονται σε γενική συμφωνία μεταξύ τους. Ωστόσο, τα "γενικευμένα" μοντέλα περιγράφουν τα δυναμικά χαρακτηριστικά του συστήματος με μικρότερο αριθμό παραμέτρων, γεγονός που προσδίδει μεγαλύτερη ακρίβεια στην εκτίμησή τους. Το μοντέλο CCP-VARX τείνει να σταθμίσει τα δυναμικά χαρακτηριστικά του συστήματος σε κάποιον "μέσο όρο" με σχετική ακρίβεια. Απεναντίας το μοντέλο FP-VARX υπερέχει σε ακρίβεια, καθώς επιδεικνύει μια εξομαλυμένη καθοριστική εξάρτηση των δυναμικών χαρακτηριστικών του συστήματος με την θερμοκρασία, γεγονός που είναι συμβατό με την φυσική του προβλήματος. Το τρίτο κεφάλαιο επικεντρώνεται στην αναγνώριση μοντέλων FP-VARMA. Αναπτύσσονται εκτιμήτριες βασισμένες στις μεθόδους των Ελαχίστων Τετραγώνων Δύο Σταδίων (Two Stage Least Squares; 2SLS) και της Μέγιστης Πιθανοφάνειας (Maximum Likelihood; ML), ενώ στην συνέχεια μελετώνται η συνέπεια και η ασυμπτωτική κατανομή τους. Επιπλέον, εισάγεται μια νέα μέθοδος για την εκτίμηση 2SLS που απλοποιεί σημαντικά την διαδικασία εξαγωγής υπολοίπων (residuals) από το πρώτο στάδιο. Επίσης, καθορίζονται οι συνθήκες που εξασφαλίζουν αναγνωρισιμότητα στα μοντέλα FP-VARMA. Ο προσδιορισμός της δομής των μοντέλων FP-VARMA πραγματοποιείται χάρη σε μια μεθοδολογία δύο σταδίων που βασίζεται στην Ανάλυση Κανονικοποιημένων Συσχετίσεων (Canonical Correlation Analysis; CCA) και κριτηρίων πληροφορίας, αποφεύγοντας έτσι την εκτεταμένη χρήση αλγορίθμων αναζήτησης. Η αποτίμηση της μοντελοποίησης με FP-VARMA, καθώς επίσης και η αποτελεσματικότητά τους έναντι των συμβατικών VARMA εξακριβώνεται μέσω προσομοιώσεων Monte Carlo. Το τέταρτο κεφάλαιο πραγματεύεται την ανίχνευση βλαβών σε συστήματα που παρουσιάζουν πολλαπλές συνθήκες λειτουργίας. Προτείνεται μια νέα μεθοδολογία που βασίζεται σε καινοτόμα μοντέλα Συναρτησιακής Σώρευσης και στον στατιστικό έλεγχο υποθέσεων. 
Παρουσιάζονται δυο εκδόσεις της μεθοδολογίας: η πρώτη βασίζεται στα μορφικά χαρακτηριστικά του μοντέλου ενώ η δεύτερη στις παραμέτρους του μοντέλου. Επιπλέον, χρησιμοποιούνται μέθοδοι συμπίεσης της πληροφορίας που περιέχουν τα μορφικά χαρακτηριστικά ή οι παράμετροι του μοντέλου μέσω της Ανάλυσης Κύριων Συνιστωσών (Principal Component Analysis; PCA) σε μια προσπάθεια απλοποίησης της διαδικασίας ανίχνευσης βλαβών. Η αποτελεσματικότητα της μεθοδολογίας επαληθεύεται πειραματικά σε μια "ευφυή" δοκό από σύνθετο υλικό, η οποία ταλαντώνεται σε διαφορετικές θερμοκρασίες. Στην παρούσα μορφή της η μεθοδολογία χρησιμοποιεί δεδομένα απόκρισης ταλάντωσης, ωστόσο δεδομένα διέγερσης-απόκρισης μπορούν να χρησιμοποιηθούν εφόσον κριθεί σκόπιμο. Η εξάρτηση των δυναμικών χαρακτηριστικών της δοκού με την θερμοκρασία περιγράφεται με τη χρήση μοντέλων FP-VAR, ενώ εισάγεται μια νέα μέθοδος καθορισμού της δομής του μοντέλου που αποφεύγει την χρήση αλγορίθμων αναζήτησης. Πλήθος πειραμάτων που καλύπτουν ένα ευρύ θερμοκρασιακό πεδίο, καθώς και συγκρίσεις με άλλες μεθοδολογίες ανίχνευσης βλαβών, πιστοποιούν την ικανότητα της προτεινόμενης μεθοδολογίας να διαγνώσει την κατάσταση της δοκού σε διάφορες θερμοκρασίες. Το πέμπτο κεφάλαιο ασχολείται με ειδικά θέματα μοντελοποίησης των "γενικευμένων" VARX . Ιδιαίτερη προσοχή δίνεται στην μελέτη Σωρευμένων VARX (P-VARX) και CCP-VARX μοντέλων. Σε αντιστοιχία με τα μοντέλα FP, αναπτύσσονται εκτιμήτριες LS και ML, ενώ στην συνέχεια μελετώνται οι ιδιότητές τους. Επιπλέον, καθορίζονται οι συνθήκες που εξασφαλίζουν την αναγνωρισιμότητα των μοντέλων P-VARX και CCP-VARX. Μελετώνται επίσης και οι σχέσεις που συνδέουν τις δομές των μοντέλων P-VARX και CCP-VARX με τα FP-VARX ως προς την παραμετροποίησή τους και την ακρίβεια που επιτυγχάνουν. Επιπλέον, μελετάται και η σχέση των παραπάνω μοντέλων με τα συμβατικά VARX. 
The assessment of the generalized VARX models, with respect to the number of estimated parameters and the accuracy they achieve, is carried out through Monte Carlo simulations.
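The functionally pooled (FP) models above reduce, in the simplest case, to a single pooled least-squares problem: response records from several operating temperatures are stacked, and each AR coefficient is projected on a functional basis in the temperature. A minimal scalar sketch under these assumptions (polynomial basis, AR rather than full VARMA dynamics; names and choices are illustrative, not the thesis implementation):

```python
import numpy as np

def fit_fp_ar(signals, temps, p=2, q=2):
    """Pooled least-squares fit of a functionally pooled AR(p) model
    y[t] + sum_i a_i(T) * y[t-i] = e[t], where each coefficient a_i(T)
    is projected on a polynomial basis of degree q in the temperature T.
    Illustrative scalar sketch; the thesis treats the vector (VAR) case."""
    rows, targets = [], []
    for T, y in zip(temps, signals):
        basis = [T ** j for j in range(q + 1)]   # functional basis [1, T, T^2, ...]
        for t in range(p, len(y)):
            # regressor couples lagged samples with the temperature basis
            rows.append([-y[t - i] * b for i in range(1, p + 1) for b in basis])
            targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta.reshape(p, q + 1)               # a_i(T) = sum_j theta[i, j] * T**j
```

The 2SLS and ML estimators of the FP-VARMA case add moving-average terms and first-stage residual extraction, but the pooled regression above is the structural core.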
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Clausner, André. "Möglichkeiten zur Steuerung von Trust-Region Verfahren im Rahmen der Parameteridentifikation". Thesis, 2006. https://monarch.qucosa.de/id/qucosa%3A19909.

Texto completo da fonte
Resumo:
The simulation of technical processes requires a sufficiently accurate description of the material behavior. The phenomenological approaches frequently used for this purpose, such as the HILL yield condition in the present case, contain material-specific parameters that are not directly measurable. These material parameters are usually identified by minimizing a least-squares functional containing the differences between measured values and the corresponding numerically computed comparison values. Trust-region methods have proven well suited to solving this minimization problem. The task is to investigate the various possibilities for controlling a trust-region method with regard to their suitability for the identification problem at hand. To this end, least-squares problems and their solution methods are first surveyed. Trust-region methods are then examined in more detail, with the discussion restricted to methods using positive definite approximations of the Hessian matrix, i.e. the Levenberg-Marquardt methods. Such a Levenberg-Marquardt algorithm is then implemented in several variants and tested on the identification problem at hand.
The result is a well-suited combination of the various sub-algorithms of the Levenberg-Marquardt method with a high convergence rate for the problem at hand. Contents (chapter level): 1 Introduction; 2 Nonlinear least-squares problems; 3 Identification of the material parameters of the HILL yield condition for the plastic deformation of anisotropic materials; 4 Overview of monotonically decreasing optimization methods for nonlinear functions (line-search and trust-region ideas, search directions, model choices); 5 Trust-region methods (convergence, computation of the step, nearly exact solution and regularization, approximation schemes such as dogleg, two-dimensional subspace minimization, and Steihaug's approach); 6 Trust-region methods with a positive definite approximation of the Hessian: the Levenberg-Marquardt method; 7 Scaling of the target parameters; 8 Termination criteria for the optimization algorithms; 9 Tests of the Levenberg-Marquardt implementation and its modules, including the optimal configuration; 10 Summary; 11 Outlook; appendices on the implementation of the scaled Levenberg-Marquardt method, general background (total least squares, Lipschitz continuity, convergence rates, the Cauchy point), matrix fundamentals and factorizations (Cholesky, QR, SVD), and data and figures for the Levenberg-Marquardt method.
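The Levenberg-Marquardt scheme studied in this work can be condensed to a short loop: solve the regularized normal equations and adjust the damping parameter according to whether the step reduces the residual, mimicking a trust-region update. A minimal sketch, omitting the scaling, safeguarded λ-iteration, and step-acceptance refinements that the thesis modules implement (all names illustrative):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, tol=1e-10, max_iter=200):
    """Minimal Levenberg-Marquardt loop: solve (J^T J + lam I) s = -J^T r
    and adapt the damping parameter lam like a trust-region radius."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        g = J.T @ r                                  # gradient of 0.5 * ||r||^2
        if np.linalg.norm(g) < tol:
            break
        s = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
        if np.sum(residual(x + s) ** 2) < np.sum(r ** 2):
            x, lam = x + s, lam * 0.3               # step accepted: relax damping
        else:
            lam *= 2.0                              # step rejected: increase damping
    return x
```

For the material-parameter identification, `residual` would wrap the differences between measurements and the numerically computed comparison values; here it is left abstract.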
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Lockett, Alan Justin. "General-purpose optimization through information maximization". Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5459.

Texto completo da fonte
Resumo:
The primary goal of artificial intelligence research is to develop a machine capable of learning to solve disparate real-world tasks autonomously, without relying on specialized problem-specific inputs. This dissertation suggests that such machines are realistic: If No Free Lunch theorems were to apply to all real-world problems, then the world would be utterly unpredictable. In response, the dissertation proposes the information-maximization principle, which claims that the optimal optimization methods make the best use of the information available to them. This principle results in a new algorithm, evolutionary annealing, which is shown to perform well especially in challenging problems with irregular structure.
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Rocha, I. "Model-based strategies for computer-aided operation of recombinant E. coli fermentation". Doctoral thesis, 2003. http://hdl.handle.net/1822/1269.

Texto completo da fonte
Resumo:
Doctoral thesis in Chemical and Biological Engineering
The main objectives of this thesis were the development of model-based strategies for improving the performance of a high-cell-density recombinant Escherichia coli fed-batch fermentation. The construction of a mathematical model framework and the derivation of optimal and adaptive control laws were used to accomplish these tasks. An on-line data acquisition system was also developed for accurate characterization of the process and for implementation of the control algorithms. The mathematical model of the process is composed of mass balance equations for the most relevant state variables. Kinetic equations are based on the three possible metabolic pathways of the microorganism: glucose oxidation, fermentation of glucose, and acetate oxidation. A genetic algorithm was used to derive the kinetic structure and to estimate both yield and kinetic coefficients of the model, minimizing the normalized quadratic differences between simulated and measured values of the state variables. After parameter estimation, a sensitivity function analysis was applied to evaluate the influence of the various parameters on model behavior. The sensitivity functions quantified how strongly the state variables respond to variations in each model parameter; the essential parameters were thus selected, and the model could be rewritten in a simplified version that still described the experimental data accurately. A system for the on-line monitoring of the major state variables was also developed. Glucose and acetate concentrations were measured with a purpose-built Flow Injection Analysis system, while the carbon dioxide and oxygen transfer rates were calculated from exhaust gas analysis. The weight of the fermentation culture was continuously assessed with a balance, allowing the use of more precise mass-based concentrations, while environmental variables such as pH, dissolved oxygen, and temperature were controlled and logged via a Digital Control Unit.
The graphical programming environment LabVIEW was used to acquire and integrate these variables in a supervisory computer, enabling integrated monitoring and control of the process. A model-based adaptive linearizing control law was derived for the regulation of the acetate concentration during fermentation. The non-linear model was transformed so that the control loop exhibits linear behavior when the non-linear control is applied. The control law was implemented through a C script embedded in the supervisory LabVIEW program. Finally, two optimization techniques for maximizing the final biomass concentration were compared: a first-order gradient method and a stochastic method based on the biological principle of natural evolution, a genetic algorithm. The former proved less efficient with respect to the computed maximum and depended on good initial values.
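A genetic algorithm of the kind used here for kinetic-parameter estimation maintains a population of candidate parameter vectors and evolves it by selection, crossover, and mutation against the least-squares cost. A toy real-coded sketch (operator choices and names are illustrative, not the thesis implementation):

```python
import numpy as np

def ga_fit(cost, bounds, pop=40, gens=60, seed=0):
    """Tiny real-coded genetic algorithm (binary tournament selection,
    blend crossover, Gaussian mutation, elitism) minimizing `cost` over
    the given box bounds. Illustrative stand-in for a full GA toolbox."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, (pop, lo.size))
    for _ in range(gens):
        f = np.array([cost(p) for p in P])
        best = P[np.argmin(f)].copy()
        # binary tournament: the fitter of two random individuals survives
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
        # blend crossover between consecutive parents
        a = rng.uniform(size=(pop, 1))
        P = a * parents + (1 - a) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, clipped back into the bounds
        P = np.clip(P + rng.normal(0, 0.05 * (hi - lo), P.shape), lo, hi)
        P[0] = best                      # elitism keeps the incumbent
    f = np.array([cost(p) for p in P])
    return P[np.argmin(f)]
```

In the thesis setting, `cost` would simulate the fed-batch model and return the normalized quadratic differences between simulated and measured state variables.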
Fundação para a Ciência e a Tecnologia (FCT) – PRAXIS XXI/16961/98.
União Europeia - Fundo Social Europeu (FSE) – III Quadro Comunitário de Apoio (QCA III).
Fundação Calouste Gulbenkian (FCQ) - Educação e Bolsas.
Agência de Inovação (ADI) - PROTEXPRESS.
Estilos ABNT, Harvard, Vancouver, APA, etc.
