
Theses on the topic "Surrogate dynamics"


Consult the top 50 theses for your research on the topic "Surrogate dynamics".


1

Koch, Christiane. "Quantum dissipative dynamics with a surrogate Hamiltonian". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2002. http://dx.doi.org/10.18452/14816.

Full text
Abstract
This thesis investigates condensed-phase quantum systems which interact with their environment and which are subject to ultrashort laser pulses. For such systems the timescales of the processes involved cannot be separated, and standard approaches to treating open quantum systems fail. The Surrogate Hamiltonian method represents one example of a number of new approaches to quantum dissipative dynamics. Its further development and its application to phenomena under current experimental investigation are presented. The individual dissipative processes are classified and discussed in the first part of this thesis. In particular, a model of dephasing is introduced into the Surrogate Hamiltonian method. This is of importance for future work in fields such as coherent control and quantum computing. In this regard, it is a great advantage of the Surrogate Hamiltonian over other available methods that it relies on a spin bath, i.e. a fully quantum mechanical description of the environment. The Surrogate Hamiltonian method is then applied to a standard model of charge transfer in the condensed phase: two nonadiabatically coupled harmonic oscillators immersed in a bath. This model is still an oversimplification of, for example, a molecule in solution, but it serves as a testing ground for the theoretical description of a prototypical ultrafast pump-probe experiment. All qualitative features of such an experiment are reproduced, and shortcomings of previous treatments are identified. Ultrafast experiments monitor reaction dynamics on a femtosecond timescale. This is captured particularly well by the Surrogate Hamiltonian, a method based on a time-dependent picture. Combining the numerical solution of the time-dependent Schrödinger equation with the phase-space visualization given by the Wigner function makes it possible to follow the sequence of events in a charge transfer cycle step by step in a very intuitive way. The utility of the Surrogate Hamiltonian is further enhanced by the incorporation of the Filter Diagonalization method. This makes it possible to obtain frequency-domain results from dynamics that can be converged within the Surrogate Hamiltonian approach only for comparatively short times. The second part of this thesis is concerned with the theoretical treatment of laser-induced desorption of small molecules from oxide surfaces. This is an example in which all aspects of the problem can be described with the same level of rigor, i.e. ab initio potential energy surfaces are combined with a microscopic model for the excitation and relaxation processes. The model of the interaction between the excited adsorbate-substrate complex and substrate electron-hole pairs relies on a simplified description of the electron-hole pairs as a bath of dipoles, and on a dipole-dipole interaction between system and bath. All parameters are estimated from electronic structure calculations. The desorption probabilities and desorption velocities obtained are simultaneously in the range found experimentally. The Surrogate Hamiltonian approach therefore allows, for the first time, a complete description of the photodesorption dynamics on an ab initio basis.
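The spin-bath construction mentioned in the abstract lends itself to a small illustration: a two-level system coupled to a handful of bath spins, with the composite Hamiltonian diagonalized exactly. The following is a minimal Python/NumPy sketch of the generic spin-bath idea, with all energies and couplings hypothetical; it is not the thesis's actual Surrogate Hamiltonian.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def kron_all(ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def embed(op, site, n_sites):
    # Place a single-spin operator at `site`, identity elsewhere.
    return kron_all([op if i == site else I2 for i in range(n_sites)])

def surrogate_hamiltonian(eps_sys, bath_eps, couplings):
    """H = eps_sys*Sz + sum_k (eps_k*sz_k + g_k*Sx*sx_k); site 0 is the system."""
    n_sites = len(bath_eps) + 1
    H = eps_sys * embed(sz, 0, n_sites)
    for k, (ek, gk) in enumerate(zip(bath_eps, couplings), start=1):
        H = H + ek * embed(sz, k, n_sites)
        H = H + gk * (embed(sx, 0, n_sites) @ embed(sx, k, n_sites))
    return H

# Three bath spins with hypothetical energies and couplings.
H = surrogate_hamiltonian(1.0, [0.8, 1.1, 1.3], [0.10, 0.15, 0.20])
w, V = np.linalg.eigh(H)

# Unitary evolution of a product initial state to time t = 2.0 by diagonalization.
psi0 = np.zeros(16, dtype=complex)
psi0[0] = 1.0
psi_t = V @ (np.exp(-1j * w * 2.0) * (V.conj().T @ psi0))
```

Because the bath is a finite set of spins, the whole system-plus-bath state stays fully quantum mechanical, which is the key feature the abstract emphasizes.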
2

Hibbs, Ryan E. "Conformational dynamics of the acetylcholine binding protein, a Nicotinic receptor surrogate". Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3237010.

Full text
Abstract
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed December 8, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references.
3

Conradie, Tanja. "Modelling of nonlinear dynamic systems : using surrogate data methods". Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51834.

Full text
Abstract
Thesis (MSc)--Stellenbosch University, 2000.
ENGLISH ABSTRACT: This study examined nonlinear modelling techniques as applied to dynamic systems, paying specific attention to the Method of Surrogate Data and its possibilities. Within the field of nonlinear modelling, we examined the following areas of study: attractor reconstruction, general model building techniques, cost functions, description length, and a specific modelling methodology. The Method of Surrogate Data was initially applied in a more conventional application, i.e. testing a time series for nonlinear, dynamic structure. Thereafter, it was used in a less conventional application, i.e. testing the residual vectors of a nonlinear model for membership of identically and independently distributed (i.i.d.) noise. The importance of the initial surrogate analysis of a time series (determining whether the apparent structure of the time series is due to nonlinear, possibly chaotic behaviour) was illustrated. This study confirmed that omitting this crucial step could lead to a flawed conclusion. If evidence of nonlinear structure in the time series was identified, a radial basis model was constructed, using sophisticated software based on a specific modelling methodology. The model is built by an iterative algorithm using minimum description length as the stop criterion. The residual vectors of the models generated by the algorithm were tested for membership of the dynamic class described as i.i.d. noise. The results of this surrogate analysis illustrated that, as the model captures more of the underlying dynamics of the system (description length decreases), the residual vector comes to resemble i.i.d. noise. It also verified that the minimum description length criterion leads to models that capture the underlying dynamics of the time series, with the residual vector resembling i.i.d. noise. In the case of the "worst" model (largest description length), the residual vector could be distinguished from i.i.d. noise, confirming that it is not the "best" model. The residual vector of the "best" model (smallest description length) resembled i.i.d. noise, confirming that the minimum description length criterion selects a model that captures the underlying dynamics of the time series. These applications were illustrated through analysis and modelling of three time series: a time series generated by the Lorenz equations, a time series generated from an electroencephalographic (EEG) signal, and a series representing the percentage change in the daily closing price of the S&P500 index.
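The surrogate-data test described in the abstract can be illustrated in a few lines: compute a discriminating statistic on the original series and on phase-randomized surrogates that share its power spectrum, then check whether the original value is extreme. This is a hedged sketch; the statistic and surrogate type are common illustrative choices, not necessarily those of the thesis.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum as x but randomized phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0                      # keep the mean component
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def statistic(x):
    # A simple nonlinearity-sensitive statistic: time-reversal asymmetry.
    return np.mean((x[1:] - x[:-1]) ** 3)

rng = np.random.default_rng(0)
x = np.empty(1024)                       # toy nonlinear series: logistic map
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])

s0 = statistic(x)
surr_stats = [statistic(phase_randomized_surrogate(x, rng)) for _ in range(99)]
# If s0 falls outside the surrogate distribution, reject the null
# hypothesis that the series comes from a linear stochastic process.
rank = sum(s < s0 for s in surr_stats)
```

The same machinery can be pointed at model residuals instead of the raw series, which is the less conventional use the thesis describes: surrogates of the residual vector test whether it is distinguishable from i.i.d. noise.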
4

Millard, Daniel C. "Identification and control of neural circuit dynamics for natural and surrogate inputs in-vivo". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53405.

Full text
Abstract
A principal goal of neural engineering is to control the activation of neural circuits across space and time. The ability to control neural circuits with surrogate inputs is needed for the development of clinical neural prostheses and the experimental interrogation of connectivity between brain regions. Electrical stimulation provides a clinically viable method for activating neural tissue, and the emergence of optogenetic stimulation has redefined the limitations on stimulating neural tissue experimentally. However, it remains poorly understood how these tools activate complex neural circuits. The goal of this project was to gain a greater understanding of how to control the activity of neural circuits in-vivo using a combination of experimental and computational approaches. Voltage sensitive dye imaging was used to observe the spatiotemporal activity within the rodent somatosensory cortex in response to systematically varied patterns of sensory, electrical, and optogenetic stimulation. First, the cortical response to simple patterns of sensory and artificial stimuli was characterized and modeled, revealing distinct neural response properties due to the differing synchrony with which the neural circuit was engaged. Then, we specifically designed artificial stimuli to improve the functional relevance of the resulting downstream neural responses. Finally, through direct optogenetic modulation of thalamic state, we demonstrated control of the nonlinear propagation of neural activity within the thalamocortical circuit. The combined experimental and computational approach described in this thesis provides a comprehensive description of the nonlinear dynamics of the thalamocortical circuit in response to surrogate stimuli.
Together, the characterization, modeling, and overall control of downstream neural activity stands to inform the development of central nervous system sensory prostheses, and more generally provides the initial tools and framework for the control of neural activity in-vivo.
5

Segee, Molly Catherine. "Surrogate Models for Transonic Aerodynamics for Multidisciplinary Design Optimization". Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/71321.

Full text
Abstract
Multidisciplinary design optimization (MDO) requires many designs to be evaluated while searching for an optimum. As a result, the calculations done to evaluate the designs must be quick and simple to have a reasonable turn-around time. This makes aerodynamic calculations in the transonic regime difficult. Running computational fluid dynamics (CFD) calculations within the MDO code would be too computationally expensive. Instead, CFD is used outside the MDO to find two-dimensional aerodynamic properties of a chosen airfoil shape, BACJ, at a number of points over a range of thickness-to-chord ratios, free-stream Mach numbers, and lift coefficients. These points are used to generate surrogate models which can be used for the two-dimensional aerodynamic calculations required by the MDO computational design environment. Strip theory is used to relate these two-dimensional results to the three-dimensional wing. Models are developed for the center of pressure location, the lift curve slope, the wave drag, and the maximum allowable lift coefficient before buffet. These models have good agreement with the original CFD results for the airfoil. The models are integrated into the aerodynamic and aeroelastic sections of the MDO code.
Master of Science
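The surrogate-building step described above (sampling CFD results over ranges of thickness-to-chord ratio, Mach number, and lift coefficient, then fitting a fast model) can be sketched with a simple quadratic response surface. The toy "CFD" function and all numbers are hypothetical stand-ins; the thesis's actual surrogate forms are not reproduced here.

```python
import numpy as np

def design_matrix(X):
    """Full quadratic basis in (t/c, Mach, CL)."""
    t, m, c = X.T
    one = np.ones(len(X))
    return np.column_stack([one, t, m, c, t * t, m * m, c * c, t * m, t * c, m * c])

rng = np.random.default_rng(3)
# Pretend these samples came from 2-D CFD runs over the design ranges.
X = np.column_stack([rng.uniform(0.08, 0.14, 60),    # thickness-to-chord ratio
                     rng.uniform(0.70, 0.85, 60),    # free-stream Mach number
                     rng.uniform(0.30, 0.70, 60)])   # lift coefficient
cfd = lambda X: 0.02 + 0.5 * (X[:, 1] - 0.7) ** 2 + 0.1 * X[:, 0] + 0.05 * X[:, 2]
y = cfd(X)                                           # toy "wave drag" values

coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
predict = lambda Xs: design_matrix(Xs) @ coef        # cheap enough for MDO loops
```

Once fitted, `predict` replaces the CFD call inside the optimization loop, which is the point of the approach: the expensive evaluations happen once, outside the MDO code.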
6

Minsavage, Kaitlyn Emily. "Neural Networks as Surrogates for Computational Fluid Dynamics Predictions of Hypersonic Flows". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1610017352981371.

Full text
7

Lagerstrom, Tiffany. "All in the Family: The Role of Sibling Relationships as Surrogate Attachment Figures". Scholarship @ Claremont, 2018. http://scholarship.claremont.edu/scripps_theses/1138.

Full text
Abstract
While several studies have analyzed the impact of mother-child attachment security on the child's emotion regulation abilities, few studies have proposed interventions to help children improve emotion regulation abilities in the presence of an insecure mother-child attachment. The current study extends previous findings about the influence of mother-child attachment on the child's emotion regulation abilities and contributes new research in determining whether an older sibling can moderate this effect. This study predicts that across assessment points (18 months, 5 years, 10 years, and 15 years), the quality of mother-child attachment security will influence the child's performance on an emotion regulation task, such that securely attached children will demonstrate the most persistence and least distress, children with Anxious-Avoidant attachment will demonstrate the least persistence, and children with Anxious-Ambivalent attachment will demonstrate the most distress. If, at any point, the child develops an insecure relationship with the mother and a secure relationship with the older sibling, the child's persistence is expected to increase and the child's distress is expected to decrease. In this way, the older sibling will serve as a surrogate attachment figure. These research findings have important implications for parenting behaviors as well as clinical practices.
8

Brouwer, Kirk Rowse. "Enhancement of CFD Surrogate Approaches for Thermo-Structural Response Prediction in High-Speed Flows". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1543340520905498.

Full text
9

Sadet, Jérémy. "Surrogate models for the analysis of friction induced vibrations under uncertainty". Electronic Thesis or Diss., Valenciennes, Université Polytechnique Hauts-de-France, 2022. http://www.theses.fr/2022UPHF0014.

Full text
Abstract
Automotive brake squeal is a noise disturbance that has attracted the interest of researchers and industry over the years. This elusive phenomenon, perceived by vehicle purchasers as an indicator of poor quality, imposes an increasingly significant cost on car manufacturers due to customer claims. It is therefore all the more important to propose and develop methods that can efficiently predict the occurrence of this noise disturbance through numerical simulation. This thesis pursues recent work that showed the clear benefit of integrating uncertainties into numerical squeal simulations. The objective is to propose a strategy for uncertainty propagation in squeal simulations while keeping the numerical cost acceptable (especially in pre-design phases). Several numerical methods are evaluated and improved to allow precise computations within computation times compatible with the constraints of industry. After positioning this thesis with respect to the progress of researchers working on the squeal problem, a new numerical method is proposed to improve the computation of the eigensolutions of a large quadratic eigenvalue problem. To reduce the numerical cost of such studies, three surrogate models (Gaussian process, deep Gaussian process, and deep neural network) are studied and compared to determine the optimal strategy in terms of methodology and model settings. The construction of the training set is a key aspect in ensuring the quality of these surrogate models' predictions. A new optimisation strategy, based on Bayesian optimisation, is proposed to efficiently target the samples of the training set, samples that are potentially expensive to compute numerically. These methods are then used to present a new uncertainty propagation technique relying on a fuzzy-set modelling.
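The Bayesian-optimisation idea for targeting training samples can be illustrated with a one-dimensional toy: a Gaussian-process model of an "expensive" objective plus an expected-improvement criterion that picks the next sample. This is a generic sketch under assumed settings, not the thesis's actual acquisition scheme.

```python
import numpy as np
from math import erf, sqrt, pi

def gp_posterior(X, y, Xs, length=0.2, noise=1e-6):
    """GP posterior mean and variance on Xs (RBF kernel, unit prior variance)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    """EI for minimisation: E[max(best - f, 0)] under the GP posterior."""
    sd = np.sqrt(var)
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.array([erf(zi / sqrt(2.0)) for zi in z]))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (best - mu) * Phi + sd * phi

f = lambda x: np.sin(3.0 * x) + 0.5 * x        # stand-in expensive objective
X = np.array([0.0, 0.5, 1.5, 2.0])             # initial design
y = f(X)
grid = np.linspace(0.0, 2.0, 400)
for _ in range(5):                              # add five targeted samples
    mu, var = gp_posterior(X, y, grid)
    x_new = grid[np.argmax(expected_improvement(mu, var, y.min()))]
    X = np.append(X, x_new)
    y = np.append(y, f(x_new))
```

The acquisition balances exploiting the current best region against exploring where the posterior variance is large, which is why it spends the expensive evaluations only where they are informative.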
10

Taheri, Mehdi. "Machine Learning from Computer Simulations with Applications in Rail Vehicle Dynamics and System Identification". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/81417.

Full text
Abstract
The application of stochastic modeling for learning the behavior of multibody dynamics models is investigated. The stochastic modeling technique is also known as Kriging or the random function approach. Post-processing data from a simulation run is used to train the stochastic model that estimates the relationship between model inputs, such as the suspension relative displacement and velocity, and the output, for example, the sum of suspension forces. The computational efficiency of Multibody Dynamics (MBD) models can be improved by replacing their computationally intensive subsystems with stochastic predictions. The stochastic modeling technique is able to learn the behavior of a physical system and integrate that behavior into MBD models, resulting in improved real-time simulations and reduced computational effort in models with repeated substructures (for example, modeling a train with a large number of rail vehicles). Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, various sampling plans are investigated, and a space-filling Latin Hypercube sampling plan based on the traveling salesman problem (TSP) is suggested for efficiently representing the entire parameter space. The simulation results confirm the expected increase in modeling efficiency, although further research is needed to improve the accuracy of the predictions. The prediction accuracy is expected to improve through a sampling strategy that considers the discrete nature of the training data and uses infill criteria that consider the shape of the output function and detect sample spaces with high prediction errors. It is recommended that future efforts quantify the computational efficiency of the proposed approach by overcoming the inefficiencies associated with transferring data between multiple software packages, which proved to be a limiting factor in this study. These limitations can be overcome by using the user subroutine functionality of SIMPACK and adding the stochastic modeling technique to its force library.
Ph. D.
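The Kriging / random-function approach and the Latin Hypercube sampling plan described above can be sketched as follows. The target function stands in for an expensive multibody subsystem, and the kernel, hyperparameters, and sample sizes are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """Space-filling plan: one point per stratum in each dimension."""
    strata = np.tile(np.arange(n), (dim, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n, dim))) / n
    return u

def rbf_kernel(A, B, length=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def kriging_predict(X, y, Xs, noise=1e-6):
    """Gaussian-process (Kriging) mean prediction at the points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    return rbf_kernel(Xs, X) @ np.linalg.solve(K, y)

rng = np.random.default_rng(1)
f = lambda X: np.sin(6.0 * X[:, 0]) + X[:, 1] ** 2   # stand-in for an expensive subsystem
X_train = latin_hypercube(40, 2, rng)
y_train = f(X_train)                                 # "expensive" evaluations

X_test = rng.random((200, 2))
pred = kriging_predict(X_train, y_train, X_test)     # cheap surrogate calls
rmse = np.sqrt(np.mean((pred - f(X_test)) ** 2))
```

In the thesis's setting the trained predictor would replace a suspension-force subsystem inside the MBD model, so that repeated substructures reuse the same cheap surrogate.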
11

Ghanipoor, Machiani Sahar. "Modeling Driver Behavior at Signalized Intersections: Decision Dynamics, Human Learning, and Safety Measures of Real-time Control Systems". Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/71798.

Full text
Abstract
Traffic conflicts associated with signalized intersections are among the major contributing factors to crash occurrence. Driver behavior plays an important role in the safety concerns related to signalized intersections. In this research effort, the dynamics of driver behavior in relation to the traffic conflicts occurring at the onset of yellow are investigated. The area ahead of an intersection in which drivers face a dilemma between passing through and stopping when the yellow light commences is called the Dilemma Zone (DZ). Several DZ-protection algorithms and advance signal settings have been developed to accommodate DZ-related safety concerns. The focus of this study is on drivers' decision dynamics, human learning, and choice behavior in the DZ, and on DZ-related safety measures. First, factors influencing drivers' decisions in the DZ were determined using a driver behavior survey. This information was applied to design an adaptive experiment in a driving simulator study. Scenarios in the experimental design were aimed at capturing drivers' learning process while experiencing safe and unsafe signal settings. The results of the experiment revealed that drivers do learn from some of their experience; however, this learning process led to a higher level of risk-aversion behavior. Therefore, DZ-protection algorithms, independent of their approach, need not be concerned about the effect of driver learning on their protection procedure. Next, the possibility of predicting drivers' decisions in different time frames using different datasets was examined. The results showed a promising prediction model if the data collection period is assumed to be 3 seconds after the onset of yellow. The prediction model serves advance signal protection algorithms in making more intelligent decisions. In the next step, a novel Surrogate Safety Number (SSN) was introduced based on the concept of time to collision. This measure is applicable to evaluating different DZ-protection algorithms regardless of their embedded methodology, and it has the potential to be used in developing new DZ-protection algorithms. Last, an agent-based human learning model was developed integrating machine learning and human learning techniques. An abstracted model of human memory and cognitive structure was used to model the agents' behavior and learning. The model was applied to the DZ decision-making process, and agents were trained using the driving simulator data. The human learning model produced lower errors and converged faster in mimicking drivers' behavior compared to a pure machine learning technique.
Ph. D.
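The time-to-collision concept underlying the proposed Surrogate Safety Number can be stated in a few lines. The SSN itself is the thesis's own construction; this sketch shows only the basic TTC computation, with hypothetical numbers.

```python
def time_to_collision(gap_m, v_follower, v_leader):
    """TTC in seconds; infinite when the follower is not closing the gap."""
    closing_speed = v_follower - v_leader  # m/s
    if closing_speed <= 0:
        return float("inf")
    return gap_m / closing_speed

# Hypothetical example: follower at 20 m/s, leader at 12 m/s, 40 m apart.
ttc = time_to_collision(40.0, 20.0, 12.0)  # 5.0 s
```

A small TTC flags an imminent rear-end conflict, which is why TTC-derived measures serve as crash surrogates when actual crashes are too rare to observe directly.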
12

Lebel, David. "Statistical inverse problem in nonlinear high-speed train dynamics". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC2189/document.

Full text
Abstract
The work presented here deals with the development of a health-state monitoring method for high-speed train suspensions using in-service measurements of the train dynamical response by embedded acceleration sensors. A rolling train is a dynamical system excited by the track-geometry irregularities. The suspension elements play a key role for the ride safety and comfort. The train dynamical response being dependent on the suspensions mechanical characteristics, information about the suspensions state can be inferred from acceleration measurements in the train by embedded sensors. This information about the actual suspensions state would allow for providing a more efficient train maintenance. Mathematically, the proposed monitoring solution consists in solving a statistical inverse problem. It is based on a train-dynamics computational model, and takes into account the model uncertainty and the measurement errors. A Bayesian calibration approach is adopted to identify the probability distribution of the mechanical parameters of the suspension elements from joint measurements of the system input (the track-geometry irregularities) and output (the train dynamical response).Classical Bayesian calibration implies the computation of the likelihood function using the stochastic model of the system output and experimental data. To cope with the fact that each run of the computational model is numerically expensive, and because of the functional nature of the system input and output, a novel Bayesian calibration method using a Gaussian-process surrogate model of the likelihood function is proposed. This thesis presents how such a random surrogate model can be used to estimate the probability distribution of the model parameters. The proposed method allows for taking into account the new type of uncertainty induced by the use of a surrogate model, which is necessary to correctly assess the calibration accuracy. 
The novel Bayesian calibration method has been tested on the railway application and has achieved conclusive results; validation was carried out through numerical experiments. In addition, the long-term evolution of the suspension mechanical parameters has been studied using actual measurements of the train dynamical response.
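The core idea of the abstract above, replacing the expensive likelihood with a Gaussian-process surrogate fitted to a handful of simulator runs, can be sketched in one dimension. Everything below is a hypothetical stand-in: `expensive_log_likelihood` plays the role of the train-dynamics simulator, the kernel hyperparameters are fixed rather than estimated, and a grid-based Bayes update replaces the thesis's full functional treatment.

```python
import numpy as np

def gp_fit_predict(X, y, Xs, ell=1.0, sf2=1.0, noise=1e-6):
    """GP regression with a squared-exponential kernel (hyperparameters fixed here)."""
    def k(A, B):
        return sf2 * np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)
    L = np.linalg.cholesky(k(X, X) + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = k(Xs, X) @ alpha
    v = np.linalg.solve(L, k(X, Xs))
    var = np.maximum(sf2 - np.sum(v**2, axis=0), 0.0)  # surrogate-induced uncertainty
    return mean, var

# Hypothetical stand-in: each call would be one run of the train-dynamics simulator.
def expensive_log_likelihood(theta):
    return -0.5 * (theta - 2.0) ** 2

X = np.linspace(0.0, 4.0, 9)                  # the few affordable simulator runs
y = expensive_log_likelihood(X)
grid = np.linspace(0.0, 4.0, 401)
mu, var = gp_fit_predict(X, y, grid)

prior = np.ones_like(grid)                    # flat prior on the suspension parameter
post = prior * np.exp(mu)                     # Bayes update via the surrogate log-likelihood
post /= post.sum() * (grid[1] - grid[0])      # normalize to a density
theta_map = grid[np.argmax(post)]
```

The predictive variance `var` is the extra uncertainty the surrogate introduces, which is exactly the quantity the thesis argues must be propagated into the calibration accuracy.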
Los estilos APA, Harvard, Vancouver, ISO, etc.
13

Mohammadian, Saeed. "Freeway traffic flow dynamics and safety: A behavioural continuum framework". Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/227209/1/Saeed_Mohammadian_Thesis.pdf.

Texto completo
Resumen
Congestion and rear-end crashes are two undesirable phenomena of freeway traffic flows, which are interrelated and highly affected by human psychological factors. Because congestion is an everyday problem while crashes are rare events, congestion management and crash-risk prevention strategies are often pursued in separate research directions. However, overwhelming evidence in recent decades has underscored the inter-relation between rear-end crashes and freeway traffic flow dynamics. This dissertation develops novel mathematical models of freeway traffic flow dynamics and safety that integrate the two into a unified framework. The outcomes of this PhD enable a move towards faster and safer roads.
Los estilos APA, Harvard, Vancouver, ISO, etc.
14

Bunnell, Spencer Reese. "Real Time Design Space Exploration of Static and Vibratory Structural Responses in Turbomachinery Through Surrogate Modeling with Principal Components". BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8451.

Texto completo
Resumen
Design space exploration (DSE) is used to improve and understand engineering designs. Such designs must meet objectives and structural requirements. Design improvement is non-trivial and requires new DSE methods. Turbomachinery manufacturers must continue to improve existing engines to keep up with global demand. Two challenges of turbomachinery DSE are: the time required to evaluate designs, and knowing which designs to evaluate. This research addressed these challenges by developing novel surrogate and principal component analysis (PCA) based DSE methods. Node and PCA-based surrogates were created to allow faster DSE of turbomachinery blades. The surrogates provided static stress estimation within 10% error. Surrogate error was related to the number of sampled finite element (FE) models used to train the surrogate and the variables used to change the designs. Surrogates were able to provide structural evaluations three to five orders of magnitude faster than FEA evaluations. The PCA-based surrogates were then used to create a PCA-based design workflow to help designers know which designs to evaluate. The workflow used either two-point correlation or stress and geometry coupling to relate the design variables to principal component (PC) scores. These scores were projections of the FE models onto the PCs obtained from PCA. Analysis showed that this workflow could be used in DSE to better explore and improve designs. The surrogate methods were then applied to vibratory stress. A computationally simplified analysis workflow was developed to allow for enough fluid and structural analyses to create a surrogate model. The simplified analysis workflow introduced 10% error but decreased the computational cost by 90%. The surrogate methods could not directly be applied to emulation of vibration due to the large spikes which occur near resonance. 
A novel, indirect emulation method was developed to better estimate vibratory responses. Surrogates were used to estimate the inputs to the vibratory-response calculation, and during DSE these estimates were used to compute the vibratory responses. This method reduced the error between the surrogate and FEA from 85% to 17%. Lastly, a PCA-based multi-fidelity surrogate method was developed, which assumed the PCs of the high and low fidelities were similar. The high-fidelity FE models had tens of thousands of nodes and the low-fidelity FE models had a few hundred nodes. The computational cost to create the surrogate was decreased by 75% for the same error; for the same computational cost, the error was reduced by 50%. Together, the methods developed in this research were shown to decrease the cost of evaluating the structural responses of turbomachinery blade designs. They also provide a method to help the designer understand which designs to explore. This research paves the way for better, and more thoroughly understood, turbomachinery blade designs.
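The PCA-surrogate idea described above, projecting sampled FE stress fields onto principal components and regressing the PC scores on the design variables, can be sketched with plain NumPy. The `fe_stress` function is a hypothetical cheap stand-in for a finite element run, and the linear score regression is a simplification of the surrogates used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for sampled FE runs: each row is a nodal stress field
# that depends smoothly on two design variables (e.g. blade thickness, chord).
def fe_stress(x, n_nodes=200):
    nodes = np.linspace(0.0, 1.0, n_nodes)
    return x[0] * np.sin(np.pi * nodes) + x[1] * nodes**2

X = rng.uniform(0.5, 1.5, size=(30, 2))        # sampled designs
S = np.array([fe_stress(x) for x in X])        # stress snapshots (30 x 200)

# PCA of the snapshot matrix: the PCs capture the dominant stress patterns.
mean = S.mean(axis=0)
U, sig, Vt = np.linalg.svd(S - mean, full_matrices=False)
k = 2                                          # retained principal components
scores = (S - mean) @ Vt[:k].T                 # project snapshots onto the PCs

# Surrogate: least-squares map from design variables to PC scores.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)

def predict_stress(x):
    z = np.append(x, 1.0) @ coef               # predicted PC scores for a new design
    return mean + z @ Vt[:k]                   # back-project to the full nodal field

x_new = np.array([1.2, 0.8])
err = np.abs(predict_stress(x_new) - fe_stress(x_new)).max()
```

Because each prediction is a few matrix products instead of an FE solve, evaluating thousands of candidate designs during DSE becomes cheap, which is the speed-up the abstract reports.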
Los estilos APA, Harvard, Vancouver, ISO, etc.
15

Volpi, Silvia. "High-fidelity multidisciplinary design optimization of a 3D composite material hydrofoil". Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6325.

Texto completo
Resumen
Multidisciplinary design optimization (MDO) refers to the process of designing systems characterized by the interaction of multiple interconnected disciplines. High-fidelity MDO usually requires large computational resources due to the computational cost of achieving multidisciplinary consistent solutions by coupling high-fidelity physics-based solvers. Gradient-based minimization algorithms are generally applied to find local minima, due to their efficiency in solving problems with a large number of design variables. This represents a limitation to performing global MDO and integrating black-box type analysis tools, usually not providing gradient information. The latter issues generally inhibit a wide use of MDO in complex industrial applications. An architecture named multi-criterion adaptive sampling MDO (MCAS-MDO) is presented in the current research for complex simulation-based applications. This research aims at building a global derivative-free optimization tool able to employ high-fidelity/expensive black-box solvers for the analysis of the disciplines. MCAS-MDO is a surrogate-based architecture featuring a variable level of coupling among the disciplines and is driven by a multi-criterion adaptive sampling (MCAS) assessing coupling and sampling uncertainties. MCAS uses the dynamic radial basis function surrogate model to identify the optimal solution and explore the design space through parallel infill of new solutions. The MCAS-MDO is tested versus a global derivative-free multidisciplinary feasible (MDF) approach, which solves fully-coupled multidisciplinary analyses, for two analytical test problems. Evaluation metrics include number of function evaluations required to achieve the optimal solution and sample distribution. The MCAS-MDO outperforms the MDF showing a faster convergence by clustering refined function evaluations in the optimum region. 
The architecture is applied to a steady fluid-structure interaction (FSI) problem, namely the design of a tapered three-dimensional carbon fiber-reinforced plastic hydrofoil for minimum drag. The objective is the design of shape and composite material layout subject to hydrodynamic, structural, and geometrical constraints. Experimental data are available for the original configuration of the hydrofoil and allow validating the FSI analysis, which is performed by coupling computational fluid dynamics, solving the Reynolds averaged Navier-Stokes equations, and finite elements, solving the structural equation of elastic motion. Hydrofoil forces, tip displacement, and tip twist are evaluated for several materials providing qualitative agreement with the experiments and confirming the need for the two-way versus one-way coupling approach in case of significantly compliant structures. The free-form deformation method is applied to generate shape modifications of the hydrofoil geometry. To reduce the global computational expense of the optimization, a design space assessment and dimensionality reduction based on the Karhunen–Loève expansion (KLE) is performed off-line, i.e. without the need for high-fidelity simulations. It provides a selection of design variables for the problem at hand through basis rotation and re-parametrization. By using the KLE, an efficient design space is identified for the current problem and the number of design variables is reduced by 92%. A sensitivity analysis is performed prior to the optimization to assess the variability associated with the shape design variables and the composite material design variable, i.e. the fiber orientation. These simulations are used to initialize the surrogate model for the optimization, which is carried out for two models: one in aluminum and one in composite material. 
The optimized designs are assessed by comparison with the original models through evaluation of the flow field, pressure distribution on the body, and deformation under the hydrodynamic load. The drag of the aluminum and composite material hydrofoils is reduced by 4 and 11%, respectively, increasing the hydrodynamic efficiency by 4 and 7%. The optimized designs are obtained by evaluating approximately 100 designs. The quality of the results indicates that global derivative-free MDO of complex engineering applications using expensive black-box solvers can be achieved at a feasible computational cost by minimizing the design space dimensionality and performing an intelligent sampling to train the surrogate-based optimization.
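A much-reduced sketch of the surrogate-based optimization loop underlying MCAS-MDO: fit a radial basis function interpolant to a few expensive evaluations, then infill new samples where the surrogate predicts a minimum. The one-variable `expensive` function is a hypothetical stand-in for a coupled FSI evaluation, and the single infill criterion omits the multi-criterion uncertainty assessment of the actual architecture.

```python
import numpy as np

def rbf_fit(X, y, eps=20.0):
    """Gaussian radial basis function interpolant (a simplified stand-in for
    the dynamic RBF surrogate driving MCAS-MDO)."""
    r2 = (X[:, None] - X[None, :]) ** 2
    w = np.linalg.solve(np.exp(-eps * r2), y)
    return lambda xs: np.exp(-eps * (xs[:, None] - X[None, :]) ** 2) @ w

def expensive(x):                      # hypothetical stand-in for an FSI drag evaluation
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

X = np.linspace(0.0, 1.0, 5)           # initial samples of the design variable
y = expensive(X)
cand = np.linspace(0.0, 1.0, 201)      # candidate infill locations
for _ in range(10):                    # adaptive infill loop
    s = rbf_fit(X, y)
    x_new = cand[np.argmin(s(cand))]   # refine where the surrogate predicts a minimum
    if np.min(np.abs(X - x_new)) < 1e-9:
        break                          # already sampled: surrogate minimum has settled
    X, y = np.append(X, x_new), np.append(y, expensive(x_new))

x_best = X[np.argmin(y)]               # best design among the expensive evaluations
```

The effect is the one the abstract describes: refined evaluations cluster in the optimum region instead of being spread uniformly over the design space.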
Los estilos APA, Harvard, Vancouver, ISO, etc.
16

Stucki, Chad Lamar. "Aerodynamic Design Optimization of a Locomotive Nose Fairing for Reducing Drag". BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7478.

Texto completo
Resumen
Rising fuel cost has motivated increased fuel efficiency for freight trains. At cruising speed, the largest contributing factor to the fuel consumption is aerodynamic drag. As a result of stagnation and flow separation on and around lead and trailing cars, the first and last railcars experience greater drag than intermediate cars. Accordingly, this work focused on reducing drag on lead locomotives by designing and optimizing an add-on nose fairing that is feasible for industrial operation. The fairing shape design was performed via computational fluid dynamics (CFD) software. The simulations consisted of two in-line freight locomotives, a stretch of rails on a raised subgrade, a computational domain, and a unique fairing geometry that was attached to the lead locomotive in each non-baseline case. Relative motion was simulated by fixing the train and translating the rails, subgrade, and ground at a constant velocity. An equivalent uniform inlet velocity was applied at zero-degree yaw to simulate relative motion between the air and the train. Five fairing families, Fairing Families A-E (FFA-FFE), are presented in this thesis. Multidimensional regressions are created for each family to approximate drag as a function of the design variables. Thus, railroad companies may choose an alternative fairing if the recommended fairing does not meet their needs and still have a performance estimate. The regression for FFE is used as a surrogate model in a surrogate-based optimization. Results from a wind tunnel test and from CFD are reported on an FFE geometry to validate the CFD model. The wind tunnel test predicts a nominal drag reduction of 16%, and the CFD model predicts a reduction of 17%. A qualitative analysis is performed on the simulations containing the baseline locomotive, the optimal fairings from FFA-FFC, and the hybrid child and parent geometries from FFA & FFC. The analysis reveals that optimal performance is achieved for a narrow geometry from FFC because suction behind the fairing is greatly reduced. Similarly, the analysis reveals that concave geometries boost the flow over the top leading edge of the locomotive, thus eliminating a vortex upstream of the windshields. As a result, concave geometries yield greater reductions in drag. The design variable definitions for each family were strategically selected to improve manufacturability, operational safety, and aerodynamic performance relative to the previous families. As a result, the optimal geometry from FFE is believed to most completely satisfy the constraints of the design problem and should be given the most consideration for application in the railroad industry. The CFD solution for this particular geometry suggests a nominal drag reduction of 17% on the lead locomotive in an industrial freight train.
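The regression-as-surrogate workflow described above can be illustrated with a quadratic response surface: sample designs, fit a multidimensional regression of drag on the design variables by least squares, then search the cheap regression instead of running CFD. The `cfd_drag` function and both design variables below are hypothetical stand-ins, not the thesis's actual fairing parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for CFD drag counts as a function of two fairing
# design variables (e.g. normalized fairing length and nose angle).
def cfd_drag(x):
    return 100 - 16 * x[0] + 10 * x[0] ** 2 + 6 * (x[1] - 0.4) ** 2

X = rng.uniform(0.0, 1.0, size=(40, 2))          # sampled fairing designs
y = np.array([cfd_drag(x) for x in X])           # one "CFD run" per sample

# Quadratic response surface (multidimensional regression), fit by least squares.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Use the regression as a surrogate: evaluate it densely instead of running CFD.
g1, g2 = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
g = np.stack([g1, g2], axis=-1).reshape(-1, 2)
pred = features(g) @ beta
x_opt = g[np.argmin(pred)]                       # design the surrogate predicts is best
```

This mirrors the abstract's point that the published regressions let a railroad estimate the performance of an alternative fairing without new CFD runs.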
Los estilos APA, Harvard, Vancouver, ISO, etc.
17

Crowell, Andrew R. "Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows". The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366204830.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
18

Abraham, Yonas Beyene. "Optimization with surrogates for electronic-structure calculations /". Electronic thesis, 2004. http://etd.wfu.edu/theses/available/etd-05102004-012537/.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
19

Soltanipour, Lazarjan Milad. "Dynamic behaviour of brain and surrogate materials under ballistic impact". Thesis, University of Canterbury. Mechanical Engineering, 2015. http://hdl.handle.net/10092/10466.

Texto completo
Resumen
In the last several decades the number of fatalities from criminally inflicted cranial gunshot wounds has increased (Aarabi et al.; Jena et al., 2014; Mota et al., 2003). Back-spattered bloodstain patterns are often important in investigations of cranial gunshot fatalities, particularly when there is doubt whether the death was suicide or homicide. Back-spatter is the projection of blood and tissue back toward the firearm. However, the mechanism by which back-spatter is created is not well understood. Several hypotheses describe its formation, but the internal mechanics are difficult to study in animal experiments because the head is opaque and sample properties vary from animal to animal. Performing ballistic experiments on human cadavers is rarely possible for ethical reasons. An alternative is to build a realistic physical 3D model of the human head, which can be used for reconstruction of crime scenes and for BPA training purposes. This requires a simulant material for each layer of the human head. In order to build a realistic model of the human head, it is necessary to understand the contribution of each layer of the head to the generation of back-spatter. Simulant materials offer the possibility of safe, well-controlled experiments. Suitable simulants must be biologically inert, stable over some reasonable shelf-life, and must respond to ballistic penetration in the same way as the corresponding human tissues. Traditionally, 10-20% (w/w) gelatine has been used as a simulant for human soft tissues in ballistic experiments. However, 10-20% gelatine has never been validated as a brain simulant. Moreover, due to the viscoelastic nature of the brain, it is not possible to find the exact mechanical properties of the brain at ballistic strain rates. 
Therefore, in this study several experiments were designed to obtain qualitative and quantitative data using high-speed cameras to compare different concentrations of gelatine, and a new composite material, with bovine and ovine brains. Factors such as the form of fragmentation, the velocity of the ejected material, expansion rate, stopping distance, absorption of kinetic energy, and the suction and ejection of air from the wound cavity and its involvement in the generation of back-spatter were investigated. Furthermore, a new composite material was developed in this study, which is able to create a more realistic form of fragmentation and expansion rate than any of the gelatine concentrations. The results of this study suggested that none of the gelatine concentrations used here were capable of recreating the form of damage observed in bovine and ovine brains. The elastic response of the brain tissue is much lower than that observed in gelatine samples. None of the simulants reproduced the stopping distance or form of the damage seen in bovine brain. Suction and ejection of air resulting from the creation of the temporary cavity are directly related to the elasticity of the material. For example, reducing the gelatine concentration increases the velocity of the air drawn into the cavity, whereas the reverse is seen for the ejection of air. This study showed that the elastic response of the brain tissue alone was not enough to eject brain and biological material out of the cranium. However, the intracranial pressure rises as the projectile passes through the head, and this pressure has the potential to eject brain and biological material backward and create back-spatter. Finally, the results of this study suggested that for each specific type of experiment, a unique simulant must be designed to meet the requirements of that particular experiment.
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

Zhao, Liang. "Reliability-based design optimization using surrogate model with assessment of confidence level". Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1194.

Texto completo
Resumen
The objective of this study is to develop an accurate surrogate modeling method for constructing a surrogate model that represents the performance measures of the compute-intensive simulation model in reliability-based design optimization (RBDO). In addition, an assessment method for the confidence level of the surrogate model, and a conservative surrogate model that accounts for the uncertainty of the prediction on the untested design domain when the number of samples is limited, are developed and integrated into the RBDO process to ensure confidence in satisfying the probabilistic constraints at the optimal design. The effort involves: (1) developing a new surrogate modeling method that can outperform the existing surrogate modeling methods in terms of accuracy for reliability analysis in RBDO; (2) developing a sampling method that efficiently and effectively inserts samples into the design domain for accurate surrogate modeling; (3) generating a surrogate model to approximate the probabilistic constraint, and the sensitivity of the probabilistic constraint with respect to the design variables, in most-probable-point-based RBDO; (4) using the sampling method with the surrogate model to approximate the performance function in sampling-based RBDO; (5) generating a conservative surrogate model to conservatively approximate the performance function in sampling-based RBDO and assure that the obtained optimum satisfies the probabilistic constraints. In applying RBDO to a large-scale complex engineering application, the surrogate model is commonly used to represent the compute-intensive simulation model of the performance function. However, the accuracy of the surrogate model remains challenging for highly nonlinear, high-dimensional applications. In this work, a new method, the Dynamic Kriging (DKG) method, is proposed to construct the surrogate model accurately. 
In this DKG method, a generalized pattern search algorithm is used to find the accurate optimum for the correlation parameter, and the optimal mean structure is set using basis functions that are selected by a genetic algorithm from the candidate basis functions based on a new accuracy criterion. In addition, a sequential sampling strategy based on the confidence interval of the surrogate model from the DKG method is proposed. By combining the sampling method with the DKG method, efficiency and accuracy can be rapidly achieved. Using the accurate surrogate model, most-probable-point (MPP)-based RBDO and sampling-based RBDO can be carried out. In applying the surrogate models to MPP-based RBDO and sampling-based RBDO, several efficiency strategies are proposed: (1) using a local window for surrogate modeling; (2) adaptive window size for different design candidates; (3) reusing samples in the local window; (4) using violated constraints for the surrogate model accuracy check; and (5) an adaptive initial point for correlation parameter estimation. To assure the accuracy of the surrogate model when the number of samples is limited, and to assure that the obtained optimum design satisfies the probabilistic constraints, a conservative surrogate model using the weighted Kriging variance is developed and implemented for sampling-based RBDO.
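A minimal sketch of the Kriging-plus-sequential-sampling idea behind the abstract: an ordinary-Kriging-style predictor with a Gaussian correlation model, with new samples inserted where the predictive variance (the confidence interval width) is largest. The correlation parameter is fixed here, whereas DKG optimizes it by pattern search and selects the mean-structure basis functions with a genetic algorithm; `f` is a hypothetical performance function, not one from the thesis.

```python
import numpy as np

def kriging(X, y, theta=10.0):
    """Ordinary-Kriging-style predictor with a Gaussian correlation model
    (a much-reduced sketch of the Dynamic Kriging idea)."""
    R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2) + 1e-10 * np.eye(len(X))
    Ri = np.linalg.inv(R)
    ones = np.ones(len(X))
    mu = (ones @ Ri @ y) / (ones @ Ri @ ones)        # constant mean structure
    def predict(xs):
        r = np.exp(-theta * (xs[:, None] - X[None, :]) ** 2)
        m = mu + r @ Ri @ (y - mu)
        v = 1.0 - np.einsum('ij,jk,ik->i', r, Ri, r)  # correlation-scale variance
        return m, np.maximum(v, 0.0)
    return predict

f = lambda x: np.sin(2 * np.pi * x) * x              # hypothetical performance function
X = np.array([0.0, 0.4, 0.6, 1.0])
y = f(X)
grid = np.linspace(0.0, 1.0, 201)

# Sequential sampling: insert the next sample where the surrogate is least certain.
for _ in range(6):
    m, v = kriging(X, y)(grid)
    x_new = grid[np.argmax(v)]
    X, y = np.append(X, x_new), np.append(y, f(x_new))

m, v = kriging(X, y)(grid)
max_err = np.abs(m - f(grid)).max()
```

A conservative variant along the lines of the thesis would shift the prediction by a multiple of the standard deviation, e.g. `m + 2 * np.sqrt(v)`, so that the surrogate errs on the safe side of the probabilistic constraint.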
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Tancred, James Anderson. "Aerodynamic Database Generation for a Complex Hypersonic Vehicle Configuration Utilizing Variable-Fidelity Kriging". University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1543801033672049.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Tahkola, M. (Mikko). "Developing dynamic machine learning surrogate models of physics-based industrial process simulation models". Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201906042313.

Texto completo
Resumen
Abstract. Dynamic physics-based models of industrial processes can be computationally heavy, which prevents using them in some applications, e.g. in process operator training. The suitability of machine learning for creating surrogate models of physics-based unit operation models was studied in this research. The main motivation was to find out whether a machine learning model can be accurate enough to replace the corresponding physics-based components in the dynamic modelling and simulation software Apros®, which is developed by VTT Technical Research Centre of Finland Ltd and Fortum. This study is part of the COCOP project, which receives funding from the EU, and the INTENS project, which is funded by Business Finland. The research work was divided into a literature study and an experimental part. In the literature study, the steps of modelling with data-driven methods were studied and artificial neural network architectures suitable for dynamic modelling were investigated. Based on that, four neural network architectures were chosen for the case studies. In the first case study, linear and nonlinear autoregressive models with exogenous inputs (ARX and NARX, respectively) were used to model the dynamic behaviour of a water tank process built in Apros®. In the second case study, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks were also considered and compared with the previously mentioned ARX and NARX models. The workflow was defined from selecting the input and output variables for the machine learning model and generating the datasets in Apros® to implementing the machine learning models back in Apros®. Keras, an open-source neural network library running on Python, was utilised in the model generation framework that was developed as part of this study. Keras is a very popular library that allows fast experimenting. The framework makes use of random hyperparameter search, and each model is tested on a validation dataset in a dynamic manner, i.e. 
in a multi-step-ahead configuration, during the optimisation. The best model of each type, in terms of average normalised root mean squared error (NRMSE), is selected for further testing. The results of the case studies show that accurate multi-step-ahead models can be built using recurrent artificial neural networks. In the first case study, the linear ARX model achieved a slightly better NRMSE value than the nonlinear one, but the accuracy of both models was at a very good level, with the average NRMSE being lower than 0.1 %. The generalisation ability of the models was tested using multiple datasets, and the models proved to generalise well. In the second case study, there were larger differences between the models' accuracies. This was an expected result, as the studied process contains nonlinearities and thus the linear ARX model performed worse than the nonlinear ones in predicting some output variables. On the other hand, the ARX model performed better for some other output variables. However, also in the second case study the model NRMSE values were at a good level, being 1.94–3.60 % on the testing dataset. Although the workflow to implement machine learning models in Apros® using its Python binding was defined, the actual implementation needs more work. Experimenting with Keras neural network models in Apros® was noticed to slow down the simulation, even though the models were fast when tested outside of Apros®. The Python binding in Apros® does not seem to cause overhead to the calculation process, which is why further investigation is needed. It is obvious that the machine learning model must be very accurate if it is to be implemented in Apros®, because it needs to be able to interact with the physics-based model. 
The actual accuracy requirement that Apros® sets should also be studied to know if, and in which direction, the framework made for this study needs to be developed.
Dynaamisten surrogaattimallien kehittäminen koneoppimismenetelmillä teollisuusprosessien fysiikkapohjaisista simulaatiomalleista. Tiivistelmä. Teollisuusprosessien toimintaa jäljittelevät dynaamiset fysiikkapohjaiset simulaatiomallit voivat laajuudesta tai yksityiskohtien määrästä johtuen olla laskennallisesti raskaita. Tämä voi rajoittaa simulaatiomallin käyttöä esimerkiksi prosessioperaattorien koulutuksessa ja hidastaa simulaattorin avulla tehtävää prosessien optimointia. Tässä tutkimuksessa selvitettiin koneoppimismenetelmillä luotujen mallien soveltuvuutta fysiikkapohjaisten yksikköoperaatiomallien surrogaattimallinnukseen. Fysiikkapohjaiset mallit on luotu teollisuusprosessien dynaamiseen mallinnukseen ja simulointiin kehitetyllä Apros®-ohjelmistolla, jota kehittää Teknologian tutkimuskeskus VTT Oy ja Fortum. Työ on osa COCOP-projektia, joka saa rahoitusta EU:lta, ja INTENS-projektia, jota rahoittaa Business Finland. Työ on jaettu kirjallisuusselvitykseen ja kahteen kokeelliseen case-tutkimukseen. Kirjallisuusosiossa selvitettiin datapohjaisen mallinnuksen eri vaiheet ja tutkittiin dynaamiseen mallinnukseen soveltuvia neuroverkkorakenteita. Tämän perusteella valittiin neljä neuroverkkoarkkitehtuuria case-tutkimuksiin. Ensimmäisessä case-tutkimuksessa selvitettiin lineaarisen ja epälineaarisen autoregressive model with exogenous inputs (ARX ja NARX) -mallin soveltuvuutta pinnankorkeuden säädöllä varustetun vesisäiliömallin dynaamisen käyttäytymisen mallintamiseen. Toisessa case-tutkimuksessa tarkasteltiin edellä mainittujen mallityyppien lisäksi Long Short-Term Memory (LSTM) ja Gated Recurrent Unit (GRU) -verkkojen soveltuvuutta power-to-gas prosessin metanointireaktorin dynaamiseen mallinnukseen. 
Työssä selvitettiin surrogaattimallinnuksen vaiheet korvattavien yksikköoperaatiomallien ja siihen liittyvien muuttujien valinnasta datan generointiin ja koneoppimismallien implementointiin Aprosiin. Koneoppimismallien rakentamiseen tehtiin osana työtä Python-sovellus, joka hyödyntää Keras Python-kirjastoa neuroverkkomallien rakennuksessa. Keras on suosittu kirjasto, joka mahdollistaa nopean neuroverkkomallien kehitysprosessin. Työssä tehty sovellus hyödyntää neuroverkkomallien hyperparametrien optimoinnissa satunnaista hakua. Jokaisen optimoinnin aikana luodun mallin tarkkuutta dynaamisessa simuloinnissa mitataan erillistä aineistoa käyttäen. Jokaisen mallityypin paras malli valitaan NRMSE-arvon perusteella seuraaviin testeihin. Case-tutkimuksen tuloksien perusteella neuroverkoilla voidaan saavuttaa korkea tarkkuus dynaamisessa simuloinnissa. Ensimmäisessä case-tutkimuksessa lineaarinen ARX-malli oli hieman epälineaarista tarkempi, mutta molempien mallityyppien tarkkuus oli hyvä (NRMSE alle 0.1 %). Mallien yleistyskykyä mitattiin simuloimalla usealla aineistolla, joiden perusteella yleistyskyky oli hyvällä tasolla. Toisessa case-tutkimuksessa vastemuuttujien tarkkuuden välillä oli eroja lineaarisen ja epälineaaristen mallityyppien välillä. Tämä oli odotettu tulos, sillä joidenkin mallinnettujen vastemuuttujien käyttäytyminen on epälineaarista ja näin ollen lineaarinen ARX-malli suoriutui niiden mallintamisesta epälineaarisia malleja huonommin. Toisaalta lineaarinen ARX-malli oli tarkempi joidenkin vastemuuttujien mallinnuksessa. Kaiken kaikkiaan mallinnus onnistui hyvin myös toisessa case-tutkimuksessa, koska käytetyillä mallityypeillä saavutettiin 1.94–3.60 % NRMSE-arvo testidatalla simuloitaessa. Koneoppimismallit saatiin sisällytettyä Apros-malliin käyttäen Python-ominaisuutta, mutta prosessi vaatii lisäselvitystä, jotta mallit saadaan toimimaan yhdessä. 
Testien perusteella Keras-neuroverkkojen käyttäminen näytti hidastavan simulaatiota, vaikka neuroverkkomalli oli nopea Aprosin ulkopuolella. Aprosin Python-ominaisuus ei myöskään näytä itsessään aiheuttavan hitautta, jonka takia asiaa tulisi selvittää mallien implementoinnin mahdollistamiseksi. Koneoppimismallin tulee olla hyvin tarkka toimiakseen vuorovaikutuksessa fysiikkapohjaisen mallin kanssa. Jatkotutkimuksen ja Python-sovelluksen kehittämisen kannalta on tärkeää selvittää mikä on Aprosin koneoppimismalleille asettama tarkkuusvaatimus.
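The multi-step-ahead (free-run) validation used in the case studies above can be sketched with a linear ARX model in plain NumPy: fit the one-step-ahead map by least squares, then simulate by feeding predictions back as inputs, and score with NRMSE. The first-order tank dynamics below are a hypothetical stand-in for data generated from the Apros® water-tank model.

```python
import numpy as np

# Hypothetical stand-in for simulator data: first-order tank level dynamics
# driven by an inlet-valve signal u.
dt, gain = 0.1, 0.5
u = 0.5 * np.sin(np.linspace(0.0, 20.0, 400)) + 0.5
y = np.zeros(400)
for k in range(399):
    y[k + 1] = y[k] + dt * (gain * u[k] - 0.3 * y[k])

# Linear ARX(1,1): y[k+1] ≈ b1*y[k] + b2*u[k], fitted by least squares.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)

# Multi-step-ahead (free-run) validation: feed predictions back as inputs,
# exactly how the surrogate would have to run inside the simulator.
yhat = np.zeros(400)
for k in range(399):
    yhat[k + 1] = theta @ np.array([yhat[k], u[k]])

nrmse = np.sqrt(np.mean((yhat - y) ** 2)) / (y.max() - y.min())
```

Because the stand-in data are exactly linear, the free-run NRMSE is essentially zero here; on a real nonlinear process the same free-run test is what separates the linear ARX model from the NARX/LSTM/GRU variants compared in the thesis.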
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Peyron, Mathis. "Assimilation de données en espace latent par des techniques de deep learning". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP074.

Texto completo
Resumen
Cette thèse, située à l’intersection de l’assimilation de données (AD) et de l’apprentissage profond (AP), introduit un concept nouveau : l’assimilation de données en espace latent. Elle permet une réduction considérable des coûts de calcul et des besoins mémoire, tout en offrant le potentiel d’améliorer la précision des résultats.Il existe de nombreuses façons d’intégrer l’apprentissage profond dans les algorithmes d’assimilation de données, chacune visant des objectifs différents (Loh et al., 2018; Tang et al., 2020; Laloyaux et al., 2020; Bonavita and Laloyaux, 2020; Brajard et al., 2020; Farchi et al., 2021b; Pawar and San, 2021; Leutbecher, 2019; Ruckstuhl et al., 2021; Lin et al., 2019; Deng et al., 2018; Cheng et al., 2024). Nous étendons davantage l'intégration de l'apprentissage profond, en repensant le processus même d’assimilation. Notre approche s’inscrit dans la suite des méthodes à espace réduit (Evensen,1994; Bishop et al., 2001; Hunt et al., 2007; Courtier, 2007; Gratton and Tshimanga, 2009; Gratton et al., 2013; Lawless et al., 2008; Cao et al., 2007), qui résolvent le problème d’assimilation en effectuant des calculs dans un espace de faible dimension. Ces méthodes à espace réduit ont été principalement développées pour une utilisation opérationnelle, car la plupart des algorithmes d’assimilation de données s'avèrent être excessivement coûteux, lorsqu’ils sont implémentés dans leur forme théorique originelle.Notre méthodologie repose sur l’entraînement conjoint d’un autoencodeur et d’un réseau de neurone surrogate. L’autoencodeur apprend de manière itérative à représenter avec précision la dynamique physique considérée dans un espace de faible dimension, appelé espace latent. Le réseau surrogate est entraîné simultanément à apprendre la propagation temporelle des variables latentes. Une stratégie basée sur une fonction de coût chaînée est également proposée pour garantir la stabilité du réseau surrogate. 
Cette stabilité peut également être obtenue en implémentant des réseaux surrogate Lipschitz.L’assimilation de données à espace réduit est fondée sur la théorie de la stabilité de Lyapunov qui démontre mathématiquement que, sous certaines hypothèses, les matrices de covariance d’erreur de prévision et a posteriori se conforment asymptotiquement à l’espace instable-neutre (Carrassi et al., 2022), qui est de dimension beaucoup plus petite que l’espace d’état. Alors que l’assimilation de données en espace physique consiste en des combinaisons linéaires sur un système dynamique non linéaire, de grande dimension et potentiellement multi-échelle, l’assimilation de données latente, qui opère sur les dynamiques internes sous-jacentes, potentiellement simplifiées, est davantage susceptible de produire des corrections significatives.La méthodologie proposée est éprouvée sur deux cas tests : une dynamique à 400 variables - construite à partir d'un système de Lorenz chaotique de dimension 40 -, ainsi que sur le modèle quasi-géostrophique de la librairie OOPS (Object-Oriented Prediction System). Les résultats obtenus sont prometteurs
This thesis, which sits at the crossroads of data assimilation (DA) and deep learning (DL), introduces latent space data assimilation, a novel data-driven framework that significantly reduces computational costs and memory requirements, while also offering the potential for more accurate data assimilation results. There are numerous ways to integrate deep learning into data assimilation algorithms, each targeting different objectives (Loh et al., 2018; Tang et al., 2020; Laloyaux et al., 2020; Bonavita and Laloyaux, 2020; Brajard et al., 2020; Farchi et al., 2021b; Pawar and San, 2021; Leutbecher, 2019; Ruckstuhl et al., 2021; Lin et al., 2019; Deng et al., 2018; Cheng et al., 2024). We extend the integration of deep learning further by rethinking the assimilation process itself. Our approach aligns with reduced-space methods (Evensen, 1994; Bishop et al., 2001; Hunt et al., 2007; Courtier, 2007; Gratton and Tshimanga, 2009; Gratton et al., 2013; Lawless et al., 2008; Cao et al., 2007), which solve the assimilation problem by performing computations within a lower-dimensional space. These reduced-space methods have been developed primarily for operational use, as most data assimilation algorithms are prohibitively costly when implemented in their full theoretical form. Our methodology is based on the joint training of an autoencoder and a surrogate neural network. The autoencoder iteratively learns to represent the physical dynamics of interest accurately within a low-dimensional space, termed the latent space. The surrogate is simultaneously trained to learn the time propagation of the latent variables. A chained loss function strategy is also proposed to ensure the stability of the surrogate network.
Stability can also be achieved by implementing Lipschitz surrogate networks. Reduced-space data assimilation is underpinned by Lyapunov stability theory, which mathematically demonstrates that, under specific hypotheses, the forecast and posterior error covariance matrices asymptotically conform to the unstable-neutral subspace (Carrassi et al., 2022), which is of much smaller dimension than the full state space. While full-space data assimilation involves linear combinations within a high-dimensional, nonlinear, and possibly multi-scale dynamical environment, latent data assimilation, which operates on the core, potentially disentangled and simplified dynamics, is more likely to result in impactful corrections. We tested our methodology on a 400-dimensional dynamics built upon a chaotic Lorenz96 system of dimension 40, and on the quasi-geostrophic model of the Object-Oriented Prediction System (OOPS) framework. We obtained promising results.
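The latent-space analysis step described in the abstract can be sketched with linear stand-ins (a minimal illustration only: the PCA "autoencoder", the linear surrogate propagator, and all dimensions and noise levels are hypothetical, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear stand-ins (all hypothetical): a PCA "autoencoder" and a linear latent
# surrogate. The thesis trains a nonlinear autoencoder and a neural surrogate
# jointly; fixed linear maps keep this sketch self-contained.
snapshots = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 400))
snapshots -= snapshots.mean(axis=0)
_, _, Vt = np.linalg.svd(snapshots, full_matrices=False)
E = Vt[:40]           # encoder: 400-D state -> 40-D latent vector
D = E.T               # decoder: 40-D latent vector -> 400-D state

A = 0.95 * np.eye(40)  # latent surrogate propagator (assumed given here)

def latent_analysis(z_f, y, H, R_inv, B_inv):
    # 3D-Var-like update carried out entirely in latent space: minimise
    # J(z) = (z - z_f)' B_inv (z - z_f) + (y - H D z)' R_inv (y - H D z).
    G = H @ D
    return np.linalg.solve(B_inv + G.T @ R_inv @ G,
                           B_inv @ z_f + G.T @ R_inv @ y)

z_true = rng.standard_normal(40)
x_true = D @ z_true
y = x_true + 0.1 * rng.standard_normal(400)        # noisy full-state observation
z_f = A @ z_true + 0.5 * rng.standard_normal(40)   # imperfect latent forecast
z_a = latent_analysis(z_f, y, np.eye(400), np.eye(400) / 0.01, np.eye(40))
err_f = np.linalg.norm(D @ z_f - x_true)
err_a = np.linalg.norm(D @ z_a - x_true)
```

The point of the construction is that the analysis system is solved in the 40-dimensional latent space rather than the 400-dimensional state space; a trained nonlinear autoencoder and neural surrogate would replace `E`, `D` and `A`.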
APA, Harvard, Vancouver, ISO, etc. citation styles
24

Hermannsson, Elvar. "Hydrodynamic Shape Optimization of Trawl Doors with Three-Dimensional Computational Fluid Dynamics Models and Local Surrogates". Thesis, KTH, Kraft- och värmeteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-147352.

Full text
Abstract
Rising fuel prices have been inflating the operating costs of the fishing industry. Trawl doors are used to hold the fishing net open during trawling operations, and they have a great influence on the fuel consumption of vessels. Improvements in the design of trawl doors could therefore contribute significantly to increased fuel efficiency. An efficient optimization algorithm using two- and three-dimensional (2D and 3D) computational fluid dynamics (CFD) models is presented. Accurate CFD models, especially 3D ones, are computationally expensive, so the direct use of traditional optimization algorithms, which often require a large number of evaluations, can be prohibitive. The proposed method is iterative and uses low-order local response surface approximation models as surrogates for the expensive CFD model to reduce the number of expensive evaluations. The algorithm is applied to the design of two types of geometries: a typical modern trawl door and a novel airfoil-shaped trawl door. The results from the 2D design optimization show that the hydrodynamic efficiency of the typical modern trawl door could be increased by 32%, and that of the novel airfoil-shaped trawl door by 13%. When the 2D optimum designs for the two geometries are compared, the novel airfoil-shaped trawl door proves to be 320% more efficient than the optimized design of the typical modern trawl door. The 2D optimum designs were used as the initial designs for the 3D design optimization, whose results show that the hydrodynamic efficiency could be increased by 6% for both the typical modern and novel airfoil-shaped trawl doors. Results from a 3D CFD analysis show that 3D flow effects are significant, with drag values significantly underestimated by 2D CFD models.
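The iterative local-surrogate loop described above can be illustrated in one dimension (a toy sketch: the analytic objective, sample counts and trust-region rule are invented stand-ins for the expensive CFD model):

```python
import numpy as np

def expensive_cfd(x):
    # Hypothetical stand-in for an expensive CFD evaluation of a single design
    # variable; in the thesis this would be a 2D/3D simulation of a trawl door.
    return (x - 1.3) ** 2 + 0.1 * np.sin(5 * x)

def local_surrogate_opt(x0, radius=0.5, iters=10):
    x = x0
    for _ in range(iters):
        # Sample the expensive model at a few local points and fit a quadratic
        # response surface (a "low-order local surrogate").
        xs = np.linspace(x - radius, x + radius, 5)
        ys = np.array([expensive_cfd(v) for v in xs])
        a, b, c = np.polyfit(xs, ys, 2)
        cand = -b / (2 * a) if a > 0 else x - radius * np.sign(b)
        cand = np.clip(cand, x - radius, x + radius)  # stay in the trust region
        if expensive_cfd(cand) < expensive_cfd(x):
            x = cand                                  # accept the surrogate step
        else:
            radius *= 0.5                             # shrink the region on failure
    return x

x_opt = local_surrogate_opt(3.0)
```

Each iteration spends only a handful of expensive evaluations on a local patch, which is the reason such schemes are attractive when every CFD run is costly.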
25

Akimoto, Mami. "Predictive uncertainty in infrared marker-based dynamic tumor tracking with Vero4DRT". Kyoto University, 2015. http://hdl.handle.net/2433/199176.

Full text
26

Cerqueus, Audrey. "Bi-objective branch-and-cut algorithms applied to the binary knapsack problem : surrogate bound sets, dynamic branching strategies, generation and exploitation of cover inequalities". Nantes, 2015. https://archive.bu.univ-nantes.fr/pollux/show/show?id=fdf0e978-37d8-4290-8495-a3fd67de78f7.

Full text
Abstract
In this work, we are interested in solving multi-objective combinatorial optimization problems, which have received considerable interest in the past decades. In order to solve these particularly difficult problems exactly and efficiently, the designed algorithms are often specific to a given problem. In this thesis, we focus on the branch-and-bound method and propose its extension to a branch-and-cut method in a bi-objective context. Knapsack problems are the case study of this work. Three main axes are considered: the definition of new upper bound sets, the elaboration of a dynamic branching strategy, and the generation of valid inequalities. The defined upper bound sets are based on the surrogate relaxation, using several multipliers. Based on the analysis of the different multipliers, algorithms are designed to compute these surrogate upper bound sets efficiently. The dynamic branching strategy arises from the comparison of different static branching strategies from the literature; it uses reinforcement learning methods. Finally, cover inequalities are generated and introduced throughout the solving process in order to accelerate it. These different contributions are experimentally validated, and the resulting branch-and-cut algorithm presents encouraging results.
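The surrogate relaxation underlying these upper bound sets can be sketched on a toy two-constraint knapsack (all numbers illustrative; the thesis embeds such bounds inside a branch-and-cut, which is not reproduced here):

```python
import numpy as np
from itertools import product

def surrogate_bound(values, A, b, mu):
    # Surrogate relaxation: aggregate the knapsack constraints A x <= b into the
    # single constraint (mu.A) x <= mu.b, then take the LP (greedy fractional)
    # bound of the resulting single-constraint knapsack.
    w = mu @ A            # aggregated weights
    room = mu @ b         # aggregated capacity
    bound = 0.0
    for i in np.argsort(-values / w):   # best value/weight ratio first
        take = min(1.0, room / w[i])
        bound += take * values[i]
        room -= take * w[i]
        if room <= 0:
            break
    return bound

# Toy bi-constraint instance (illustrative numbers).
values = np.array([10.0, 7.0, 6.0, 3.0])
A = np.array([[4.0, 3.0, 2.0, 1.0],
              [2.0, 3.0, 4.0, 1.0]])
b = np.array([6.0, 6.0])

# Exact optimum by enumeration (affordable for 4 items).
best_ip = max(values @ x for x in map(np.array, product([0, 1], repeat=4))
              if np.all(A @ x <= b))

# A crude surrogate dual: minimise the bound over a few multiplier choices.
ub = min(surrogate_bound(values, A, b, np.array(m, dtype=float))
         for m in [(1, 0), (0, 1), (1, 1), (2, 1), (1, 2)])
```

Every multiplier choice yields a valid upper bound on the bi-constrained optimum; searching over multipliers tightens it.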
27

Dietrich, Felix [Verfasser], Hans-Joachim [Akademischer Betreuer] [Gutachter] Bungartz and Gerta [Gutachter] Köster. "Data-Driven Surrogate Models for Dynamical Systems / Felix Dietrich ; Gutachter: Hans-Joachim Bungartz, Gerta Köster ; Betreuer: Hans-Joachim Bungartz". München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1137624655/34.

Full text
28

Allahvirdizadeh, Reza. "Reliability-Based Assessment and Optimization of High-Speed Railway Bridges". Licentiate thesis, KTH, Bro- och stålbyggnad, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301318.

Full text
Abstract
Increasing the operational speed of trains has attracted a lot of interest in recent decades and has brought new challenges, especially in terms of infrastructure design methodology, as higher speeds may induce excessive vibrations. Such demands can damage bridges, which in turn increases maintenance costs, endangers the safety of passing trains and disrupts passenger comfort. Conventional design provisions should therefore be evaluated in the light of modern concerns; nevertheless, several previous studies have highlighted some of their shortcomings. It should be emphasized that most of these studies have neglected the uncertainties involved, which prevents the reported results from representing a complete picture of the problem. In this respect, the present thesis is dedicated to evaluating the performance of conventional design methods, especially those related to running safety and passenger comfort, using probabilistic approaches. To achieve this objective, a preliminary study was carried out using the first-order reliability method for short/medium span bridges crossed by trains at a wide range of operating speeds. Comparison of these results with the corresponding deterministic responses showed that applying a constant safety factor to the running safety threshold does not guarantee an identical safety index for all bridges. It also showed that the conventional design approaches result in failure probabilities that are higher than the target values. This conclusion highlights the need to update the design methodology for running safety. However, it is essential to determine whether running safety is the predominant design criterion before conducting further analysis. Therefore, a stochastic comparison between this criterion and passenger comfort was performed. Due to the significant computational cost of such investigations, subset simulation and crude Monte Carlo (MC) simulation using meta-models based on polynomial chaos expansion were employed.
Both methods were found to perform well, with running safety almost always dominating the passenger comfort limit state. Subsequently, classification-based meta-models, e.g. support vector machines, k-nearest neighbours and decision trees, were combined using ensemble techniques to investigate the influence of soil-structure interaction on the evaluated reliability of running safety. The obtained results showed a significant influence, highlighting the need for detailed investigations in further studies. Finally, a reliability-based design optimization was conducted to update the conventional design method of running safety by proposing minimum requirements for the mass per length and moment of inertia of bridges. It is worth mentioning that the inner loop of the method was solved by a crude MC simulation using adaptively trained Kriging meta-models.
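The role a cheap meta-model plays in crude Monte Carlo reliability analysis can be sketched as follows (a hypothetical quadratic limit state stands in for the dynamic train-bridge analysis, and plain polynomial regression stands in for the polynomial chaos expansion):

```python
import numpy as np

rng = np.random.default_rng(1)

def limit_state(x):
    # Hypothetical limit state: "failure" when g < 0. A real application would
    # evaluate an expensive dynamic train-bridge model here.
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

def features(x):
    # Quadratic basis: a plain stand-in for a polynomial chaos expansion.
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])

# Fit the surrogate on a small design of experiments (60 "expensive" runs).
X_doe = rng.standard_normal((60, 2))
coef, *_ = np.linalg.lstsq(features(X_doe), limit_state(X_doe), rcond=None)

# Crude Monte Carlo on the cheap surrogate: 1e5 samples cost almost nothing.
X_mc = rng.standard_normal((100_000, 2))
pf_surrogate = np.mean(features(X_mc) @ coef < 0.0)
pf_direct = np.mean(limit_state(X_mc) < 0.0)  # affordable only because the stand-in is cheap
```

The expensive model is queried only at the design of experiments; the failure probability itself is estimated on the surrogate.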


29

Cheema, Prasad. "Machine Learning for Inverse Structural-Dynamical Problems: From Bayesian Non-Parametrics, to Variational Inference, and Chaos Surrogates". Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24139.

Full text
Abstract
To ensure that the design of a structure is both robust and efficient, engineers often investigate inverse dynamical modeling problems. In particular, there are three archetypal inverse modeling problems which arise in the context of structural engineering. These are respectively: (i) The eigenvalue assignment problem, (ii) Bayesian model updating, and (iii) Operational modal analysis. It is the intent of this dissertation to investigate all three aforementioned inverse dynamical problems within the broader context of modern machine learning advancements. Firstly, the inverse eigenvalue assignment problem will be investigated via performing eigenvalue placement with respect to several different mass-spring systems. It will be shown that flexible, and robust inverse design analysis is possible by appealing to black box variational methods. Secondly, stochastic model updating will be explored via an in-house, physical T-tail structure. This will be addressed through the careful consideration of polynomial chaos theory, and Bayesian model updating, as a means to rapidly quantify structural uncertainties, and perform model updating between a finite element simulation, and the physical structure. Finally, the monitoring phase of a structure often represents an important and unique challenge for engineers. This dissertation will explore the notion of operational modal analysis for a cable-stayed bridge, by building upon a Bayesian non-parametric approach. This will be shown to circumvent the need for many classic thresholds, factors, and parameters which have often hindered analysis in this area. Ultimately, this dissertation is written with the express purpose of critically assessing modern machine learning algorithms in the context of some archetypal inverse dynamical modeling problems. It is therefore hoped that this dissertation will act as a point of reference, and inspiration for further work, and future engineers in this area.
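The eigenvalue assignment problem mentioned in (i) admits a closed-form illustration for a two-mass spring chain (the unit masses and target spectrum are illustrative choices, not taken from the dissertation):

```python
import numpy as np

def assign_springs(lam1, lam2):
    # 2-DOF chain with unit masses: K = [[k1 + k2, -k2], [-k2, k2]].
    # Matching trace and determinant to the target eigenvalues gives
    #   k1 + 2*k2 = lam1 + lam2   and   k1 * k2 = lam1 * lam2,
    # i.e. a quadratic 2*k2**2 - s*k2 + p = 0 in k2. Real positive springs
    # require (lam1 + lam2)**2 >= 8 * lam1 * lam2.
    s, p = lam1 + lam2, lam1 * lam2
    k2 = (s - np.sqrt(s * s - 8.0 * p)) / 4.0  # smaller root keeps k1 > 0
    k1 = s - 2.0 * k2
    return k1, k2

k1, k2 = assign_springs(1.0, 9.0)              # target natural frequencies^2
K = np.array([[k1 + k2, -k2], [-k2, k2]])
eigs = np.sort(np.linalg.eigvalsh(K))
```

Beyond two degrees of freedom such closed forms disappear, which is where the black-box variational methods discussed in the abstract become attractive.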
30

Pant, Gaurav. "Hybrid Dynamic Modelling of Engine Emissions on Multi-Physics Simulation Platform. A Framework Combining Dynamic and Statistical Modelling to Develop Surrogate Models of System of Internal Combustion Engine for Emission Modelling". Thesis, University of Bradford, 2018. http://hdl.handle.net/10454/17223.

Full text
31

Spraul, Charles. "Suivi en service de la durée de vie des ombilicaux dynamiques pour l’éolien flottant". Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0007/document.

Full text
Abstract
The present work introduces a methodology to monitor the fatigue damage of the dynamic power cable of a floating wind turbine. The suggested approach consists in using numerical simulations to compute the power cable response for the sea states observed on site. The quantities of interest are then obtained at any location along the cable length through post-treatment of the simulation results. The cable has to be instrumented to quantify and reduce the uncertainties on its calculated response. Indeed, some parameters of the numerical model should be calibrated on a regular basis in order to monitor the evolution of cable properties that might change over time. In this context, this manuscript describes and compares various approaches to analyze the sensitivity of the power cable response to variations of the parameters to be monitored. The purpose is to provide guidance in the choice of the instrumentation for the cable. Principal component analysis identifies the main modes of variation of the cable response when the studied parameters are varied. Various methods are also assessed for the calibration of the monitored cable parameters, with special care given to quantifying the remaining uncertainty on the fatigue damage. The considered approaches are expensive to apply, as they require a large number of model evaluations and each numerical simulation takes a long time to run. Surrogate models are thus employed to replace the numerical model, and again different options are considered. The proposed methodology is applied to a simplified configuration under conditions inspired by the FLOATGEN project.
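The principal-component step described above can be sketched on a synthetic response ensemble (the analytic "cable response" and parameter ranges are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ensemble: a response quantity at 50 positions along the cable,
# simulated for 200 random draws of two monitored parameters (an analytic
# stand-in replaces the expensive cable simulations).
s = np.linspace(0.0, 1.0, 50)
params = rng.uniform(0.8, 1.2, size=(200, 2))
Y = params[:, :1] * np.sin(np.pi * s) + 0.3 * params[:, 1:] * np.sin(2 * np.pi * s)

# PCA via the SVD of the centred ensemble: the right singular vectors are the
# dominant modes of response variation, and the squared singular values give
# the share of variance each mode explains.
U, svals, modes = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
explained = svals ** 2 / np.sum(svals ** 2)
```

A sharply decaying `explained` spectrum indicates that a few sensors, placed where the leading modes are largest, capture most of the parameter-driven variability.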
32

Little, M. A. "Biomechanically informed nonlinear speech signal processing". Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:6f5b84fb-ab0b-42e1-9ac2-5f6acc9c5b80.

Full text
Abstract
Linear digital signal processing based around linear, time-invariant systems theory finds substantial application in speech processing. The linear acoustic source-filter theory of speech production provides ready biomechanical justification for using linear techniques. Nonetheless, biomechanical studies surveyed in this thesis display significant nonlinearity and non-Gaussianity, casting doubt on the linear model of speech production. In order, therefore, to test the appropriateness of linear systems assumptions for speech production, surrogate data techniques can be used. This study uncovers systematic flaws in the design and use of existing surrogate data techniques and, by making novel improvements, develops a more reliable technique. Collating the largest set of speech signals to date compatible with this new technique, this study next demonstrates that the linear assumptions are not appropriate for all speech signals. Detailed analysis shows that while vowel production from healthy subjects cannot be explained within the linear assumptions, consonants can. Linear assumptions also fail for most vowel production by pathological subjects with voice disorders. Combining this new empirical evidence with information from biomechanical studies leads to the conclusion that the most parsimonious model for speech production, explaining all these findings in one unified set of mathematical assumptions, is a stochastic nonlinear, non-Gaussian model, which subsumes both Gaussian linear and deterministic nonlinear models. As a case study, to demonstrate the engineering value of nonlinear signal processing techniques based upon the proposed biomechanically informed, unified model, the study investigates the biomedical engineering application of disordered voice measurement. A new state space recurrence measure is devised and combined with an existing measure of the fractal scaling properties of stochastic signals.
Using a simple pattern classifier these two measures outperform all combinations of linear methods for the detection of voice disorders on a large database of pathological and healthy vowels, making explicit the effectiveness of such biomechanically-informed, nonlinear signal processing techniques.
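The surrogate data idea at the heart of this thesis, testing a signal against the null hypothesis of a linear Gaussian process, is classically implemented with phase-randomized surrogates (a generic sketch of the standard technique, not the improved method the thesis develops):

```python
import numpy as np

rng = np.random.default_rng(3)

def phase_randomized_surrogate(x, rng):
    # Fourier-transform surrogate: keep the amplitude spectrum (and hence the
    # linear autocorrelation) but scramble the phases, destroying any
    # nonlinear structure present in the original signal.
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0                  # keep the DC bin real
    if n % 2 == 0:
        phases[-1] = 0.0             # Nyquist bin must also stay real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

# A deterministic, nonlinearly distorted oscillation and its linear surrogate.
t = np.arange(4096)
x = np.sin(0.02 * t) ** 3
sur = phase_randomized_surrogate(x, rng)

# The amplitude spectra agree by construction; a nonlinear test statistic
# computed on x and on an ensemble of such surrogates need not.
psd_x = np.abs(np.fft.rfft(x))
psd_s = np.abs(np.fft.rfft(sur))
```

If a nonlinear statistic of the original signal falls far outside the distribution of that statistic over many surrogates, the linear-Gaussian null hypothesis is rejected.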
33

De, lozzo Matthias. "Modèles de substitution spatio-temporels et multifidélité : Application à l'ingénierie thermique". Thesis, Toulouse, INSA, 2013. http://www.theses.fr/2013ISAT0027/document.

Full text
Abstract
This PhD thesis deals with the construction of surrogate models in transient and steady states in the context of thermal simulation, with few observations and many outputs. First, we design a robust construction of a recurrent multilayer perceptron so as to approximate a spatio-temporal dynamic. We use an average of neural networks resulting from a cross-validation procedure, whose associated data splitting allows the parameters of each model to be adjusted on a test set without any information loss. Moreover, the construction of this perceptron can be distributed according to its outputs. This construction is applied to the modelling of the temporal evolution of the temperature at different points of an aeronautical equipment cabinet. Then, we propose a mixture of Gaussian process models in a multifidelity framework where a high-fidelity observation model is complemented by several observation models of lower, non-comparable fidelities. Particular attention is paid to the specification of the trends and adjustment coefficients present in these models. The different kriging and co-kriging models are assembled according to a partition or a weighted aggregation based on a robustness measure at the most reliable design points. This approach is used to model the temperature at different points of the cabinet in steady state. Finally, we propose a penalized criterion for the heteroscedastic regression problem. This tool is developed in the framework of projection estimators and applied to the particular case of Haar wavelets. These theoretical results are accompanied by numerical results for a problem accounting for different noise specifications and possible dependencies in the observations.
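The cross-validation averaging used for the recurrent perceptron can be illustrated in miniature (cheap polynomial regressors stand in for the neural networks; data, degrees and fold counts are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# K-fold "ensemble of networks" in miniature: one polynomial regressor per
# fold plays the role of one trained perceptron, and the surrogate is their
# average. Each model is fitted on K-1 folds, so every observation serves
# once as test data without any information loss.
x = rng.uniform(-1.0, 1.0, 120)
y = np.sin(2.0 * x) + 0.05 * rng.standard_normal(120)

K = 5
folds = np.array_split(rng.permutation(120), K)
models = []
for k in range(K):
    train = np.concatenate([folds[j] for j in range(K) if j != k])
    models.append(np.polyfit(x[train], y[train], 5))

def surrogate(xq):
    # Prediction = average over the K fold-wise models.
    return np.mean([np.polyval(c, xq) for c in models], axis=0)

xq = np.linspace(-1.0, 1.0, 200)
rmse = np.sqrt(np.mean((surrogate(xq) - np.sin(2.0 * xq)) ** 2))
```

Averaging the fold-wise models reduces the variance of any single fit while keeping an honest test set for each of them.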
34

Koch, Christiane [Verfasser]. "Quantum dissipative dynamics with a surrogate Hamiltonian : the method and applications / von Christiane Koch". 2002. http://d-nb.info/966299094/34.

Full text
35

Truelove, William Anthony Lawrence. "A general methodology for generating representative load cycles for monohull surface vessels". Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/10434.

Full text
Abstract
In this thesis, a general methodology for generating representative load cycles for arbitrary monohull surface vessels is developed. The proposed methodology takes a hull geometry and propeller placement, a vessel loading condition, a vessel mission, and weather data (wind, waves, currents) and, from these, generates the propeller states (torque, speed, power) and steering gear states (torque, speed, power) necessary to accomplish the given mission. The propeller states, together with the steering gear states, thus define the load cycle corresponding to the given inputs (vessel, mission, weather). Key aspects of the proposed methodology include the use of a surge-sway-yaw model for the vessel dynamics and the use of surrogate geometries for both the hull and the propeller(s). The result is a methodology that is lean (it requires only sparse input), fast, easy to generalize, and reasonably accurate. The proposed methodology is validated by way of two separate case studies, case A and case B (each involving a distinct car-deck ferry), with case A better suited to the proposed methodology than case B. In both cases, the load cycle generation process ran faster than real time, achieving time ratios (simulated time to execution time) of 3.3:1 and 12.8:1 for cases A and B respectively. The generated propeller and steering gear states were then compared to data collected either at sea or from the vessels' documentation. For case A, the propeller speed, torque, and power values generated were all accurate to within +/- 3%, +/- 7%, and +/- 10% of the true values, respectively, while cruising, and to within +/- 14%, +/- 36%, and +/- 42% while maneuvering. In addition, the steering gear powers generated in case A were consistent with the capabilities of the equipment actually installed on board.
For case B, the propeller speed, torque, and power values generated were all accurate to within +/- 2%, +/- 8%, and +/- 9% of the true values, respectively, while cruising, and accurate to within +/- 28%, +/- 45%, and +/- 66% of the true values, respectively, while maneuvering. In case B, however, the steering gear powers generated were questionable. Considering the results of the validation, together with the rapid process runtimes achieved and sparse inputs given, one may conclude that the methodology proposed in this thesis shows promise in terms of being able to generate representative load cycles for arbitrary monohull surface vessels.
Graduate
36

"Convergent surrogate-constraint dynamic programming". 2006. http://library.cuhk.edu.hk/record=b5893071.

Full text
Abstract
Wang Qing.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2006.
Includes bibliographical references (leaves 72-74).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Literature survey --- p.2
Chapter 1.2 --- Research carried out in this thesis --- p.4
Chapter 2 --- Conventional Dynamic Programming --- p.7
Chapter 2.1 --- Principle of optimality and decomposition --- p.7
Chapter 2.2 --- Backward dynamic programming --- p.12
Chapter 2.3 --- Forward dynamic programming --- p.15
Chapter 2.4 --- Curse of dimensionality --- p.19
Chapter 2.5 --- Singly constrained case --- p.21
Chapter 3 --- Surrogate Constraint Formulation --- p.24
Chapter 3.1 --- Conventional surrogate constraint formulation --- p.24
Chapter 3.2 --- Surrogate dual search --- p.26
Chapter 3.3 --- Nonlinear surrogate constraint formulation --- p.30
Chapter 4 --- Convergent Surrogate Constraint Dynamic Programming: Objective Level Cut --- p.38
Chapter 5 --- Convergent Surrogate Constraint Dynamic Programming: Domain Cut --- p.44
Chapter 6 --- Computational Results and Analysis --- p.60
Chapter 6.1 --- Sample problems --- p.61
Chapter 7 --- Conclusions --- p.70
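The surrogate-constraint idea outlined in Chapters 2-3, aggregating several constraints into one so that a one-dimensional dynamic program applies, can be sketched as follows (toy instance with invented numbers; the objective-level and domain cuts of Chapters 4-5 are not reproduced):

```python
import numpy as np

def dp_knapsack(values, weights, cap):
    # Classic 0/1 knapsack dynamic program over a single integer constraint.
    # The vectorised update reads the pre-assignment array, so each item is
    # counted at most once.
    best = np.zeros(cap + 1)
    for v, w in zip(values, weights):
        if 0 < w <= cap:
            best[w:] = np.maximum(best[w:], best[:cap + 1 - w] + v)
    return best[cap]

def surrogate_dp_bound(values, A, b, mu):
    # Aggregate the m constraints A x <= b into the single surrogate constraint
    # (mu.A) x <= floor(mu.b), then solve that relaxation exactly by DP. The
    # result bounds the multi-constrained optimum from above; the cuts of the
    # thesis then work to close the remaining surrogate duality gap.
    w = (A.T @ mu).astype(int)
    cap = int(np.floor(mu @ b))
    return dp_knapsack(values, w, cap)

values = np.array([10.0, 7.0, 6.0, 3.0])   # illustrative instance
A = np.array([[4, 3, 2, 1],
              [2, 3, 4, 1]])
b = np.array([6, 6])
ub = min(surrogate_dp_bound(values, A, b, np.array(m, dtype=float))
         for m in [(1, 0), (0, 1), (1, 1)])
```

Aggregation sidesteps the curse of dimensionality of a multi-dimensional DP state space: the DP table has a single capacity axis regardless of how many constraints were aggregated.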
37

Cosi, Filippo Giovanni. "Impact of Structure Modification on Cardiomyocyte Functionality". Doctoral thesis, 2020. http://hdl.handle.net/21.11130/00-1735-0000-0005-13B6-8.

Full text
38

Luo, Hua-Wen and 羅華文. "Dynamic Precision Control in Surrogate Assisted Optimization". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/04740429510755550684.

Full text
Abstract
Master's
National Taiwan University
Graduate Institute of Mathematics
99
In many optimization problems, the number of function evaluations is severely limited by time or cost. These problems pose a special challenge to the field of global optimization, since existing methods often require more function evaluations than can be comfortably afforded. One way to address this challenge is to fit response surfaces, or surrogate surfaces, to data collected by evaluating the objective and constraint functions at a few points. These surfaces can then be used for visualization, trade-off analysis, and optimization. We then show how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule. The key to using response surfaces for global optimization lies in balancing the need to exploit the approximating surface (by sampling where it is minimized) with the need to improve the approximation (by sampling where prediction error may be high). Striking this balance requires solving certain auxiliary problems which have previously been considered intractable, but we show how these computational obstacles can be overcome.
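The exploit/explore balance described in this abstract is commonly resolved with an expected-improvement acquisition function. The sketch below is our own minimal illustration (not code from the thesis), using the standard closed form for minimization under a Gaussian predictive distribution; the function name and interface are assumptions:

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Expected improvement at a candidate point whose surrogate
    prediction is mu with standard error sigma, given the best
    objective value f_min found so far (minimization)."""
    if sigma <= 0.0:
        return 0.0  # no prediction uncertainty, nothing to gain by sampling
    z = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    # Exploitation term (low predicted mu) plus exploration term (high sigma).
    return (f_min - mu) * cdf + sigma * pdf
```

Each iteration, such an algorithm would maximize this quantity over candidate points and evaluate the true objective at the maximizer, so sampling is drawn both to promising regions and to poorly modeled ones.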
39

Tsai, Sung-feng and 蔡松峰. "Performance Tuning of Eigensolver via Dynamic Surrogate". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/68430764250884484890.

Full text
Abstract
Master's
National Taiwan University
Graduate Institute of Mathematics
100
When an iterative eigensolver is used to solve an eigenvalue problem with a large-scale matrix, adaptively choosing parameters can significantly improve its performance. Since there is no obvious relation between the execution time and the parameters of an eigensolver, it is reasonable to use a direct search method to optimize the parameters. In many direct search methods, however, the number of function evaluations that can be afforded is limited by time or cost. To reduce this cost, the idea presented here is to pause function evaluations that have no chance of being optimal. During the iterative process of the eigensolver, we monitor the information produced after each iteration to decide how to control the iterative process dynamically: we determine whether iterations should remain paused or be restarted. We then construct a surrogate using function values of low and high accuracy, corresponding to paused points and convergent points, respectively. Our computer experiments show that the Dynamic Surrogate-Assisted Search (DSAS) Algorithm reduces the cost significantly, so the performance of an eigensolver can be tuned efficiently.
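The pause/restart half of this idea can be illustrated with a toy model. Everything below is our own assumption for illustration (the Candidate convergence model, the 1.5 slack factor, all names); the thesis's surrogate-construction step is not reproduced here:

```python
class Candidate:
    """Toy stand-in for an eigensolver run under one parameter setting:
    value(k) decreases toward a limit as the iteration count k grows."""
    def __init__(self, limit, rate):
        self.limit, self.rate, self.k = limit, rate, 0
        self.value = limit + 1.0  # crude value before any iterations

    def advance(self, n):
        self.k += n
        self.value = self.limit + 0.5 ** (self.rate * self.k)
        return self.value

def paused_search(candidates, chunk=4, rounds=6):
    """Give every candidate a cheap low-accuracy pass, then keep
    advancing only candidates whose current value is still within
    reach of the best one; the rest stay paused."""
    for c in candidates:
        c.advance(chunk)
    for _ in range(rounds):
        best = min(c.value for c in candidates)
        for c in candidates:
            if c.value <= 1.5 * best:  # others remain paused
                c.advance(chunk)
    return min(candidates, key=lambda c: c.value)
```

In this toy run, clearly inferior parameter settings receive only the initial cheap pass, which mirrors the motivation of avoiding full-accuracy evaluations for non-optimal points.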
40

"Surrogate dual search in nonlinear integer programming". 2009. http://library.cuhk.edu.hk/record=b5896898.

Full text
Abstract
Wang, Chongyu.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 74-78).
Abstract also in Chinese.
Abstract --- p.1
Abstract in Chinese --- p.3
Acknowledgement --- p.4
Contents --- p.5
List of Tables --- p.7
List of Figures --- p.8
Chapter 1. --- Introduction --- p.9
Chapter 2. --- Conventional Dynamic Programming --- p.15
Chapter 2.1. --- Principle of optimality and decomposition --- p.15
Chapter 2.2. --- Backward dynamic programming --- p.17
Chapter 2.3. --- Forward dynamic programming --- p.20
Chapter 2.4. --- Curse of dimensionality --- p.23
Chapter 3. --- Surrogate Constraint Formulation --- p.26
Chapter 3.1. --- Surrogate constraint formulation --- p.26
Chapter 3.2. --- Singly constrained dynamic programming --- p.28
Chapter 3.3. --- Surrogate dual search --- p.29
Chapter 4. --- Distance Confined Path Algorithm --- p.34
Chapter 4.1. --- Yen's algorithm for the kth shortest path problem --- p.35
Chapter 4.2. --- Application of Yen's method to integer programming --- p.36
Chapter 4.3. --- Distance confined path problem --- p.42
Chapter 4.4. --- Application of distance confined path formulation to integer programming --- p.50
Chapter 5. --- Convergent Surrogate Dual Search --- p.59
Chapter 5.1. --- Algorithm for convergent surrogate dual search --- p.62
Chapter 5.2. --- Solution schemes for (Pμ(αk, β)) and f(x) = αk --- p.63
Chapter 5.3. --- Computational Results and Analysis --- p.68
Chapter 6. --- Conclusions --- p.72
Bibliography --- p.74
41

Hossain, Md Nurtaj. "Adaptive reduced order modeling of dynamical systems through novel a posteriori error estimators : Application to uncertainty quantification". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5218.

Full text
Abstract
Multi-query problems, such as uncertainty quantification and optimization of a dynamical system, require solving a governing differential equation at multiple parameter values, so for large systems the computational cost becomes prohibitive. Replacing the original discretized higher-dimensional model with a faster reduced order model (ROM) can alleviate this computationally prohibitive task significantly. However, a ROM incurs error in the solution due to the approximation in a lower-dimensional subspace. Moreover, ROMs lack robustness in terms of effectiveness over the entire parameter range; accordingly, they are often classified as local or global, based on their construction in the parametric domain. The availability of an error bound or error estimator for a ROM helps in achieving this robustness, mainly by allowing adaptivity. The goal of this thesis is to propose such error estimators and use them to develop adaptive proper orthogonal decomposition-based ROMs for uncertainty quantification. Accordingly, two a posteriori error estimators, one for linear and one for nonlinear dynamical systems, are developed based on the residual in the differential equation. To develop the a posteriori error estimator for nonlinear systems, an upper bound on the norm of the state transition matrix is first derived and then used to develop the estimator. The proposed estimators are compared numerically with the error estimators available in the current literature. This comparison revealed that the proposed estimators follow the trend of the exact error more closely, thus serving as an improvement over the state of the art. These error estimators are used in conjunction with a greedy search to develop adaptive algorithms for the construction of robust ROMs. The adaptively trained ROM is subsequently deployed for uncertainty quantification by invoking it in a statistical simulation.
For the linear dynamical system, two algorithms are proposed for building robust ROMs, one local and one global. For the nonlinear dynamical system, an adaptive algorithm is developed for the global ROM. In the training stage of the global ROM, a modification is proposed: at each iteration of the greedy search, the ROM is trained at a few local maxima of the error estimator in addition to its global maximum, leading to accelerated convergence. For this purpose, a multi-frequency vibrational particle swarm optimization is employed. It is shown that the proposed algorithm for adaptive training of ROMs offers ample scope for parallelization. Different numerical studies (a bladed disk assembly, Burgers' equation, and a beam on a nonlinear Winkler foundation) are performed to test the efficiency of the error estimators and the accuracy achieved by the modified greedy search. A speed-up of more than two orders of magnitude is achieved using the ROM trained with the proposed algorithm and error estimators. However, adaptive training of a ROM is itself expensive due to multiple evaluations of the high-dimensional model. To address this issue, a novel random-excitation-based training is proposed in this thesis: depending upon the parameter range of interest, bandlimited random white noise excitations are chosen and the ROM is trained from the corresponding responses. This is applied to linear and nonlinear vibrating systems with spatial periodicity and imperfection. The numerical studies show that the proposed method reduces the cost of training significantly and successfully captures behavior such as alternating pass- and stop-bands of vibration propagation and peak response.
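The estimator-driven greedy training loop described in this abstract has a generic shape: repeatedly evaluate the error estimator over candidate parameters and enrich the reduced basis at the worst one. The sketch below is ours; estimate_error and enrich are hypothetical placeholders for the thesis's residual-based estimator and POD basis enrichment:

```python
def greedy_training(param_grid, estimate_error, enrich, tol=1e-3, max_iter=20):
    """Greedy ROM training: at each iteration, find the parameter with
    the largest estimated error and enrich the reduced basis there,
    until the worst estimated error drops below tol."""
    basis = []
    for _ in range(max_iter):
        errs = [estimate_error(p, basis) for p in param_grid]
        worst = max(range(len(param_grid)), key=errs.__getitem__)
        if errs[worst] < tol:
            break  # ROM is accurate enough over the whole grid
        basis = enrich(basis, param_grid[worst])
    return basis
```

With a toy estimator that scores a parameter by its distance to the already-sampled points, the loop reduces to farthest-point sampling, which makes the adaptive-coverage behavior easy to see.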
42

Valladares, Guerra Homero Santiago. "Surrogate-based global optimization of composite material parts under dynamic loading". Thesis, 2017. https://doi.org/10.7912/C2XH2R.

Full text
Abstract
Indiana University-Purdue University Indianapolis (IUPUI)
The design optimization of laminated composite structures is relevant in the automobile, naval, aerospace, construction and energy industries. While several optimization methods have been applied to the design of laminated composites, the majority of those methods are only applicable to linear or simplified nonlinear models that are unable to capture multi-body contact. Furthermore, approaches that consider composite failure remain scarce. This work presents an optimization approach based on design and analysis of computer experiments (DACE) in which smart sampling and continuous metamodel enhancement drive the design process towards a global optimum. A Kriging metamodel is used in the optimization algorithm. This metamodel enables the definition of an expected improvement function that is maximized at each iteration in order to locate new designs, update the metamodel and find optimal designs. This work uses explicit finite element analysis, available in the commercial code LS-DYNA, to study the crash behavior of composite parts. The optimization algorithm is implemented in MATLAB. Single- and multi-objective optimization problems are solved in this work. The design variables considered in the optimization include the orientation of the plies as well as the size of the zones that control the collapse of the composite parts. For ease of manufacturing, the fiber orientation is defined as a discrete variable. Objective functions such as penetration, maximum displacement and maximum acceleration are defined in the optimization problems. Constraints are included to guarantee the feasibility of the solutions provided by the optimization algorithm. The results of this study show that despite the brittle behavior of composite parts, they can be optimized to resist and absorb impact. In the case of single-objective problems, the algorithm is able to find the global solution; when working with multi-objective problems, it provides an enhanced Pareto front.
43

Chong, John and 張顯主. "Improved State of Charge Estimation of Lithium-Ion Cells via Surrogate Modeling under Dynamic Operating Conditions". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/v343mp.

Full text
Abstract
Master's
National Taiwan University
Graduate Institute of Mechanical Engineering
106
The goal of this thesis is to develop an algorithm adequate for real-time estimation of the state of charge (SOC) of an electric vehicle battery. The algorithm should therefore be undemanding of hardware performance, consider the factors that influence battery SOC estimation and, most importantly, perform under the dynamic operating conditions of an electric vehicle. To this end, the study developed an improved algorithm based on a combination of the open-circuit voltage (OCV) method and the coulomb counting method to estimate the SOC of a lithium-ion battery. The proposed algorithm builds several surrogate models from experimental data and accounts for various influential factors, such as temperature and battery current, to improve SOC estimation. In addition, a methodology is proposed to estimate the battery's initial SOC under more realistic dynamic charging and discharging conditions, coping with the unavailability of the OCV method outside the rest state. In other words, the estimation of battery SOC is more comprehensive and can be corrected more often, resulting in a more reliable SOC estimate throughout battery usage. Experimental results show better accuracy than the basic OCV-coulomb counting method.
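The OCV/coulomb-counting combination at the core of this abstract can be sketched in a few lines. This is our own minimal illustration: the OCV(SOC) lookup table, the rest-detection interface, and the sign convention are assumptions, not data from the thesis:

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation; xs must be ascending."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical OCV(SOC) table, for illustration only.
OCV_V = [3.0, 3.4, 3.6, 3.75, 3.9, 4.2]   # cell voltage, volts
OCV_SOC = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]  # SOC fraction

def update_soc(soc, current_a, dt_s, capacity_ah, voltage_v=None, at_rest=False):
    """One estimation step: coulomb counting runs continuously; when the
    cell has rested long enough, re-anchor SOC from the OCV table."""
    if at_rest and voltage_v is not None:
        return interp(voltage_v, OCV_V, OCV_SOC)  # OCV correction at rest
    # Discharge current taken as positive, so SOC decreases.
    return soc - current_a * dt_s / (capacity_ah * 3600.0)
```

The periodic OCV re-anchoring is what bounds the drift that pure coulomb counting accumulates; the thesis's surrogate models would replace the fixed table with data-driven corrections for temperature and current.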
44

Brewer, Alex J. "Addressing wastewater epidemiology limitations with the use of dynamic population surrogates, complementary urinalyses and in-situ experiments". Thesis, 2012. http://hdl.handle.net/1957/36017.

Full text
Abstract
Wastewater epidemiology is an emerging discipline that requires collaborative research involving analytical chemists, drug epidemiologists, and wastewater engineers. It involves the sampling and quantitative analysis of raw wastewater from communities for illicit drugs and their metabolites. Mass loads (mass per day) and per capita loads (mg per day per person) are then calculated from concentrations and indicate the approximate quantity of illicit drugs used and excreted by the community. One limitation of wastewater epidemiology is that the population served by a wastewater treatment plant, within a day and between days, is not well known. In addition, biodegradation of illicit drugs during transit in sewers may affect the concentrations and mass flows that reach wastewater treatment plants. This thesis describes a series of studies conducted by an international collaboration between scientists and engineers from the United States and Switzerland to address these two limitations. The experimental approaches used in these studies included high-frequency wastewater sampling strategies, the use of creatinine as a human urinary biomarker, and the use of unique locations as test sites, including an open community, a prison in the state of Oregon, and a 5 km section of sewer in Zürich, Switzerland. In Chapter 2, a diurnal study on the mass flows of illicit drugs and metabolites was performed over four days in a municipality with a population of approximately 55,000 people. The diurnal trends in illicit substances vary by substance. The high (g/day) mass flows of caffeine, methamphetamine, and creatinine indicate that lower-frequency sampling (approximately one sample per h) may representatively capture the use and excretion of these substances. However, the lower and episodic mass flows of cocaine and its primary human metabolite, benzoylecgonine, indicate that higher-frequency sampling is needed to accurately assess the use of cocaine within the municipality.
Normalization of illicit substances to creatinine gave between-day trends in illicit and legal substances that differed from the non-normalized trends. Resident use of cocaine and methamphetamine was indicated by normalized mass flows that increased during early morning hours, while commuters are largely absent from the community. Chapter 3 describes a series of experiments conducted at an Oregon state prison. The prison setting provided a unique opportunity to study a nearly fixed population of individuals and their corresponding mass flows of illicit substances and the number of doses consumed per person, as well as an opportunity to quantify the level of agreement between the number of individuals and the measured mass flows of creatinine. Methamphetamine use was more prevalent than cocaine/benzoylecgonine in the prison over the one-month study, in which single daily (24 h) composite samples of wastewater were collected. The hypothesis that the mass flows of methamphetamine and cocaine would be lower on days on which random urinalysis testing (RUA) is typically conducted by the prison (Monday-Thursday) was rejected. While the mass flows (mg/d) of methamphetamine were less than those for a nearby open community, the number of estimated doses per person was higher for the prison population. A higher number of positive RUA results was obtained for methamphetamine, while none were positive for cocaine, which is consistent with the data obtained from wastewater. The hourly (diurnal) trend in methamphetamine mass loads indicated continual methamphetamine use/excretion inside the prison, while cocaine and benzoylecgonine were detected in five hourly composite samples. Use of methamphetamine and cocaine by inmates could not be unambiguously distinguished from that of non-inmates (employees and visitors). The observed diurnal trends in creatinine mass loads were similar to those of an open community and are indicative of the general pattern of human wakefulness/activity.
Predicted creatinine mass loads based on the total prison population (inmates + non-inmates) were in good agreement with the measured mass loads, which indicates the potential of creatinine as a quantitative population indicator. Additional research on the biodegradability of creatinine is needed, because the prison setting was deliberately selected to minimize the potential for creatinine biodegradation. Chapter 4 addresses the data gap that exists on illicit drug transformation during in situ transit in sewers. The rates of in situ biodegradation have not yet been determined for conditions relevant to sewers, which include low to variable oxygen concentrations, the presence of a biofilm, and temperatures ≤ 20 °C. For this reason, two tracer tests were conducted in a 5 km stretch of sewer located near Zürich, Switzerland. Stable-isotope (deuterated) forms of cocaine and benzoylecgonine were injected into flowing wastewater, and three locations up to 5 km downstream were sampled over time. Breakthrough curves were constructed from measurements of cocaine-d3 and benzoylecgonine-d3 concentration over time. The area under each curve (mass) was determined by integrating concentration over time. Benzoylecgonine-d3 was present in the injectate, which should have contained only cocaine-d3; the extent of benzoylecgonine-d3 formation prior to injection is therefore not known. The injected mass of cocaine-d3 did not decline over the 5 km distance. The observed mass of cocaine-d3 at 5 km was 10% greater than at 500 m, which indicates that the transformation of cocaine was not significant over the 1.5 h experiment. At 5 km downgradient, the apparent mass of benzoylecgonine-d3 had increased by 35% over that observed at 500 m. However, the apparent increase in benzoylecgonine-d3 mass was not accompanied by a corresponding loss of cocaine-d3.
While uncertainty is apparent in the increase of both cocaine-d3 and benzoylecgonine-d3, the ratio of cocaine-d3/benzoylecgonine-d3 is subject only to analytical error, because any errors associated with sampling and the integration of masses cancel out. The ratio of cocaine-d3/benzoylecgonine-d3 declined from 2.98 in the injectate to 1.66 at Location 3, which indicated a greater increase in benzoylecgonine-d3 relative to cocaine-d3 over the 5 km distance. Due to the benzoylecgonine-d3 that was present in the injectate, any biodegradation of cocaine-d3 to form benzoylecgonine-d3 could not be unambiguously distinguished. During the second tracer test, in which benzoylecgonine-d3 was injected, the mass of benzoylecgonine-d3 did not significantly decline, which suggests that the apparent loss of benzoylecgonine-d3 during the cocaine-d3 test cannot be attributed to in-situ biodegradation. Overall, while uncertainty exists about the integrated masses for cocaine-d3 and benzoylecgonine-d3, the 5 km distance was too short to observe a significant loss of cocaine-d3 and formation of benzoylecgonine-d3. Recommendations for future research include analyzing the injectate solution to ensure that only cocaine-d3 is introduced, so that any formation of benzoylecgonine-d3 is readily apparent and quantifiable. In addition, the tracer tests should be repeated in a longer section of sewer to increase the residence time beyond 1.5 h, and degradation products of benzoylecgonine-d3 should be monitored, including ecgonine and ecgonine methyl ester.
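The creatinine-based population normalization underlying Chapters 2 and 3 amounts to simple arithmetic: estimate the contributing population from the community's creatinine mass load, then express the drug mass load per person. In this sketch, the 1.5 g/day per-person creatinine excretion rate is an illustrative assumption of ours, not a value from the thesis:

```python
def per_capita_load(drug_mg_per_day, creatinine_g_per_day,
                    creatinine_per_person_g_per_day=1.5):
    """Estimate the contributing population from the measured community
    creatinine mass load, then normalize the drug mass load per person.
    The default per-person creatinine excretion is illustrative only."""
    population = creatinine_g_per_day / creatinine_per_person_g_per_day
    return drug_mg_per_day / population  # mg per day per person
```

Because the creatinine-derived population estimate tracks who is actually present hour by hour, normalized loads can separate resident use from commuter contributions in the way the diurnal results above describe.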
Graduation date: 2013
Access restricted to the OSU Community at author's request from Jan. 7, 2013 - Jan. 7, 2014
45

DuVal, Marc G. "Dynamic Gd-DTPA Enhanced MRI as a Surrogate Marker of Angiogenesis in Tissue-engineered Rabbit Calvarial Constructs: A Pilot Study". Thesis, 2011. http://hdl.handle.net/1807/30578.

Full text
Abstract
Tissue engineering is limited by the inability to create an early and adequate blood supply. In-vivo DCE-MRI has imaged angiogenesis in soft tissues, yet has not been considered in hard tissues. Bilateral critical defects created in the parietal bones of eighteen adult rabbits were left void, treated with hyaluronic acid acellular matrix (HA-ACM), or treated with HA-ACM impregnated with vascular endothelial growth factor (VEGF). DCE-MRI was acquired at weeks 1, 2, 3, 6, and 12. Histologic analysis of HA-ACM-treated defects demonstrated quantitatively greater immature bone formation and a greater quantity of larger blood vessels compared to the void defects. Statistically significant greater angiogenesis, evidenced by quantitative perfusion on MRI, supported the histologic findings. DCE-MRI is a novel means of imaging angiogenesis in grafted bone defects. It discerns physiologically important phases of angiogenesis: the initial vasoactive response, and vessel network initiation, establishment, and pruning. DCE-MRI is adaptable to the non-invasive study of candidate tissue-engineered constructs and to evaluating the effects of scaffolds and treatments on angiogenesis.
46

Semendiak, Yevhenii. "From Parameter Tuning to Dynamic Heuristic Selection". 2020. https://tud.qucosa.de/id/qucosa%3A71044.

Full text
Abstract
The balance between exploration and exploitation plays a crucial role in solving combinatorial optimization problems. This balance is reached by two general techniques: using an appropriate problem solver and setting its parameters properly. Both problems have been widely studied in the past, and research continues to this day. The latest studies in the field of automated machine learning propose merging both problems, solving them at design time, and later strengthening the results at runtime. To the best of our knowledge, a generalized approach for solving the parameter setting problem in heuristic solvers has not yet been proposed, and consequently the concept of merging heuristic selection and parameter control has not been introduced. In this thesis, we propose an approach for generic parameter control in meta-heuristics by means of reinforcement learning (RL). Going a step further, we suggest a technique for merging the heuristic selection and parameter control problems and solving them at runtime using an RL-based hyper-heuristic. The evaluation of the proposed parameter control technique on the symmetric traveling salesman problem (TSP) showed its applicability: it reached the performance of the underlying meta-heuristic tuned online and used in isolation.
Our approach provides results on par with the best underlying heuristics with tuned parameters.
1 Introduction 1
1.1 Motivation 1
1.2 Research objective 2
1.3 Solution overview 2
2 Background and Related Work Analysis 3
2.1 Optimization Problems and their Solvers 3
2.2 Heuristic Solvers for Optimization Problems 9
2.3 Setting Algorithm Parameters 19
2.4 Combined Algorithm Selection and Hyper-Parameter Tuning Problem 27
2.5 Conclusion on Background and Related Work Analysis 28
3 Online Selection Hyper-Heuristic with Generic Parameter Control 31
3.1 Combined Parameter Control and Algorithm Selection Problem 31
3.2 Search Space Structure 32
3.3 Parameter Prediction Process 34
3.4 Low-Level Heuristics 35
3.5 Conclusion of Concept 36
4 Implementation Details 37
4.2 Search Space 40
4.3 Prediction Process 43
4.4 Low Level Heuristics 48
4.5 Conclusion 52
5 Evaluation 55
5.1 Optimization Problem 55
5.2 Environment Setup 56
5.3 Meta-heuristics Tuning 56
5.4 Concept Evaluation 60
5.5 Analysis of HH-PC Settings 74
5.6 Conclusion 79
6 Conclusion 81
7 Future Work 83
7.1 Prediction Process 83
7.2 Search Space 84
7.3 Evaluations and Benchmarks 84
Bibliography 87
A Evaluation Results 99
A.1 Results in Figures 99
A.2 Results in numbers 105
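RL-based selection among low-level heuristics, in its simplest epsilon-greedy flavor, can be sketched as a bandit loop. This is a generic illustration of the idea, not the thesis's actual prediction process; the deterministic reward model and all names are our assumptions:

```python
import random

def run_bandit(rewards, steps=2000, epsilon=0.1, alpha=0.1, seed=0):
    """Toy epsilon-greedy selection among low-level heuristics.
    rewards[i] is a deterministic reward for heuristic i (for
    illustration); a real hyper-heuristic would instead observe the
    change in solution quality after applying heuristic i."""
    rng = random.Random(seed)
    q = [0.0] * len(rewards)  # learned value estimate per heuristic
    for _ in range(steps):
        if rng.random() < epsilon:
            i = rng.randrange(len(rewards))                  # explore
        else:
            i = max(range(len(rewards)), key=q.__getitem__)  # exploit
        q[i] += alpha * (rewards[i] - q[i])  # incremental value update
    return q
```

Parameter control fits the same loop by treating each discretized parameter setting as an arm, which is the sense in which heuristic selection and parameter control can be merged at runtime.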
47

Pepi, Chiara. "Suitability of dynamic identification for damage detection in the light of uncertainties on a cable stayed footbridge". Doctoral thesis, 2019. http://hdl.handle.net/2158/1187384.

Full text
Abstract
Structural identification is a very important task, especially in countries characterized by a significant historical and architectural patrimony and strongly vulnerable infrastructure subjected to inherent degradation with time and to natural hazards, e.g. seismic loads. The structural response of existing constructions is usually estimated using suitable numerical models driven by a set of geometrical and/or mechanical parameters that are mainly unknown and/or affected by different levels of uncertainty. Some of this information can be obtained from experimental tests, but it is practically impossible to have all the data required for reliable response estimation. For these reasons, it is current practice to calibrate some of the significant unknown and/or uncertain geometrical and mechanical parameters using measurements of the actual response (static and/or dynamic) and solving an inverse structural problem. Model calibration is also affected by uncertainties due to the quality (e.g. signal-to-noise ratio, random properties) of the measured data and to the algorithms used to estimate structural parameters. In this thesis, a new robust framework for structural identification is proposed in order to obtain a reliable numerical model that can be used both for random response estimation and for structural health monitoring. First, a parametric numerical model of the existing structural system is developed and updated using a probabilistic Bayesian framework. Second, virtual samples of the structural response under random loads are evaluated. Third, these virtual samples are used as virtual experimental responses in order to analyze the uncertainties in the main modal parameters while varying the number and time length of samples, the identification technique and the target response. Finally, the information given by the measurement uncertainties is used to assess the capability of a vibration-based damage identification method.
48

Corredor, Edward Alexis Baron. "Assessment and identification of concrete box-girder bridges properties using surrogate model calibration: case study: El Tablazo bridge". Master's thesis, 2017. http://hdl.handle.net/1822/70634.

Full text
Abstract
Integrated master's dissertation in Civil Engineering
This work consists of identifying and assessing the properties of a pre-stressed concrete bridge related to material, geometric and physical sources, through a surrogate model. This mathematical model makes it possible to generate a relationship between bridge properties and the bridge's dynamic response, with the purpose of creating a tool to predict the analytical values of the studied properties from measured eigenfrequencies; in this case, the identification of damage scenarios is introduced, providing an application to validate the generated metamodel (Artificial Neural Network - ANN). A FE model is developed to simulate the studied structure, a Colombian bridge called El Tablazo, one of the highest of this type (box-girder bridge) in the country, with a total length of 560 meters, located on the Sogamoso riverbed in the region of Santander, Colombia. Once the damage scenarios are defined, this work indicates the basis for future structural health monitoring plans.
49

Pal, Sunit. "Design, Synthesis and Conformational Analysis of Hydrogen Bond Surrogate (HBS) Stabilized Helices in Natural Sequences. Helically Constrained Peptides for Potential DNA-Binding". Thesis, 2020. https://etd.iisc.ac.in/handle/2005/4837.

Full text
Abstract
The thesis titled “Design, Synthesis and Conformational Analysis of Hydrogen Bond Surrogate (HBS) Stabilized Helices in Natural Sequences. Helically Constrained Peptides for Potential DNA-Binding” describes the development of a novel covalent hydrogen bond surrogate (HBS) model and its incorporation into short (4-8 residue) unstructured peptide sequences of coded amino acids, through a facile solution-phase synthetic method (SPSM), to constrain them into α-helical conformations with the highest known stabilities and helicities. The synthetic protocol was developed for mass-scale combinatorial synthesis of helical peptidomimetics. NMR, FT-IR and CD spectra and molecular dynamics simulations of variants of the HBS-constrained helical peptidomimetics were analyzed to determine the optimum number of sp2 atoms and the residue preferences that yield both the α-helical and the 310-helical folds with high structural integrity in the shortest sequences. The HBS-constrained helical peptidomimetics were used to derive experimental evidence that the two-state helix-coil transition occurs at each residue during helix folding and that this process is entropically driven. Further, the role of temperature in the denaturation of secondary structures was investigated using these HBS-constrained helical models. Helical peptidomimetics of the DNA-binding domain in the zinc-finger human TTK protein have been synthesized and have proven to be promising mimics for DNA-binding and subsequent transcription regulation.
CSIR
50

Kita, Alban. "An Innovative SHM Solution for Earthquake-Induced Damage Identification in Historic Masonry Structures". Doctoral thesis, 2020. http://hdl.handle.net/2158/1192486.

Full text
Abstract
The present Ph.D. thesis was developed within a collaboration between the Department of Civil and Environmental Engineering of the University of Perugia, Italy, and the Department of Civil Engineering of the University of Minho, Portugal. The main objective of this research work was the development and validation of an innovative methodology for the detection, localization and quantification of earthquake-induced damage in historic masonry structures. The high cultural, economic and political value set upon historic buildings all over the world has made earthquake-induced damage identification, as well as the preservation and conservation of architectural heritage, a subject of outstanding importance. The proposed methodology, called DORI, is based on the combination of data-driven and innovative model-based methods, addressing Damage identification based on Operational modal analysis (OMA), Rapid surrogate modeling and Incremental dynamic analysis (IDA) for Cultural Heritage (CH) masonry buildings subjected to earthquakes. In more detail, the DORI methodology proposes static-and-dynamic data fusion in the OMA-based damage detection method and extends it through the introduction and implementation of two independent and complementary innovative model-based methods for the localization and quantification of earthquake-induced damage in permanently monitored historic masonry buildings: the former is a surrogate model-based method, a rapid tool which combines long-term vibration monitoring data (i.e. OMA) and numerical modeling, while the latter is based on non-linear seismic IDA.
The Thesis focuses on the validation of different aspects of the DORI methodology through application to four case study structures: an internationally well-known laboratory masonry structure, called the Brick House, and three CH masonry buildings equipped with permanent Structural Health Monitoring (SHM) systems, namely the Consoli Palace, the Sciri Tower and the San Pietro Bell Tower. The enhanced vibration-based SHM tool, which introduces crack amplitudes as predictors in the dynamic multiple linear regression (MLR) model, was validated on the Consoli Palace, enabling rapid and automated earthquake-induced damage detection, even for small structural damage at an early stage, conceivably caused by a moderate or light seismic event. Afterwards, the surrogate model-based procedure for earthquake-induced damage detection and localization was applied to the Sciri Tower, using long-term vibration monitoring data and numerical modeling. In particular, a quadratic surrogate model is used whose objective function considers not only experimentally identified and numerically predicted damage-induced decays in natural frequencies but also changes in mode shapes. The procedure was validated considering both simulated damage scenarios and a slight change in structural behavior observed experimentally after a seismic event. Finally, the seismic IDA-based method, introduced for the first time in this Thesis and aimed at the localization and quantification of earthquake-induced damage in masonry structures, was applied to the Brick House and the San Pietro Bell Tower. It relies on a priori IDA carried out with a numerical model and on the construction of multidimensional IDA curve sets relating meaningful local damage parameters to selected seismic intensity measures. The IDA-based procedure was shown to correctly localize damage in specific parts of the structures and to quantify earthquake-induced damage with a good level of approximation.
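To illustrate the kind of objective a quadratic surrogate model of this type might minimize, the following sketch (a hypothetical implementation, not the thesis's actual code; function names and weights are assumptions) combines squared residuals of natural-frequency decays with mode-shape changes measured through the Modal Assurance Criterion (MAC):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode shapes (1 = identical)."""
    return np.abs(phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

def objective(f_decay_exp, f_decay_num, modes_exp, modes_num, w_f=1.0, w_m=1.0):
    """Quadratic discrepancy between observed and model-predicted damage
    effects: squared residuals of frequency decays plus (1 - MAC)^2 terms
    penalizing mode-shape changes.  Evaluated over a grid of candidate
    damage scenarios, the minimizer would indicate the damage location."""
    freq_term = np.sum((np.asarray(f_decay_exp) - np.asarray(f_decay_num)) ** 2)
    mode_term = np.sum([(1.0 - mac(pe, pn)) ** 2
                        for pe, pn in zip(modes_exp, modes_num)])
    return w_f * freq_term + w_m * mode_term
```

A scenario whose predicted decays and mode shapes match the identified ones exactly scores zero; any mismatch in either term increases the objective.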
The results are particularly interesting in the case of the San Pietro Bell Tower, thanks to the integration of the IDA-based damage identification with seismic SHM data recorded during the 2016 Central Italy seismic sequence, which allowed some original response intensity measures to be proposed and exploited. In conclusion, the DORI methodology proposed in this Thesis for earthquake-induced damage detection, localization and quantification is a novel methodological approach, successfully applied and validated on historic masonry structures, and constitutes a promising tool for rapid post-earthquake damage assessment of CH structures under long-term SHM.
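The IDA curve sets described above relate a seismic intensity measure to a local damage parameter. A minimal sketch of how such a precomputed curve could be queried for a recorded event (the curve values below are invented for illustration, not taken from the thesis) is:

```python
import numpy as np

# Hypothetical precomputed IDA curve for one structural macro-element:
# intensity measure (e.g. peak ground acceleration, g) vs. a local damage
# parameter (e.g. maximum crack width, mm), obtained a priori from
# incremental dynamic analyses of the numerical model.
ida_im = np.array([0.0, 0.05, 0.10, 0.20, 0.35, 0.50])  # intensity measure
ida_dp = np.array([0.0, 0.00, 0.02, 0.15, 0.60, 1.40])  # damage parameter

def estimate_damage(im_recorded, im_curve=ida_im, dp_curve=ida_dp):
    """Estimate the local damage parameter for a recorded seismic intensity
    by linear interpolation along the IDA curve (np.interp clips queries
    outside the tabulated range to the endpoint values)."""
    return float(np.interp(im_recorded, im_curve, dp_curve))
```

Repeating this query for the curve of each macro-element yields a spatial map of estimated damage, which is the sense in which the IDA-based procedure both localizes and quantifies earthquake-induced damage.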
