Theses on the topic « Dynamical Inverse Problem »

To see other types of publications on this topic, follow the link: Dynamical Inverse Problem.

Consult the top 50 theses for your research on the topic « Dynamical Inverse Problem ».

Next to each source in the list of references there is an « Add to bibliography » button. Click on it, and we will automatically generate the bibliographic reference to the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever these details are included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Rachele, Lizabeth. « An inverse problem in elastodynamics ». Thesis, University of Washington (UW restricted), 1996. http://hdl.handle.net/1773/5735.

Full text
2

Tregidgo, Henry. « Inverse problems and control for lung dynamics ». Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/inverse-problems-and-control-for-lung-dynamics(0f3224e6-7449-4417-bd2b-8e48ec88e2bf).html.

Full text
Abstract:
Mechanical ventilation is vital for the treatment of patients in respiratory intensive care and can be life saving. However, the risks of regional pressure gradients and over-distension must be balanced with the need to maintain function. For these reasons mechanical ventilation can benefit from the regional information provided by bedside imaging such as electrical impedance tomography (EIT). In this thesis we develop and test methods to retrieve clinically meaningful measures of lung function from EIT and examine the feasibility of closing the feedback loop to enable EIT-guided control of mechanical ventilation. Working towards this goal we develop a reconstruction algorithm capable of providing fast absolute values of conductivity from EIT measurements. We couple the resulting conductivity time series to a compartmental ordinary differential equation (ODE) model of lung function in order to recover regional parameters of elastance and airway resistance. We then demonstrate how these parameters may be used to generate optimised pressure controls for mechanical ventilation that expose the lungs to minimal gradients of pressure and are stable with respect to EIT measurement errors. The EIT reconstruction algorithm we develop is capable of producing low dimensional absolute values of conductivity in real time after a limited additional setup time. We show that this algorithm retains the ability to give fast feedback on regional lung changes. We also describe methods of improving computational efficiency for general Gauss-Newton type EIT algorithms. In order to couple reconstructed conductivity time series to our ODE model we describe and test the recovery of regional ventilation distributions through a process of regularised differentiation. We prove that the parameters of our ODE model are recoverable from these ventilation distributions apart from the degenerate case where all compartments have the same parameters. We then test this recovery process under varying levels of simulated EIT measurement and modelling errors. Finally we examine the ODE lung model using control theory. We prove that the ODE model is controllable for a wide range of parameter values and link controllability to observable ventilation patterns in the lungs. We demonstrate the generation and optimisation of pressure controls with minimal time gradients and provide a bound on the resulting magnitudes of these pressures. We then test the control generation process using ODE parameter values recovered through EIT simulations at varying levels of measurement noise. Through this work we have demonstrated that EIT reconstructions can be of benefit to the control of mechanical ventilation.
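
As a minimal illustration of the compartmental lung model mentioned in this abstract, the sketch below simulates a single-compartment equation of motion R·dV/dt + E·V = P(t) − PEEP and recovers the resistance and elastance by linear least squares; all parameter values and the ventilator pressure waveform are invented for the example, and the thesis itself couples EIT-derived regional volumes to a multi-compartment model.

```python
import numpy as np

# Single-compartment lung model  R*dV/dt + E*V = P(t) - PEEP.
# R (resistance), E (elastance) and the square-wave ventilator pressure are
# made-up values for a synthetic example.
R_true, E_true, PEEP = 3.0, 25.0, 5.0                 # cmH2O.s/L, cmH2O/L, cmH2O
t = np.linspace(0.0, 6.0, 601)                        # 6 s of breathing, 10 ms steps
dt = t[1] - t[0]
P = PEEP + 10.0 * (np.sin(2 * np.pi * t / 3.0) > 0)   # ventilator pressure

# Forward (direct) simulation of the volume above FRC, explicit Euler.
V = np.zeros_like(t)
for i in range(len(t) - 1):
    V[i + 1] = V[i] + dt * (P[i] - PEEP - E_true * V[i]) / R_true

# Inverse step: recover R and E by least squares on  R*dV/dt + E*V = P - PEEP.
dVdt = np.gradient(V, t)
A = np.column_stack([dVdt, V])
R_est, E_est = np.linalg.lstsq(A, P - PEEP, rcond=None)[0]
print(f"R: true {R_true}, estimated {R_est:.2f}")
print(f"E: true {E_true}, estimated {E_est:.2f}")
```
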
3

Hellio, Gabrielle. « Modèles stochastiques de mesures archéomagnétiques ». Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAU004/document.

Full text
Abstract:
The aim of this thesis is to build regional and global stochastic models of the magnetic field over the last four millennia from archeomagnetic measurements. The sparse distribution of these data in space and time, together with their large measurement and dating errors, makes the field reconstruction an ill-posed inverse problem. To determine the best solution, prior information on the model must be chosen, which usually consists of arbitrary regularizations of the field in space and time (smoothing). Instead, we use the temporal statistics of the geomagnetic field, derived from satellite, observatory and paleomagnetic measurements and supported by numerical simulations, to define the prior information through auto-covariance functions. This Bayesian method avoids the arbitrary support functions, such as splines, usually needed to interpolate the model in time. The result is an ensemble of possible realizations of the magnetic field, whose dispersion characterizes the model uncertainty. To account for dating errors, we also develop a method based on Markov chain Monte Carlo (MCMC), which efficiently explores the possible observation dates and selects the most probable models. It improves on the classical bootstrap method, which gives the same weight to random draws of dates with very different probabilities. The ensembles of realizations selected by the MCMC method yield a probability density function instead of a single curve. The Bayesian method combined with MCMC provides regional curves with more rapid variations than those obtained in previous studies; the resulting probability densities are not necessarily Gaussian, and the method also refines the age estimate of each observation. The Bayesian method has been used to build global models whose axial dipole shows more rapid variations than in previous studies. Moreover, the magnetic field obtained for the most recent epochs is reasonably similar to that built from direct measurements (satellite, observatory and historical data), despite far fewer data and a much less homogeneous distribution. The models from this study offer an alternative to existing regularized models and can be used for data assimilation together with dynamical models of the Earth's core.
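
A minimal sketch of the core idea of replacing spline regularisation by a prescribed temporal auto-covariance: Gaussian-process interpolation of irregularly dated, noisy observations, returning an ensemble of realisations whose spread measures the model uncertainty. The covariance amplitude, timescale and synthetic data below are illustrative, not the thesis's geomagnetic statistics.

```python
import numpy as np

# Gaussian-process interpolation in time with a prescribed auto-covariance
# prior (here squared-exponential); amplitude, timescale and the synthetic
# data are invented for illustration.
rng = np.random.default_rng(0)

def autocov(t1, t2, sigma=5.0, tau=300.0):
    """Auto-covariance between two sets of dates (years)."""
    return sigma**2 * np.exp(-0.5 * (t1[:, None] - t2[None, :])**2 / tau**2)

t_obs = np.sort(rng.uniform(-2000.0, 0.0, 40))        # irregular observation dates
noise = 1.5                                           # measurement standard deviation
y_obs = 40.0 + 5.0 * np.sin(2 * np.pi * t_obs / 1500.0) + noise * rng.standard_normal(40)

t_grid = np.linspace(-2000.0, 0.0, 200)               # dates where the model is wanted
K_oo = autocov(t_obs, t_obs) + noise**2 * np.eye(t_obs.size)
K_go = autocov(t_grid, t_obs)

alpha = np.linalg.solve(K_oo, y_obs - y_obs.mean())
post_mean = y_obs.mean() + K_go @ alpha
post_cov = autocov(t_grid, t_grid) - K_go @ np.linalg.solve(K_oo, K_go.T)

# Ensemble of realisations whose dispersion quantifies the model uncertainty.
ensemble = rng.multivariate_normal(post_mean, post_cov + 1e-8 * np.eye(t_grid.size), size=20)
print(ensemble.shape)                                 # (20, 200)
```
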
4

Lebel, David. « Statistical inverse problem in nonlinear high-speed train dynamics ». Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC2189/document.

Full text
Abstract:
This work deals with the development of a health-state monitoring method for high-speed train suspensions using in-service measurements of the train dynamic response by embedded acceleration sensors. A rolling train is a dynamical system excited by track-geometry irregularities. The suspension elements play a key role in ride safety and comfort. Since the train dynamic response depends on the mechanical characteristics of the suspensions, information about their state can be inferred from acceleration measurements recorded by sensors embedded in the train, and knowing the actual state of the suspensions would allow more efficient train maintenance. Mathematically, the proposed monitoring solution consists in solving a statistical inverse problem. It relies on a computational model of the train dynamics and takes into account model uncertainty and measurement errors. A Bayesian calibration approach is adopted to identify the probability distribution of the mechanical parameters of the suspension elements from joint measurements of the system input (the track-geometry irregularities) and output (the train dynamic response). Classical Bayesian calibration requires computing the likelihood function from the stochastic model of the system output and the experimental data. Because each run of the computational model is numerically expensive, and because of the functional nature of the system input and output, a novel Bayesian calibration method using a Gaussian-process surrogate model of the likelihood function is proposed. This thesis shows how such a random surrogate model can be used to estimate the probability distribution of the model parameters. The proposed method accounts for the new type of uncertainty induced by the use of a surrogate model, which is necessary to correctly assess the calibration accuracy. This novel Bayesian calibration method has been tested on the railway application and has produced conclusive results; validation was carried out through numerical experiments. Finally, the long-term evolution of the suspension mechanical parameters has been studied using actual measurements of the train dynamic response.
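
A toy sketch of Bayesian calibration through a Gaussian-process surrogate of the log-likelihood, the ingredient highlighted in this abstract: the "expensive" model here is a trivial linear fit, the kernel and design are arbitrary choices, and the Metropolis sampler runs on the surrogate instead of the true likelihood.

```python
import numpy as np

# Bayesian calibration with a Gaussian-process surrogate of the log-likelihood.
# The "expensive" model is a trivial linear fit y = theta * x; true parameter,
# noise level, kernel and design are all invented for this toy example.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
theta_true, sigma = 2.5, 0.1
y_obs = theta_true * x + sigma * rng.standard_normal(x.size)

def log_lik(theta):
    """Stand-in for an expensive log-likelihood evaluation."""
    resid = y_obs - theta * x
    return -0.5 * np.sum(resid**2) / sigma**2

# 1) Evaluate the expensive model only on a small design of parameter values.
theta_design = np.linspace(1.0, 4.0, 8)
L_design = np.array([log_lik(t) for t in theta_design])

# 2) Fit a Gaussian-process surrogate of the log-likelihood.
def kern(a, b, amp=50.0, ell=0.5):
    return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

K = kern(theta_design, theta_design) + 1e-6 * np.eye(theta_design.size)
alpha = np.linalg.solve(K, L_design - L_design.mean())

def surrogate(theta):
    return float(L_design.mean() + kern(np.atleast_1d(theta), theta_design) @ alpha)

# 3) Metropolis sampling of the posterior using the cheap surrogate (flat prior on [1, 4]).
samples, cur = [], 2.0
ll_cur = surrogate(cur)
for _ in range(5000):
    prop = cur + 0.1 * rng.standard_normal()
    if 1.0 <= prop <= 4.0:
        ll_prop = surrogate(prop)
        if np.log(rng.uniform()) < ll_prop - ll_cur:
            cur, ll_cur = prop, ll_prop
    samples.append(cur)

print("surrogate posterior mean: %.2f  (true parameter %.1f)" % (np.mean(samples[1000:]), theta_true))
```
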
5

Lyubchyk, Leonid, et Galina Grinberg. « Inverse Dynamic Models in Chaotic Systems Identification and Control Problems ». Thesis, Ternopil National Economic University, 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/36824.

Full text
Abstract:
An inverse dynamic models approach to chaotic system synchronization in the presence of uncertain parameters is considered. The problem consists in identifying and compensating an unknown state-dependent parametric disturbance that describes unmodelled dynamics generating chaotic motion. Based on the inverse model control method, disturbance observers and compensators are synthesized. A control law is proposed that ensures stabilization of the chaotic system motion along a master reference trajectory. Results of a computational simulation of controlled Rössler attractor synchronization are also presented.
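
For orientation, the sketch below integrates a master Rössler system and a slave copy synchronised by plain full-state proportional feedback; this is ordinary diffusive coupling, not the inverse-model disturbance observer of the thesis, and the parameters and gain are standard illustrative values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Master Rössler system and a slave copy driven by full-state proportional
# coupling k*(master - slave).  Plain diffusive feedback, not the thesis's
# inverse-model disturbance observer; a, b, c and k are illustrative values.
a, b, c, k = 0.2, 0.2, 5.7, 1.0

def coupled(t, s):
    x1, y1, z1, x2, y2, z2 = s
    master = [-y1 - z1, x1 + a * y1, b + z1 * (x1 - c)]
    slave = [-y2 - z2 + k * (x1 - x2),
             x2 + a * y2 + k * (y1 - y2),
             b + z2 * (x2 - c) + k * (z1 - z2)]
    return master + slave

s0 = [1.0, 1.0, 1.0, -3.0, 2.0, 0.5]           # different initial states
sol = solve_ivp(coupled, (0.0, 60.0), s0, max_step=0.01)
err = np.linalg.norm(sol.y[:3] - sol.y[3:], axis=0)
print("synchronisation error: start %.2f -> end %.2e" % (err[0], err[-1]))
```
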
6

Sehlstedt, Niklas. « Hybrid methods for inverse force estimation in structural dynamics ». Doctoral thesis, KTH, Vehicle Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3528.

Full text
7

Herman, Michael. « Simultaneous estimation of rewards and dynamics in inverse reinforcement learning problems ». Thesis, Universität Freiburg, 2020. http://d-nb.info/1204003297/34.

Full text
8

Lefeuvre, Thibault. « Sur la rigidité des variétés riemanniennes ». Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS562/document.

Full text
Abstract:
A Riemannian manifold is said to be rigid if the lengths of its periodic geodesics (in the case of a closed manifold) or of its scattered geodesics (in the case of an open manifold) allow one to recover the full geometry of the manifold. This notion naturally arises in imaging devices such as X-ray tomography. Using an analytic framework introduced by Guillarmou and based on microlocal analysis (more precisely on recent techniques of Faure-Sjostrand and Dyatlov-Zworski for the fine analytic study of Anosov flows), we show that the marked length spectrum, that is, the collection of lengths of periodic geodesics marked by homotopy, of a closed Anosov manifold or of an Anosov manifold with hyperbolic cusps locally determines its metric. In the case of an open manifold with hyperbolic trapped set, we show that the marked boundary distance, that is, the lengths of the scattered geodesics marked by homotopy, locally determines the metric. Finally, in the case of an asymptotically hyperbolic surface, we show that a suitable notion of renormalized distance between pairs of points on the boundary at infinity allows the geometry of the surface to be reconstructed globally.
9

Simon, Guillaume. « Endogeneity and instrumental variables in dynamic processes : inverse problems in finance ». Thesis, Toulouse 1, 2011. http://www.theses.fr/2011TOU10061.

Full text
Abstract:
The objective of this thesis is to provide a theoretical framework for the definition of endogeneity in continuous-time processes. Defining endogeneity in the static case is already difficult, so the aim of this work is to understand the implications and the mathematical framework needed to define endogeneity for dynamic processes; this is the subject of the first chapter. We first give an extension of separable models in terms of semi-martingale decomposition. For non-separable models, we then define the function of interest as a stopping time for an additional noise process, whose role is played by a Brownian motion for diffusions and by a Poisson process for counting processes. This thesis was carried out under a CIFRE agreement with Société Générale Asset Management (now Lyxor AM). SGAM was a hedge fund for which processing the information contained in databases is a constant and difficult problem; understanding the nature of the processes underlying hedge-fund lifetimes in these databases is therefore essential, and this is the purpose of the second chapter. The third chapter gives a clear answer to a rarely addressed question, the causal effect of certain endogenous variables on fund lifetimes, using the empirical findings of the second chapter and the results of the first. Finally, since solving such problems requires inverse problem theory, an original application of this theory to portfolio allocation is also considered in the last chapter.
10

Rivers, Derick Lorenzo. « Dynamic Bayesian Approaches to the Statistical Calibration Problem ». VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3599.

Full text
Abstract:
The problem of statistical calibration of a measuring instrument can be framed both in a statistical context and in an engineering context. In the first, the problem is dealt with by distinguishing between the "classical" approach and the "inverse" regression approach. Both of these models are static models and are used to estimate "exact" measurements from measurements that are affected by error. In the engineering context, the variables of interest are considered at the time at which the measurement is observed. The Bayesian time series analysis method of Dynamic Linear Models (DLM) can be used to monitor the evolution of the measures, thus introducing a dynamic approach to statistical calibration. The research presented employs Bayesian methodology to perform statistical calibration. The DLM framework is used to capture the time-varying parameters that may be changing or drifting over time. Dynamic approaches to the linear, nonlinear, and multivariate calibration problems are presented in this dissertation. Simulation studies are conducted in which the dynamic models are compared to some well-known "static" calibration approaches from the literature, from both the frequentist and Bayesian perspectives. Applications to microwave radiometry are given.
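
A minimal sketch of the dynamic ingredient: a local-level dynamic linear model θ_t = θ_{t−1} + w_t, y_t = θ_t + v_t filtered with the standard Kalman recursions to track a drifting calibration level. The variances and the simulated drift are illustrative, not taken from the dissertation.

```python
import numpy as np

# Local-level DLM:  theta_t = theta_{t-1} + w_t,   y_t = theta_t + v_t,
# filtered with the standard Kalman recursions to track a drifting level.
rng = np.random.default_rng(2)
T, W, V = 200, 0.01, 0.25        # steps, state (drift) variance, observation variance

theta = np.cumsum(np.sqrt(W) * rng.standard_normal(T)) + 10.0   # true drifting value
y = theta + np.sqrt(V) * rng.standard_normal(T)                 # noisy measurements

m, C = 0.0, 1e6                  # vague prior on the initial level
filtered = np.empty(T)
for t in range(T):
    a, R = m, C + W              # predict the state
    f, Q = a, R + V              # one-step-ahead forecast of y_t
    K = R / Q                    # Kalman gain
    m, C = a + K * (y[t] - f), (1 - K) * R    # update
    filtered[t] = m

print("last true value %.3f, last filtered estimate %.3f" % (theta[-1], filtered[-1]))
```
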
11

Wogrin, Sonja. « Model reduction for dynamic sensor steering : a Bayesian approach to inverse problems ». Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43739.

Full text
Abstract:
In many settings, distributed sensors provide dynamic measurements over a specified time horizon that can be used to reconstruct information such as parameters, states or initial conditions. This estimation task can be posed formally as an inverse problem: given a model and a set of measurements, estimate the parameters of interest. We consider the specific problem of computing in real-time the prediction of a contamination event, based on measurements obtained by mobile sensors. The spread of the contamination is modeled by the convection diffusion equation. A Bayesian approach to the inverse problem yields an estimate of the probability density function of the initial contaminant concentration, which can then be propagated through the forward model to determine the predicted contaminant field at some future time and its associated uncertainty distribution. Sensor steering is effected by formulating and solving an optimization problem that seeks the sensor locations that minimize the uncertainty in this prediction. An important aspect of this Dynamic Sensor Steering Algorithm is the ability to execute in real-time. We achieve this through reduced-order modeling, which (for our two-dimensional examples) yields models that can be solved two orders of magnitude faster than the original system, but only incur average relative errors of magnitude O(10⁻³). The methodology is demonstrated on the contaminant transport problem, but is applicable to a broad class of problems where we wish to observe certain phenomena whose location or features are not known a priori.
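
A minimal linear-Gaussian version of the Bayesian inverse step described above: with y = Gx + e, a Gaussian prior on x and Gaussian noise, the posterior is available in closed form and can be pushed through the forward operator to get a prediction and its uncertainty. The blur operator G below is a stand-in for the (reduced) convection-diffusion model; all sizes and variances are invented.

```python
import numpy as np

# Linear-Gaussian Bayesian inverse problem:  y = G x + e,  x ~ N(0, P),  e ~ N(0, s^2 I).
# Posterior: cov = (G^T G / s^2 + P^{-1})^{-1},  mean = cov @ G^T y / s^2.
rng = np.random.default_rng(3)
n, s = 50, 0.05                                   # unknowns, noise std

grid = np.arange(n)
G = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 3.0) ** 2)   # toy blur forward operator
G /= G.sum(axis=1, keepdims=True)

x_true = np.exp(-0.5 * ((grid - 20) / 2.5) ** 2)  # "initial contaminant" profile
y = G @ x_true + s * rng.standard_normal(n)

P = np.exp(-np.abs(grid[:, None] - grid[None, :]) / 5.0)          # smooth prior covariance
post_cov = np.linalg.inv(G.T @ G / s**2 + np.linalg.inv(P))
post_mean = post_cov @ (G.T @ y) / s**2

# Push the posterior through the forward model: this predictive uncertainty is
# what a sensor-steering step would try to minimise.
pred_cov = G @ post_cov @ G.T
print("reconstruction error: %.3f" % (np.linalg.norm(post_mean - x_true) / np.linalg.norm(x_true)))
print("mean predictive std:  %.3f" % np.sqrt(np.diag(pred_cov)).mean())
```
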
12

Cheema, Prasad. « Machine Learning for Inverse Structural-Dynamical Problems : From Bayesian Non-Parametrics, to Variational Inference, and Chaos Surrogates ». Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24139.

Full text
Abstract:
To ensure that the design of a structure is both robust and efficient, engineers often investigate inverse dynamical modeling problems. In particular, there are three archetypal inverse modeling problems which arise in the context of structural engineering. These are respectively: (i) The eigenvalue assignment problem, (ii) Bayesian model updating, and (iii) Operational modal analysis. It is the intent of this dissertation to investigate all three aforementioned inverse dynamical problems within the broader context of modern machine learning advancements. Firstly, the inverse eigenvalue assignment problem will be investigated via performing eigenvalue placement with respect to several different mass-spring systems. It will be shown that flexible, and robust inverse design analysis is possible by appealing to black box variational methods. Secondly, stochastic model updating will be explored via an in-house, physical T-tail structure. This will be addressed through the careful consideration of polynomial chaos theory, and Bayesian model updating, as a means to rapidly quantify structural uncertainties, and perform model updating between a finite element simulation, and the physical structure. Finally, the monitoring phase of a structure often represents an important and unique challenge for engineers. This dissertation will explore the notion of operational modal analysis for a cable-stayed bridge, by building upon a Bayesian non-parametric approach. This will be shown to circumvent the need for many classic thresholds, factors, and parameters which have often hindered analysis in this area. Ultimately, this dissertation is written with the express purpose of critically assessing modern machine learning algorithms in the context of some archetypal inverse dynamical modeling problems. It is therefore hoped that this dissertation will act as a point of reference, and inspiration for further work, and future engineers in this area.
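
A deterministic toy version of the eigenvalue assignment problem named in this abstract: choose the stiffnesses of a two-degree-of-freedom mass-spring chain so that its natural frequencies hit prescribed targets. Plain least squares is used here instead of the black-box variational treatment of the thesis; the masses and the reference design are illustrative.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import least_squares

# Assign the natural frequencies of a 2-DOF mass-spring chain by choosing the
# spring stiffnesses k1, k2.  Deterministic least squares on the generalized
# eigenvalues of (K, M); masses and the reference design are illustrative.
M = np.diag([1.0, 2.0])

def eigvals(k):
    k1, k2 = k
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    return np.sort(eigh(K, M, eigvals_only=True))

k_ref = np.array([400.0, 150.0])            # reference design -> feasible targets
target = eigvals(k_ref)

res = least_squares(lambda k: eigvals(k) - target, x0=[100.0, 50.0], bounds=(1e-3, np.inf))
print("assigned stiffnesses:", np.round(res.x, 1))
print("target frequencies (Hz):  ", np.round(np.sqrt(target) / (2 * np.pi), 3))
print("achieved frequencies (Hz):", np.round(np.sqrt(eigvals(res.x)) / (2 * np.pi), 3))
```
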
13

Carrillo, Oscar Javier Begambre. « Detecção de dano a partir da resposta dinâmica da estrutura : estudo analítico com aplicação a estruturas do tipo viga ». Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/18/18134/tde-05042016-135235/.

Full text
Abstract:
The purpose of this work is to study dynamic methods for damage detection in beam structures, with particular attention to methods based on dynamically measured flexibility. The reviewed methods belong to the family of Non-Destructive Damage Detection (NDD) techniques, in which damage is located by comparing the undamaged and damaged states of the structure. In this work the inverse vibration problem is presented and the structure's static flexibility matrix is computed from its modal parameters. A Finite Element Model (FEM) is used to show that a clear pattern exists in the changes of the flexibility matrix produced by the presence of damage. Based on these patterns, the changes in the flexibility matrix are used to identify and locate damage within the structure, as illustrated by the several examples presented.
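
A compact numerical illustration of dynamically measured flexibility: F ≈ Σ φᵢφᵢᵀ/ωᵢ² built from the first few mass-normalised modes of a spring-mass chain, with damage simulated as a local stiffness loss. The jump in the flexibility diagonal marks the damaged element; the thesis works with beam finite-element models rather than this toy chain.

```python
import numpy as np
from scipy.linalg import eigh

# Modal flexibility  F ~ sum_i phi_i phi_i^T / omega_i^2  from the first few
# mass-normalised modes of a fixed-free spring-mass chain.  Damage is a 30 %
# stiffness loss in one spring; chain size and damage level are illustrative.
def chain(stiff, mass=1.0):
    n = len(stiff)
    K = np.zeros((n, n))
    for i, ki in enumerate(stiff):
        K[i, i] += ki
        if i > 0:
            K[i - 1, i - 1] += ki
            K[i - 1, i] -= ki
            K[i, i - 1] -= ki
    return K, mass * np.eye(n)

def modal_flexibility(K, M, n_modes):
    lam, phi = eigh(K, M)                          # mass-normalised mode shapes
    return phi[:, :n_modes] @ np.diag(1.0 / lam[:n_modes]) @ phi[:, :n_modes].T

n, n_modes = 10, 4                                  # only a few modes are "measured"
stiff_ok = np.full(n, 1000.0)
stiff_dmg = stiff_ok.copy()
stiff_dmg[6] *= 0.7                                 # damage in element 6

delta = np.diag(modal_flexibility(*chain(stiff_dmg), n_modes)
                - modal_flexibility(*chain(stiff_ok), n_modes))
print("suspected damaged element:", int(np.argmax(np.diff(delta, prepend=0.0))))
```
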
14

Radwan, Samir F. « Numerical solution of the three-dimensional boundary layer equations in the inverse mode using finite differences ». Diss., Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/12029.

Full text
15

Burak, Senad A. « Modelling and identification of dynamic systems using modal and spectral data ». Title page, contents and abstract only, 1997. http://web4.library.adelaide.edu.au/theses/09PH/09phb945.pdf.

Full text
16

Lamus, Garcia Herreros Camilo. « Models and algorithms of brain connectivity, spatial sparsity, and temporal dynamics for the MEG/EEG inverse problem ». Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/103160.

Full text
Abstract:
Magnetoencephalography (MEG) and electroencephalography (EEG) are noninvasive functional neuroimaging techniques that provide high temporal resolution recordings of brain activity, offering a unique means to study fast neural dynamics in humans. Localizing the sources of brain activity from MEG/EEG is an ill-posed inverse problem, with no unique solution in the absence of additional information. In this dissertation I analyze how solutions to the MEG/EEG inverse problem can be improved by including information about temporal dynamics of brain activity and connectivity within and among brain regions. The contributions of my thesis are: 1) I develop a dynamic algorithm for source localization that uses local connectivity information and Empirical Bayes estimates to improve source localization performance (Chapter 1). This result led me to investigate the underlying theoretical principles that might explain the performance improvement observed in simulations and by analyzing experimental data. In my analysis, 2) I demonstrate theoretically how the inclusion of local connectivity information and basic source dynamics can greatly increase the number of sources that can be recovered from MEG/EEG data (Chapter 2). Finally, in order to include long distance connectivity information, 3) I develop a fast multi-scale dynamic source estimation algorithm based on the Subspace Pursuit and Kalman Filter algorithms that incorporates brain connectivity information derived from diffusion MRI (Chapter 3). Overall, I illustrate how dynamic models informed by neurophysiology and neuroanatomy can be used alongside advanced statistical and signal processing methods to greatly improve MEG/EEG source localization. More broadly, this work provides an example of how advanced modeling and algorithm development can be used to address difficult problems in neuroscience and neuroimaging.
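
For context, the sketch below computes the classic static, ℓ2-regularised minimum-norm estimate that the dissertation's dynamic (state-space) methods build upon; the lead-field matrix and data are random toy stand-ins rather than a real head model, and the example mainly shows how the static estimate smears focal sources.

```python
import numpy as np

# Static L2-regularised minimum-norm estimate (MNE):
#   x_hat = R G^T (G R G^T + lam2 * C)^(-1) y
# with lead field G, source covariance R, noise covariance C.  G and the data
# are random toy stand-ins, not a real head model.
rng = np.random.default_rng(4)
n_sensors, n_sources = 64, 500

G = rng.standard_normal((n_sensors, n_sources)) / np.sqrt(n_sources)
x_true = np.zeros(n_sources)
active = [40, 240, 410]
x_true[active] = [5.0, -4.0, 3.0]                    # a few focal sources
y = G @ x_true + 0.05 * rng.standard_normal(n_sensors)

R, C, lam2 = np.eye(n_sources), np.eye(n_sensors), 0.1
x_mne = R @ G.T @ np.linalg.solve(G @ R @ G.T + lam2 * C, y)

# MNE smears and underestimates focal activity, which is what dynamic,
# connectivity-informed estimators aim to improve.
print("estimates at the true source locations:", np.round(x_mne[active], 2))
print("largest |estimate| elsewhere: %.2f" % np.max(np.abs(np.delete(x_mne, active))))
```
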
17

Gerken, Thies. « Dynamic Inverse Problems for Wave Phenomena ». Thesis, Universität Bremen, 2019. http://d-nb.info/1199003697/34.

Full text
18

El, Kalioubi Ismail. « Développement de la technique de scattérométrie neuronale dynamique ». Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT033/document.

Full text
Abstract:
The decrease in component size has been widely witnessed over the past decades. The microelectronics field and, more generally, nanofabrication therefore require very efficient dimensional metrology tools. Improving relevant aspects such as speed, accuracy and repeatability allows real-time process monitoring and thus enhances production yield while limiting the losses caused by process drift. In this framework, scatterometry, an optical dimensional metrology technique based on the analysis of diffracted light, has shown, depending on the case, its ability to meet the requirements of real-time applications. It comprises a measurement phase, performed by an experimental setup (an ellipsometer in our case), and an inverse problem resolution phase. The method chosen to handle this last step determines the compatibility with real time. The library method and a method based on artificial neural networks possess the required qualities: the first has already been validated for etching-process monitoring in microelectronics, while the second had been tested only in static cases following a technological step. This PhD thesis aims to assess the contribution of neural networks to dynamic scatterometry. Based on qualitative and quantitative criteria, the study also underlines the difficulty of objectively comparing different metrology techniques. This work further draws up a careful comparison of these two real-time-compatible methods in order to bring out their operating characteristics. Finally, scatterometry using the neural network approach is studied on a plasma resist-etching case, a microelectronics fabrication process for which in-situ control is an important concern for the future.
19

Montagud, Santiago. « Simulation temps réel en dynamique non linéaire : application à la robotique souple ». Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0384/document.

Full text
Abstract:
The integration of numerical methods into industrial processes began with the advent of computers and has grown steadily alongside technology. In industrial processes involving moving structures, fast computational methods for nonlinear problems are needed, for example for the manipulation of soft materials by robots. Solving this kind of problem remains a challenge for engineering: despite the existence of numerous methods for solving dynamic problems, none of them is suited to real-time simulation. To address the problem, we split the dynamic problem into two sub-problems: the direct problem, in which the displacements are computed from the applied force, and the inverse problem, in which the force is computed from the prescribed displacements.
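
A minimal linear illustration of the direct/inverse split described above, on a toy fixed-free chain of springs: the direct problem returns displacements from an applied force, the inverse problem returns the forces needed to reach prescribed displacements, and pre-factorising the stiffness matrix is one standard device for fast repeated solves. The thesis itself deals with nonlinear soft-robot models, which this sketch does not attempt to reproduce.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Toy linear system K u = f for a fixed-free chain of 20 identical springs.
# Direct problem: displacements from an applied force.  Inverse problem:
# forces needed to reach prescribed displacements.  Values are invented.
n, kspring = 20, 5000.0
K = kspring * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
K[-1, -1] = kspring                                  # free end

factor = cho_factor(K)                               # factorise once, reuse many times

# Direct problem: 10 N applied at the tip.
f = np.zeros(n)
f[-1] = 10.0
u = cho_solve(factor, f)
print("tip displacement under 10 N: %.4f m" % u[-1])

# Inverse problem: which nodal forces produce a prescribed linear ramp?
u_target = np.linspace(0.0, 0.02, n + 1)[1:]
f_needed = K @ u_target
print("forces for the prescribed shape: interior ~ %.1e N, tip = %.1f N"
      % (np.abs(f_needed[:-1]).max(), f_needed[-1]))
```
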
20

Sousa, Cristiano Benevides de. « Inversão Geométrica Aplicada à Resolução dos Problemas de Apolônio ». Universidade Federal da Paraíba, 2014. http://tede.biblioteca.ufpb.br:8080/handle/tede/7570.

Full text
Abstract:
This work was developed with the aim of presenting a new approach within geometry: inversion. Inversive geometry is a non-Euclidean geometry with numerous applications, mainly related to tangency problems. This geometry is presented throughout the work in order to solve the ten problems of Apollonius. All constructions are carried out with the aid of dynamic geometry software, GeoGebra. Since the work is aimed at teachers and students of basic education, a step-by-step guide is proposed so that the reader can take part in the process of constructing the solutions of these problems, which fosters the development of creativity, logical thinking, argumentation and practice in geometric constructions.
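
The basic transformation behind these constructions is inversion with respect to a circle, P ↦ C + r²(P − C)/|P − C|². The sketch below applies it numerically and checks the classical fact that a circle not passing through the centre of inversion maps to another circle; the circles chosen here are arbitrary.

```python
import numpy as np

# Inversion with respect to a circle of centre C and radius r:
#   P -> C + r^2 (P - C) / |P - C|^2
# Numerical check: a circle not through C maps to another circle.
C, r = np.array([0.0, 0.0]), 2.0

def invert(P):
    d = P - C
    return C + r**2 * d / np.dot(d, d)

theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
circle = np.column_stack([3.0 + np.cos(theta), np.sin(theta)])   # centre (3, 0), radius 1
image = np.array([invert(P) for P in circle])

def circumcenter(p, q, s):
    """Centre of the circle through three points (2x2 linear system)."""
    A = 2.0 * np.array([q - p, s - p])
    b = np.array([q @ q - p @ p, s @ s - p @ p])
    return np.linalg.solve(A, b)

c = circumcenter(image[0], image[130], image[260])
d = np.linalg.norm(image - c, axis=1)
print("image circle centre ~", np.round(c, 3), " radius spread: %.1e" % (d.max() - d.min()))
```
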
21

Steinke, Gustav Karl. « What the Power Spectrum of Field Potentials Reveals about Functional Brain Connectivity ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1283881427.

Full text
22

Lestoille, Nicolas. « Stochastic model of high-speed train dynamics for the prediction of long-time evolution of the track irregularities ». Thesis, Paris Est, 2015. http://www.theses.fr/2015PEST1094/document.

Full text
Abstract:
Railway tracks are subjected to increasing demands: the number of high-speed trains using the high-speed lines, their speed and their load keep increasing, which contributes to the formation of track geometry irregularities. In return, these irregularities influence the train dynamic response and degrade ride comfort. To guarantee good comfort conditions, railway companies carry out track maintenance operations, which are very costly; they therefore have a strong interest in predicting the long-term evolution of the track irregularities for a given track portion, so as to anticipate maintenance operations, reduce maintenance costs and improve running conditions. In this thesis, the long-term evolution of a given track portion is analyzed through a vector-valued indicator of the train dynamics. For this track portion, a local stochastic model of the track irregularities is constructed from a global stochastic model of the track irregularities and from big data consisting of irregularity measurements performed by a measuring train. This local stochastic model takes into account the variability of the track irregularities and allows realizations of the irregularities to be generated at each measurement time. After validating the computational model of the train dynamics, the train dynamic responses on the measured track portion are numerically simulated using the local stochastic model of the track irregularities. A vector-valued random dynamic indicator is defined to characterize the train dynamic response on the given track portion; it is constructed so as to take into account the model uncertainties of the train-dynamics computational model. For the identification of the track-irregularity stochastic model and the characterization of the model uncertainties, advanced stochastic methods such as polynomial chaos expansion and multivariate maximum likelihood are applied to non-Gaussian and non-stationary random fields. Finally, a stochastic predictive model is proposed for predicting the statistical quantities of the random dynamic indicator, which makes it possible to anticipate the need for track maintenance. This model is built from the results of the train dynamics simulations and consists of a non-stationary Kalman-filter-type model with a non-Gaussian initial condition. The proposed model is validated using experimental data from the French railway network for high-speed trains.
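
A minimal example of the polynomial chaos ingredient mentioned in this abstract: a Hermite chaos expansion of a nonlinear response of a standard Gaussian germ, with coefficients obtained by Gauss-Hermite quadrature and checked against Monte Carlo. The response function is a toy stand-in for the train-dynamics indicator.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Hermite polynomial-chaos expansion of a nonlinear response of a standard
# Gaussian germ:  g(xi) ~ sum_n c_n He_n(xi),  c_n = E[g(xi) He_n(xi)] / n!.
def g(xi):
    return np.exp(0.3 * xi) + 0.5 * xi**2          # toy nonlinear response

order, nquad = 6, 40
nodes, weights = He.hermegauss(nquad)              # quadrature for weight exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)           # turn it into E[.] under N(0, 1)

coeffs = np.array([np.sum(weights * g(nodes) * He.hermeval(nodes, np.eye(order + 1)[n]))
                   / factorial(n) for n in range(order + 1)])

# Check the chaos expansion against Monte Carlo.
xi = np.random.default_rng(5).standard_normal(100_000)
pce_std = np.sqrt(sum(coeffs[n]**2 * factorial(n) for n in range(1, order + 1)))
print("mean: MC %.4f | PCE %.4f" % (g(xi).mean(), coeffs[0]))
print("std : MC %.4f | PCE %.4f" % (g(xi).std(), pce_std))
```
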
23

Masmoudi, Florent. « Nonintrusive reduced order models ». Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30363.

Full text
Abstract:
The objective of this thesis is to build fast reduced-order models able to replace computationally intensive simulation software for complex physical systems. These reduced-order models must be fast to evaluate and are identified from a limited number of computations produced by the simulation software; the work therefore belongs to the field of learning methods. Once built, the models must be usable autonomously, without recourse to the simulation software. Two kinds of physics are considered: in the first part, a reduced-order model structure suited to linear elasticity is defined; in the second part, a dynamic reduced-order model developed for fluid mechanics is studied.
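
A minimal sketch of the snapshot-based reduction that underlies non-intrusive reduced-order modelling: proper orthogonal decomposition of a snapshot matrix by SVD, truncation to the dominant modes, and projection of an unseen solution onto the reduced basis. The snapshots come from an analytic toy field rather than a real solver.

```python
import numpy as np

# Proper Orthogonal Decomposition (POD): collect solution snapshots, extract
# dominant modes by SVD, represent new solutions by a few modal coefficients.
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 30)                        # training parameter values
snapshots = np.array([np.sin(np.pi * mu * x) * np.exp(-mu * x) for mu in params]).T

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1              # keep 99.99 % of the energy
basis = U[:, :r]
print("kept", r, "of", len(params), "modes")

# Project a solution at an unseen parameter onto the reduced basis.
u_new = np.sin(np.pi * 1.23 * x) * np.exp(-1.23 * x)
coeffs = basis.T @ u_new                                   # r numbers instead of 200 values
u_rec = basis @ coeffs
print("relative projection error: %.2e" % (np.linalg.norm(u_rec - u_new) / np.linalg.norm(u_new)))
```

In a fully non-intrusive setting, one would then fit a regression from the parameters to the modal coefficients so that new solutions can be predicted without calling the solver at all.
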
24

Lundvall, Johan. « Data Assimilation in Fluid Dynamics using Adjoint Optimization ». Doctoral thesis, Linköping : Matematiska institutionen, Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9684.

Full text
25

Perrin, Guillaume. « Random fields and associated statistical inverse problems for uncertainty quantification : application to railway track geometries for high-speed trains dynamical responses and risk assessment ». Phd thesis, Université Paris-Est, 2013. http://pastel.archives-ouvertes.fr/pastel-01001045.

Full text
Abstract:
Expectations for the new generation of high-speed trains are numerous: they should be faster, more comfortable and more stable, while consuming less energy, being less aggressive towards the track and less noisy. To optimize the design of these future trains, it is necessary to rely on a precise knowledge of the whole set of running conditions they are likely to encounter during their life cycle. To meet these challenges, simulation has a major role to play. For simulation to be used for design, certification and maintenance optimization, it must be fully representative of all the physical behaviors involved: the models of the train and of the wheel-rail contact must therefore be carefully validated, and the simulations must be run on sets of excitations that are realistic and representative of the track geometry irregularities. As far as the dynamics is concerned, the track geometry, and more particularly its irregularities, is one of the main sources of excitation of the train, which is a strongly nonlinear mechanical system. Starting from measurements of the geometry of a railway network, a complete parametrization of the track geometry and of its variability is therefore needed in order to analyze the link between the dynamic response of the train and the physical and statistical properties of the track geometry. In this context, a relevant approach for modeling the track geometry is to consider it as a multivariate random field whose properties are a priori unknown. Because of the specific interactions between the train and the track, this random field turns out to be neither Gaussian nor stationary. This thesis therefore focuses on the development of numerical methods for the inverse identification, from experimental measurements, of non-Gaussian and non-stationary random fields. Since the train behavior is very nonlinear and very sensitive to the track geometry, the characterization of the random field corresponding to the geometry irregularities must be extremely fine, both from the frequency and from the statistical points of view, and the dimension of the statistical spaces considered is very large. Particular attention is therefore paid in this work to statistical reduction methods and to methods that can be generalized to very high dimension. Once the variability of the track geometry has been characterized from experimental data, it must be propagated through the railway numerical model. To this end, the mechanical properties of a numerical model of a high-speed train were identified from experimental measurements, and the stochastic dynamic response of this train, subjected to a very large number of realistic and representative running conditions generated with the stochastic track model, was evaluated. Finally, to illustrate the possibilities offered by such a coupling between the variability of the track geometry and the dynamic response of the train, this thesis addresses three applications.
26

Alsoy-akgun, Nagehan. « The Dual Reciprocity Boundary Element Solutions Of Helmholtz-type Equations In Fluid Dynamics ». Phd thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615728/index.pdf.

Full text
Abstract:
In this thesis, the two-dimensional, unsteady, laminar and incompressible fluid flow problems governed by partial differential equations are solved by using dual reciprocity boundary element method (DRBEM). First, the governing equations are transformed to the inhomogeneous modified Helmholtz equations, and then the fundamental solution of modified Helmholtz equation is used for obtaining boundary element method (BEM) formulation. Thus, all the terms in the equation except the modified Helmholtz operator are considered as inhomogeneity. All the inhomogeneity terms are approximated by using suitable radial basis functions, and corresponding particular solutions are derived by using the annihilator method. Transforming time dependent partial differential equations to the form of inhomogeneous modified Helmholtz equations in DRBEM application enables us to use more information from the original governing equation. These are the main original parts of the thesis. In order to obtain modified Helmholtz equation for the time dependent partial differential equations, the time derivatives are approximated at two time levels by using forward finite difference method. This also eliminates the need of another time integration scheme, and diminishes stability problems. Stream function-vorticity formulations are adopted in physical fluid dynamics problems in DRBEM by using constant elements. First, the procedure is applied to the lid-driven cavity flow and results are obtained for Reynolds number values up to 2000. The natural convection flow is solved for Rayleigh numbers between 10^3 to 10^6 when the energy equation is added to the Navier-Stokes equations. Then, double diffusive mixed convection flow problem defined in three different physical domains is solved by using the same procedure. Results are obtained for various values of Richardson and Reynolds numbers, and buoyancy ratios. Behind these, DRBEM is used for the solution of natural convection flow under a magnetic field by using two different radial basis functions for both vorticity transport and energy equations. The same problem is also solved with differential quadrature method using the form of Poisson type stream function and modified Helmholtz type vorticity and energy equations. DRBEM and DQM results are obtained for the values of Rayleigh and Hartmann numbers up to 10^6 and 300, respectively, and are compared in terms of accuracy and computational cost. Finally, DRBEM is used for the solution of inverse natural convection flow under a magnetic field using the results of direct problem for the missing boundary conditions.
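
The dual reciprocity step rests on approximating the inhomogeneity by radial basis functions, b(x) ≈ Σ αⱼ φ(‖x − xⱼ‖) with the classical choice φ(r) = 1 + r. The sketch below performs only that interpolation on a toy field; in DRBEM each φ additionally comes with a known particular solution, which is not shown here, and the point set and field are invented.

```python
import numpy as np

# Radial-basis approximation of an inhomogeneity term, the key ingredient of
# the dual reciprocity idea:  b(x) ~ sum_j alpha_j * phi(|x - x_j|),
# with the classical DRBEM choice phi(r) = 1 + r.
rng = np.random.default_rng(6)
pts = rng.uniform(0.0, 1.0, size=(40, 2))                  # collocation points

def b(p):                                                  # toy inhomogeneity field
    return np.sin(np.pi * p[:, 0]) * np.cos(np.pi * p[:, 1])

r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
Phi = 1.0 + r                                              # phi(r) = 1 + r
alpha = np.linalg.solve(Phi, b(pts))

# Evaluate the RBF reconstruction at new points and compare with the true field.
test = rng.uniform(0.0, 1.0, size=(200, 2))
r_test = np.linalg.norm(test[:, None, :] - pts[None, :, :], axis=-1)
approx = (1.0 + r_test) @ alpha
print("max interpolation error on test points: %.3f" % np.max(np.abs(approx - b(test))))
```
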
27

Alsoy-akgun, Nagehan. « The Dual Reciprocity Boundary Element Solution Of Helmholtz-type Equations In Fluid Dynamics ». Phd thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615729/index.pdf.

Full text
Abstract:
In this thesis, the two-dimensional, unsteady, laminar and incompressible fluid flow problems governed by partial differential equations are solved by using dual reciprocity boundary element method (DRBEM). First, the governing equations are transformed to the inhomogeneous modified Helmholtz equations, and then the fundamental solution of modified Helmholtz equation is used for obtaining boundary element method (BEM) formulation. Thus, all the terms in the equation except the modified Helmholtz operator are considered as inhomogeneity. All the inhomogeneity terms are approximated by using suitable radial basis functions, and corresponding particular solutions are derived by using the annihilator method. Transforming time dependent partial differential equations to the form of inhomogeneous modified Helmholtz equations in DRBEM application enables us to use more information from the original governing equation. These are the main original parts of the thesis. In order to obtain modified Helmholtz equation for the time dependent partial differential equations, the time derivatives are approximated at two time levels by using forward finite difference method. This also eliminates the need of another time integration scheme, and diminishes stability problems. Stream function-vorticity formulations are adopted in physical fluid dynamics problems in DRBEM by using constant elements. First, the procedure is applied to the lid-driven cavity flow and results are obtained for Reynolds number values up to 2000. The natural convection flow is solved for Rayleigh numbers between 10^3 to 10^6 when the energy equation is added to the Navier-Stokes equations. Then, double diffusive mixed convection flow problem defined in three different physical domains is solved by using the same procedure. Results are obtained for various values of Richardson and Reynolds numbers, and buoyancy ratios. Behind these, DRBEM is used for the solution of natural convection flow under a magnetic field by using two different radial basis functions for both vorticity transport and energy equations. The same problem is also solved with differential quadrature method using the form of Poisson type stream function and modified Helmholtz type vorticity and energy equations. DRBEM and DQM results are obtained for the values of Rayleigh and Hartmann numbers up to 10^6 and 300, respectively, and are compared in terms of accuracy and computational cost. Finally, DRBEM is used for the solution of inverse natural convection flow under a magnetic field using the results of direct problem for the missing boundary conditions.
Styles APA, Harvard, Vancouver, ISO, etc.
28

Ygouf, Marie. « Nouvelle méthode de traitement d'images multispectrales fondée sur un modèle d'instrument pour le haut contraste : application à la détection d'exoplanètes ». Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00843202.

Texte intégral
Résumé :
Ce travail de thèse porte sur l'imagerie multispectrale à haut contraste pour la détection et la caractérisation directe d'exoplanètes. Dans ce contexte, le développement de méthodes innovantes de traitement d'images est indispensable afin d'éliminer les tavelures quasi-statiques dans l'image finale qui restent à ce jour, la principale limitation pour le haut contraste. Bien que les aberrations résiduelles instrumentales soient à l'origine de ces tavelures, aucune méthode de réduction de données n'utilise de modèle de formation d'image coronographique qui prend ces aberrations comme paramètres. L'approche adoptée dans cette thèse comprend le développement, dans un cadre bayésien, d'une méthode d'inversion fondée sur un modèle analytique d'imagerie coronographique. Cette méthode estime conjointement les aberrations instrumentales et l'objet d'intérêt, à savoir les exoplanètes, afin de séparer correctement ces deux contributions. L'étape d'estimation des aberrations à partir des images plan focal (ou phase retrieval en anglais), est la plus difficile car le modèle de réponse instrumentale sur l'axe dont elle dépend est fortement non-linéaire. Le développement et l'étude d'un modèle approché d'imagerie coronographique plus simple se sont donc révélés très utiles pour la compréhension du problème et m'ont inspiré des stratégies de minimisation. J'ai finalement pu tester ma méthode et d'estimer ses performances en terme de robustesse et de détection d'exoplanètes. Pour cela, je l'ai appliquée sur des images simulées et j'ai notamment étudié l'effet des différents paramètres du modèle d'imagerie utilisé. J'ai ainsi démontré que cette nouvelle méthode, associée à un schéma d'optimisation fondé sur une bonne connaissance du problème, peut fonctionner de manière relativement robuste, en dépit des difficultés de l'étape de phase retrieval. En particulier, elle permet de détecter des exoplanètes dans le cas d'images simulées avec un niveau de détection conforme à l'objectif de l'instrument SPHERE. Ce travail débouche sur de nombreuses perspectives dont celle de démontrer l'utilité de cette méthode sur des images simulées avec des coronographes plus réalistes et sur des images réelles de l'instrument SPHERE. De plus, l'extension de la méthode pour la caractérisation des exoplanètes est relativement aisée, tout comme son extension à l'étude d'objets plus étendus tels que les disques circumstellaire. Enfin, les résultats de ces études apporteront des enseignements importants pour le développement des futurs instruments. En particulier, les Extremely Large Telescopes soulèvent d'ores et déjà des défis techniques pour la nouvelle génération d'imageurs de planètes. Ces challenges pourront très probablement être relevés en partie grâce à des méthodes de traitement d'image fondées sur un modèle direct d'imagerie.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Boualem, Abdelbassit. « Estimation de distribution de tailles de particules par techniques d'inférence bayésienne ». Thesis, Orléans, 2016. http://www.theses.fr/2016ORLE2030/document.

Texte intégral
Résumé :
Ce travail de recherche traite le problème inverse d’estimation de la distribution de tailles de particules (DTP) à partir des données de la diffusion dynamique de lumière (DLS). Les méthodes actuelles d’estimation souffrent de la mauvaise répétabilité des résultats d’estimation et de la faible capacité à séparer les composantes d’un échantillon multimodal de particules. L’objectif de cette thèse est de développer de nouvelles méthodes plus performantes basées sur les techniques d’inférence bayésienne et cela en exploitant la diversité angulaire des données de la DLS. Nous avons proposé tout d’abord une méthode non paramétrique utilisant un modèle « free-form » mais qui nécessite une connaissance a priori du support de la DTP. Pour éviter ce problème, nous avons ensuite proposé une méthode paramétrique fondée sur la modélisation de la DTP en utilisant un modèle de mélange de distributions gaussiennes. Les deux méthodes bayésiennes proposées utilisent des algorithmes de simulation de Monte-Carlo par chaînes de Markov. Les résultats d’analyse de données simulées et réelles montrent la capacité des méthodes proposées à estimer des DTPs multimodales avec une haute résolution et une très bonne répétabilité. Nous avons aussi calculé les bornes de Cramér-Rao du modèle de mélange de distributions gaussiennes. Les résultats montrent qu’il existe des valeurs d’angles privilégiées garantissant des erreurs minimales sur l’estimation de la DTP
This research work treats the inverse problem of particle size distribution (PSD) estimation from dynamic light scattering (DLS) data. Current DLS data analysis methods suffer from poor repeatability of the estimation results and from a limited ability to separate the components (resolution) of a multimodal sample of particles. This thesis aims to develop new and more efficient estimation methods based on Bayesian inference techniques by taking advantage of the angular diversity of the DLS data. First, we proposed a non-parametric method based on a free-form model, with the disadvantage of requiring a priori knowledge of the PSD support. To avoid this problem, we then proposed a parametric method based on modelling the PSD using a Gaussian mixture model. The two proposed Bayesian methods use Markov chain Monte Carlo simulation algorithms. The results obtained on simulated and real DLS data show the capability of the proposed methods to estimate multimodal PSDs with high resolution and better repeatability. We also computed the Cramér-Rao bounds of the Gaussian mixture model. The results show that there are preferred angle values ensuring minimum error on the PSD estimation.
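As a rough, self-contained illustration of the kind of parametric forward model underlying such a Bayesian DLS inversion (a sketch only: the function names, the single-angle geometry and all numerical values below are hypothetical, and the actual methods additionally use multi-angle data and MCMC sampling):

import numpy as np

def gaussian_mixture_psd(d, weights, means, sigmas):
    # Particle size distribution modelled as a mixture of Gaussians (diameters d in nm).
    comps = [w * np.exp(-0.5 * ((d - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
             for w, m, s in zip(weights, means, sigmas)]
    return np.sum(comps, axis=0)

def g1(tau, d, psd, angle_deg, wavelength=633e-9, n_med=1.33, T=298.15, eta=0.89e-3):
    # Field autocorrelation g1(tau) = sum_i psd_i * exp(-Gamma_i * tau) (number weighting,
    # single scattering angle; intensity weighting and multi-angle fusion are omitted).
    kB = 1.380649e-23
    q = 4.0 * np.pi * n_med * np.sin(np.radians(angle_deg) / 2.0) / wavelength
    D = kB * T / (3.0 * np.pi * eta * d * 1e-9)   # Stokes-Einstein diffusion coefficient
    gamma = D * q ** 2                            # decay rate for each particle size
    w = psd / psd.sum()
    return (w[None, :] * np.exp(-np.outer(tau, gamma))).sum(axis=1)

d = np.linspace(1.0, 1000.0, 2000)                # diameter grid (nm)
psd = gaussian_mixture_psd(d, [0.6, 0.4], [80.0, 400.0], [10.0, 40.0])  # bimodal sample
tau = np.logspace(-6, -1, 200)                    # lag times (s)
print(g1(tau, d, psd, angle_deg=90.0)[:5])

In a Bayesian treatment, this forward map is embedded in a likelihood for the measured correlation data and the mixture parameters (weights, means, widths) are then sampled with Markov chain Monte Carlo.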
Styles APA, Harvard, Vancouver, ISO, etc.
30

Coban, Sophia. « Practical approaches to reconstruction and analysis for 3D and dynamic 3D computed tomography ». Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/practical-approaches-to-reconstruction-and-analysis-for-3d-and-dynamic-3d-computed-tomography(f34a2617-09f9-4c4e-9669-f86f6cf2bce5).html.

Texte intégral
Résumé :
The problem of reconstructing an image from a set of tomographic data is not new, nor is it lacking attention. However there is still a distinct gap between the mathematicians and the experimental scientists working in the computed tomography (CT) imaging community. One of the aims in this thesis is to bridge this gap with mathematical reconstruction algorithms and analysis approaches applied to practical CT problems. The thesis begins with an extensive analysis for assessing the suitability of reconstruction algorithms for a given problem. The paper presented examines the idea of extracting physical information from a reconstructed sample and comparing against the known sample characteristics to determine the accuracy of a reconstructed volume. Various test cases are studied, which are relevant to both mathematicians and experimental scientists. These include the variance in quality of the reconstructed volume as the dose is reduced, or the implementation of the level set evolution method, used as part of a simultaneous reconstruction and segmentation technique. The work shows that the assessment of physical attributes results in more accurate conclusions. Furthermore, this approach allows for further analysis into interesting questions in CT. This theme is continued throughout the thesis. Recent results in compressive sensing (CS) gained attention in the CT community as they indicate the possibility of obtaining an accurate reconstruction of a sparse image from a severely limited or reduced amount of measured data. Literature produced so far has not shown that CS directly guarantees a successful recovery in X-ray CT, and it is still unclear under which conditions a successful sparsity regularized reconstruction can be achieved. The work presented in the thesis aims to answer this question in a practical setting, and seeks to establish a direct connection between the success of sparsity regularization methods and the sparsity level of the image, which is similar to CS. Using this connection, one can determine the sufficient amount of measurements to collect from just the sparsity of an image. A link was found in a previous study using simulated data, and the work is repeated here with experimental data, where the sparsity level of the scanned object varies. The preliminary work presented here verifies the results from simulated data, showing an "almost-linear" relationship between the sparsity of the image and the sufficient amount of data for a successful sparsity regularized reconstruction. Several unexplained artefacts are noted in the literature as the 'partial volume', the 'exponential edge gradient' or the 'penumbra' effect, with no clear explanation for their cause, or established techniques to remove them. The work presented in this paper shows that these artefacts are due to a non-linearity in the measured data, which comes from either the set-up of the system, the scattering of rays or the dependency of linear attenuation on wavelength in the polychromatic case. However, even in monochromatic CT systems, the non-linearity effect can be detected. The paper shows that in some cases, the non-linearity effect is too large to ignore, and the reconstruction problem should be adapted to solve a non-linear problem. We derive this non-linear problem and solve it using a numerical optimization technique for both simulated and real (gamma-ray) data. When compared to reconstructions obtained using the standard linear model, the non-linear reconstructed images show clear improvements in that the non-linear effect is largely eliminated. The thesis is finished with a highlight article in the special issue of Solid Earth, named "Pore-scale tomography & imaging - applications, techniques and recommended practice". The paper presents a major technical advancement in dynamic 3D CT data acquisition, where the latest hardware and an optimal data acquisition plan are applied and, as a result, ultra-fast 3D volume acquisition was made possible. The experiment comprised fast, free-falling water-saline drops travelling through a pack of rock grains with varying porosities. The imaging work was enhanced by the use of iterative methods and by the physical quantification analysis performed. The data acquisition and imaging work is the first in the field to capture a free-falling drop, and the imaging work clearly shows the fluid interaction with speed, gravity and, more importantly, the inter- and intra-grain fluid transfers.
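The non-linearity point made above can be mimicked with a toy numerical experiment (a sketch under simplified assumptions: a random stand-in for the projection matrix and Poisson counting noise, not the thesis's gamma-ray set-up). Instead of linearising the Beer-Lambert law with a logarithm, one can fit the measured intensities directly with a non-linear least-squares solver:

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_pix, n_rays = 64, 200
A = rng.uniform(0.0, 1.0, size=(n_rays, n_pix))    # toy path-length (system) matrix
x_true = rng.uniform(0.0, 0.05, size=n_pix)        # "true" attenuation coefficients
I0 = 1.0e4                                         # incident photon count per ray
I = rng.poisson(I0 * np.exp(-A @ x_true))          # Beer-Lambert with counting noise

# Linearised model: y = -log(I/I0) ~ A x, solved by ordinary least squares.
y = -np.log(np.maximum(I, 1) / I0)
x_lin = np.linalg.lstsq(A, y, rcond=None)[0]

# Non-linear model: fit the intensities directly, I ~ I0 exp(-A x).
res = least_squares(lambda x: I0 * np.exp(-A @ x) - I, x0=x_lin, method="lm")
x_nl = res.x

print("linear model RMSE    :", np.sqrt(np.mean((x_lin - x_true) ** 2)))
print("non-linear model RMSE:", np.sqrt(np.mean((x_nl - x_true) ** 2)))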
Styles APA, Harvard, Vancouver, ISO, etc.
31

Casadiego Bastidas, Jose Luis [Verfasser], Marc [Akademischer Betreuer] Timme, Reiner [Gutachter] Kree, Ulrich [Gutachter] Parlitz, Stephan [Gutachter] Herminghaus, Theo [Gutachter] Geisel et Patrick [Gutachter] Cramer. « Network Dynamics as an Inverse Problem : Reconstruction, Design and Optimality / Jose Luis Casadiego Bastidas ; Gutachter : Reiner Kree, Ulrich Parlitz, Stephan Herminghaus, Theo Geisel, Patrick Cramer ; Betreuer : Marc Timme ». Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2016. http://d-nb.info/1122348762/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
32

Robin, Frédérique. « Modeling and analysis of cell population dynamics : application to the early development of ovarian follicles ». Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS344.

Texte intégral
Résumé :
Cette thèse vise à concevoir et analyser des modèles de dynamique des populations dédiés à la dynamique des cellules somatiques durant les premiers stades de la croissance du follicule ovarien. Les comportements des modèles sont analysés par des approches théoriques et numériques, et les valeurs des paramètres sont calibrées en proposant des stratégies de maximum de vraisemblance adaptées à notre jeu de données spécifique. Un modèle stochastique non linéaire, qui tient compte de la dynamique conjointe entre deux types cellulaires (précurseur et prolifératif), est dédié à l'activation de la croissance folliculaire. Une approche rigoureuse de projection par états finis est mise en œuvre pour caractériser l'état du système à l'extinction et calculer le temps d'extinction des cellules précurseurs. Un modèle linéaire multi-type structuré en âge, appliquée à la population de cellules prolifératives, est dédié à la croissance folliculaire précoce. Les différents types correspondent ici aux positions spatiales des cellules. Ce modèle est de type décomposable ; les transitions sont unidirectionnelles du premier vers le dernier type. Nous prouvons la convergence en temps long du modèle stochastique de Bellman-Harris et de l'équation de McKendrick-VonFoerster multi-types. Nous adaptons les résultats existants dans le cas où le théorème de Perron-Frobenius ne s'applique pas, et nous obtenons des formules analytiques explicites pour les moments asymptotiques des nombres de cellules et de la distribution stationnaire en âge. Nous étudions également le caractère bien posé du problème inverse associé au modèle déterministe
This thesis aims to design and analyze population dynamics models dedicated to the dynamics of somatic cells during the early stages of ovarian follicle growth. The model behaviors are analyzed through theoretical and numerical approaches, and the calibration of parameters is performed by proposing maximum likelihood strategies adapted to our specific dataset. A non-linear stochastic model, that accounts for the joint dynamics of two cell types (precursors and proliferative), is dedicated to the activation of follicular growth. In particular, we compute the extinction time of precursor cells. A rigorous finite state projection approach is implemented to characterize the system state at extinction. A linear multitype age-structured model for the proliferative cell population is dedicated to the early follicle growth. The different types correspond here to the spatial cell positions. This model is of decomposable kind; the transitions are unidirectional from the first to the last spatial type. We prove the long-term convergence for both the stochastic Bellman-Harris model and the multi-type McKendrick-VonFoerster equation. We adapt existing results in a context where the Perron-Frobenius theorem does not apply, and obtain explicit analytical formulas for the asymptotic moments of cell numbers and stable age distribution. We also study the well-posedness of the inverse problem associated with the deterministic model
Styles APA, Harvard, Vancouver, ISO, etc.
33

Momey, Fabien. « Reconstruction en tomographie dynamique par approche inverse sans compensation de mouvement ». Phd thesis, Université Jean Monnet - Saint-Etienne, 2013. http://tel.archives-ouvertes.fr/tel-00842572.

Texte intégral
Résumé :
La tomographie est la discipline qui cherche à reconstruire une donnée physique dans son volume, à partir de l'information indirecte de projections intégrées de l'objet, à différents angles de vue. L'une de ses applications les plus répandues, et qui constitue le cadre de cette thèse, est l'imagerie scanner par rayons X pour le médical. Or, les mouvements inhérents à tout être vivant, typiquement le mouvement respiratoire et les battements cardiaques, posent de sérieux problèmes dans une reconstruction classique. Il est donc impératif d'en tenir compte, i.e. de reconstruire le sujet imagé comme une séquence spatio-temporelle traduisant son "évolution anatomique" au cours du temps : c'est la tomographie dynamique. Élaborer une méthode de reconstruction spécifique à ce problème est un enjeu majeur en radiothérapie, où la localisation précise de la tumeur dans le temps est un prérequis afin d'irradier les cellules cancéreuses en protégeant au mieux les tissus sains environnants. Des méthodes usuelles de reconstruction augmentent le nombre de projections acquises, permettant des reconstructions indépendantes de plusieurs phases de la séquence échantillonnée en temps. D'autres compensent directement le mouvement dans la reconstruction, en modélisant ce dernier comme un champ de déformation, estimé à partir d'un jeu de données d'acquisition antérieur. Nous proposons dans ce travail de thèse une approche nouvelle ; se basant sur la théorie des problèmes inverses, nous affranchissons la reconstruction dynamique du besoin d'accroissement de la quantité de données, ainsi que de la recherche explicite du mouvement, elle aussi consommatrice d'un surplus d'information. Nous reconstruisons la séquence dynamique à partir du seul jeu de projections courant, avec pour seules hypothèses a priori la continuité et la périodicité du mouvement. Le problème inverse est alors traité rigoureusement comme la minimisation d'un terme d'attache aux données et d'une régularisation. Nos contributions portent sur la mise au point d'une méthode de reconstruction adaptée à l'extraction optimale de l'information compte tenu de la parcimonie des données -- un aspect typique du problème dynamique -- en utilisant notamment la variation totale (TV) comme régularisation. Nous élaborons un nouveau modèle de projection tomographique précis et compétitif en temps de calcul, basé sur des fonctions B-splines séparables, permettant de repousser encore la limite de reconstruction imposée par la parcimonie. Ces développements sont ensuite insérés dans un schéma de reconstruction dynamique cohérent, appliquant notamment une régularisation TV spatio-temporelle efficace. Notre méthode exploite ainsi de façon optimale la seule information courante à disposition ; de plus sa mise en oeuvre fait preuve d'une grande simplicité. Nous faisons premièrement la démonstration de la force de notre approche sur des reconstructions 2-D+t à partir de données simulées numériquement. La faisabilité pratique de notre méthode est ensuite établie sur des reconstructions 2-D et 3-D+t à partir de données physiques "réelles", acquises sur un fantôme mécanique et sur un patient
Styles APA, Harvard, Vancouver, ISO, etc.
34

Cioaca, Alexandru George. « A Computational Framework for Assessing and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation ». Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51795.

Texte intégral
Résumé :
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
Ph. D.
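A heavily simplified, linear-Gaussian analogue of adjoint-based observation-impact estimation can be sketched as follows (illustrative only: a single static analysis with a random observation operator, not the second-order-adjoint 4D-Var machinery of the dissertation):

import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3                                       # state and observation dimensions
B = np.diag(rng.uniform(0.5, 2.0, n))             # background error covariance
R = 0.1 * np.eye(m)                               # observation error covariance
H = rng.normal(size=(m, n))                       # linear observation operator

x_b = rng.normal(size=n)                                   # background state
x_t = x_b + rng.multivariate_normal(np.zeros(n), B)        # synthetic truth
y = H @ x_t + rng.multivariate_normal(np.zeros(m), R)      # observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)      # gain = sensitivity of the analysis to y
d = y - H @ x_b                                   # innovations
x_a = x_b + K @ d                                 # analysis

# Error cost J(x) = 0.5 * ||x - x_t||^2; a first-order, adjoint-style attribution of the
# cost change to individual observations is delta_J_i ~ d_i * (K^T grad J(x_a))_i.
g = x_a - x_t
impact = d * (K.T @ g)
print("per-observation impact estimate:", impact)
print("total cost change              :",
      0.5 * np.sum((x_a - x_t) ** 2) - 0.5 * np.sum((x_b - x_t) ** 2))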
Styles APA, Harvard, Vancouver, ISO, etc.
35

片峯, 英次, Eiji Katamine, 秀幸 畔上, Hideyuki Azegami, 正太郎 山口 et Syohtaroh Yamaguchi. « ポテンシャル流れ場の形状同定解析(圧力分布規定問題と力法による解法) ». 日本機械学会, 1998. http://hdl.handle.net/2237/7255.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
36

片峯, 英次, Eiji Katamine, 秀幸 畔上 et Hideyuki Azegami. « ポテンシャル流れ場の領域最適化解析 ». 日本機械学会, 1995. http://hdl.handle.net/2237/7253.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
37

Шеремет, Олексій Іванович. « Синтез електромеханічних систем на базі дискретного часового еквалайзера ». Thesis, Дніпровський державний технічний університет, 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/35776.

Texte intégral
Résumé :
Дисертація на здобуття наукового ступеня доктора технічних наук за спеціальністю 05.09.03 – електротехнічні комплекси і системи. – Національний технічний університет "Харківський політехнічний інститут", Харків, 2018. Дисертація присвячена створенню єдиної методології синтезу систем керування електроприводами, виходячи з умови забезпечення бажаних динамічних властивостей за вихідною координатою, що задаються графічно у вигляді перехідної функції або набору точок. Дискретний регулятор, який забезпечує можливість налаштування на квантовані бажані перехідні функції кінцевої тривалості, названо дискретним часовим еквалайзером. Запропоновано математичний апарат, що дозволяє виконувати синтез систем автоматичного керування на базі дискретного часового еквалайзера як за умови повної компенсації динамічних властивостей об'єкта керування (ідеалізований варіант), так і при використанні модифікованого принципу симетрії структурних схем. Інтегруюча ланка використовується як блок модифікації зворотного перетворення. Для компенсації параметричних та координатних збурень розроблено структуру комбінованого керування, що включає до свого складу два дискретних часових еквалайзера: основний та компенсуючий. Виконано синтез електромеханічних систем з двигунами постійного струму на базі дискретного часового еквалайзера (слідкуючої системи, взаємозв'язаної, зі зниженням впливу люфта, системи двозонного регулювання швидкості) та проведено дослідження одержаних результатів шляхом математичного моделювання. Здійснено синтез системи векторного керування електроприводом змінного струму на базі дискретного часового еквалайзера та проведено дослідження одержаних результатів шляхом математичного моделювання. Отримані узагальнені передатні функції для об'єктів керування у контурах реактивної та активної потужностей, а також передатні функції їхніх обернених моделей дозволяють виконувати синтез систем векторного керування електроприводами змінного струму на базі дискретного часового еквалайзера у відносних одиницях для широкого спектру промислових асинхронних двигунів. Для виконання експериментальних досліджень створено лабораторно-дослідний стенд, який дозволив технічно реалізувати систему автоматичного керування швидкістю двигуна постійного струму, синтезовану на базі дискретного часового еквалайзера. Запропонована методика завдання базисних величин та бажаної перехідної функції у відносних координатах дозволяє користувачеві вводити дані у відповідності до заздалегідь визначеного шаблону, який є зручним для виконання подальшої перевірки на можливість фізичної реалізації за допомогою метода опорних векторів.
Dissertation for the degree of Doctor of Technical Science on Specialty 05.09.03 – Electrical Engineering Complexes and Systems. – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2018. The dissertation is devoted to the creation of a unified methodology for the synthesis of control systems for electric drives, based on the condition of providing the desired dynamic properties of the output coordinate, given graphically in the form of a transition function or a set of points. The discrete regulator, which provides the ability to tune to quantized desired transition functions of finite duration, is called a discrete time equalizer. A mathematical apparatus is proposed that allows the synthesis of automatic control systems based on the discrete time equalizer, either with full compensation of the control object's dynamic properties (the idealized variant) or with the use of a modified principle of symmetry of structural schemes. The integrating unit is used as the inverse transform modification block. To compensate for parametric and coordinate disturbances, a combined control structure has been developed, which includes two discrete time equalizers: a main one and a compensating one. The synthesis of electromechanical systems with direct current (DC) motors on the basis of the discrete time equalizer (a servosystem, an interconnected system, a system with reduced backlash influence, and a two-zone speed regulation system) was performed, and the obtained results were studied by means of mathematical modeling. The synthesis of a vector control system for an alternating current (AC) electric drive based on the discrete time equalizer was also carried out, and the obtained results were likewise studied by means of mathematical modeling. The obtained generalized transfer functions for the control objects in the reactive and active power loops, as well as the transfer functions of their inverse models, allow synthesizing vector control systems for AC electric drives on the basis of the discrete time equalizer in relative units for a wide range of industrial asynchronous motors. To carry out experimental research, a laboratory test bench was created, which allowed the technical implementation of the DC motor speed automatic control system synthesized on the basis of the discrete time equalizer. The proposed technique for assigning the basic values and specifying the desired transition function in relative coordinates allows the user to input data in accordance with a predefined template, which is convenient for the subsequent verification of physical realizability by the support vector machine (SVM) method.
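The core idea of shaping the drive so that the output reproduces a graphically specified, finite-duration transition function can be illustrated with a minimal inverse-model computation (a sketch with hypothetical numbers for a first-order discrete plant, not the dissertation's full equalizer structure):

import numpy as np

# Hypothetical discrete plant y[k+1] = a*y[k] + b*u[k] with unit DC gain (b = 1 - a).
a, b = 0.9, 0.1
N = 10                                            # desired settling horizon in samples

# Graphically specified desired step response: a smooth raised-cosine rise to 1 over N steps.
k = np.arange(N + 1)
y_des = 0.5 * (1.0 - np.cos(np.pi * k / N))

# Inverse-model ("equalizer-like") input sequence that reproduces y_des for this plant.
u = (y_des[1:] - a * y_des[:-1]) / b

# Forward simulation confirms the quantized desired transition function is tracked.
y = np.zeros(N + 1)
for i in range(N):
    y[i + 1] = a * y[i] + b * u[i]
print("max tracking error:", np.max(np.abs(y - y_des)))   # ~0 up to round-off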
Styles APA, Harvard, Vancouver, ISO, etc.
38

Шеремет, Олексій Іванович. « Синтез електромеханічних систем на базі дискретного часового еквалайзера ». Thesis, НТУ "ХПІ", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/35771.

Texte intégral
Résumé :
Дисертація на здобуття наукового ступеня доктора технічних наук за спеціальністю 05.09.03 – електротехнічні комплекси і системи. – Національний технічний університет "Харківський політехнічний інститут", Харків, 2018. Дисертація присвячена створенню єдиної методології синтезу систем керування електроприводами, виходячи з умови забезпечення бажаних динамічних властивостей за вихідною координатою, що задаються графічно у вигляді перехідної функції або набору точок. Дискретний регулятор, який забезпечує можливість налаштування на квантовані бажані перехідні функції кінцевої тривалості, названо дискретним часовим еквалайзером. Запропоновано математичний апарат, що дозволяє виконувати синтез систем автоматичного керування на базі дискретного часового еквалайзера як за умови повної компенсації динамічних властивостей об'єкта керування (ідеалізований варіант), так і при використанні модифікованого принципу симетрії структурних схем. Інтегруюча ланка використовується як блок модифікації зворотного перетворення. Для компенсації параметричних та координатних збурень розроблено структуру комбінованого керування, що включає до свого складу два дискретних часових еквалайзера: основний та компенсуючий. Виконано синтез електромеханічних систем з двигунами постійного струму на базі дискретного часового еквалайзера (слідкуючої системи, взаємозв'язаної, зі зниженням впливу люфта, системи двозонного регулювання швидкості) та проведено дослідження одержаних результатів шляхом математичного моделювання. Здійснено синтез системи векторного керування електроприводом змінного струму на базі дискретного часового еквалайзера та проведено дослідження одержаних результатів шляхом математичного моделювання. Отримані узагальнені передатні функції для об'єктів керування у контурах реактивної та активної потужностей, а також передатні функції їхніх обернених моделей дозволяють виконувати синтез систем векторного керування електроприводами змінного струму на базі дискретного часового еквалайзера у відносних одиницях для широкого спектру промислових асинхронних двигунів. Для виконання експериментальних досліджень створено лабораторно-дослідний стенд, який дозволив технічно реалізувати систему автоматичного керування швидкістю двигуна постійного струму, синтезовану на базі дискретного часового еквалайзера. Запропонована методика завдання базисних величин та бажаної перехідної функції у відносних координатах дозволяє користувачеві вводити дані у відповідності до заздалегідь визначеного шаблону, який є зручним для виконання подальшої перевірки на можливість фізичної реалізації за допомогою метода опорних векторів.
Dissertation for the degree of Doctor of Technical Science on Specialty 05.09.03 – Electrical Engineering Complexes and Systems. – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2018. The dissertation is devoted to the creation of a unified methodology for the synthesis of control systems for electric drives, based on the condition of providing the desired dynamic properties of the output coordinate, given graphically in the form of a transition function or a set of points. The discrete regulator, which provides the ability to tune to quantized desired transition functions of finite duration, is called a discrete time equalizer. A mathematical apparatus is proposed that allows the synthesis of automatic control systems based on the discrete time equalizer, either with full compensation of the control object's dynamic properties (the idealized variant) or with the use of a modified principle of symmetry of structural schemes. The integrating unit is used as the inverse transform modification block. To compensate for parametric and coordinate disturbances, a combined control structure has been developed, which includes two discrete time equalizers: a main one and a compensating one. The synthesis of electromechanical systems with direct current (DC) motors on the basis of the discrete time equalizer (a servosystem, an interconnected system, a system with reduced backlash influence, and a two-zone speed regulation system) was performed, and the obtained results were studied by means of mathematical modeling. The synthesis of a vector control system for an alternating current (AC) electric drive based on the discrete time equalizer was also carried out, and the obtained results were likewise studied by means of mathematical modeling. The obtained generalized transfer functions for the control objects in the reactive and active power loops, as well as the transfer functions of their inverse models, allow synthesizing vector control systems for AC electric drives on the basis of the discrete time equalizer in relative units for a wide range of industrial asynchronous motors. To carry out experimental research, a laboratory test bench was created, which allowed the technical implementation of the DC motor speed automatic control system synthesized on the basis of the discrete time equalizer. The proposed technique for assigning the basic values and specifying the desired transition function in relative coordinates allows the user to input data in accordance with a predefined template, which is convenient for the subsequent verification of physical realizability by the support vector machine (SVM) method.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Moine, Pascal. « Recalage de modèles éléments finis avec amortissement ». Châtenay-Malabry, Ecole centrale de Paris, 1997. http://www.theses.fr/1997ECAP0537.

Texte intégral
Résumé :
Cette thèse est consacrée au recalage de modèles éléments finis avec amortissement visqueux non proportionnel sur les caractéristiques modales des structures modélisées. Le document est scindé en trois parties. - la première partie consiste en un rappel succinct des différents types d'amortissement couramment utilisés, un exposé de la problématique du recalage et une présentation des méthodes de recalage disponibles. - la seconde partie rassemble les nouveaux développements réalisés par l'auteur. Trois méthodes de recalage de modèles sur les caractéristiques modales des structures correspondantes sont présentées : la méthode de sensibilité inverse des valeurs et vecteurs propres, la méthode d'erreur en valeurs et vecteurs propres et la méthode d'erreur en loi de raideur. Deux méthodes de localisation des erreurs de modélisations sont proposées : la méthode d'erreur en loi de raideur et la méthode d'erreur en loi d'inertie. - la troisième et dernière partie est consacrée à l'utilisation des méthodes figurant dans la seconde partie du document pour le recalage de modèles industriels sur des données issues de mesures. Trois modèles sont recalés : le modèle de la maquette du bâtiment réacteur Hualien, le modèle d'une tuyauterie sur appuis en caoutchouc et le modèle d'une machine tournante.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Woźniak, Mariusz. « CARACTÉRISATION D'AGRÉGATS DE NANOPARTICULES PAR DES TECHNIQUES DE DIFFUSION DE LA LUMIÈRE ». Phd thesis, Aix-Marseille Université, 2012. http://tel.archives-ouvertes.fr/tel-00747711.

Texte intégral
Résumé :
Ce travail de thèse de doctorat propose et évalue différentes solutions pour caractériser, avec des outils optiques et électromagnétiques non intrusifs, les nanoparticules et agrégats observés dans différents systèmes physiques : suspensions colloïdales, aérosols et plasma poussiéreux. Deux types de modèles sont utilisés pour décrire la morphologie: d'agrégats fractals (p. ex. : suies issues de la combustion, de procédés plasma) et agrégats compacts (qualifiés de " Buckyballs " et observés dans des aérosols produits par séchage de nano suspensions). Nous utilisons différentes théories et modèles électromagnétiques (T-Matrice et approximations du type dipôles discrets) pour calculer les diagrammes de diffusion (ou facteur de structure optique) de ces agrégats, de même que leurs spectres d'extinction sur une large gamme spectrale. Ceci, dans le but d'inverser les données expérimentales obtenues en temps réel. Différents outils numériques originaux ont également été mis au point pour parvenir à une analyse morphologique quantitative de clichés de microscopie électronique. La validation expérimentale des outils théoriques et numériques développés au cours de ce travail est focalisée sur la spectrométrie d'extinction appliquée à des nano agrégats de silice, tungstène et silicium.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Casadiego, Bastidas Jose Luis. « Network Dynamics as an Inverse Problem ». Doctoral thesis, 2016. http://hdl.handle.net/11858/00-1735-0000-002B-7CE4-3.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Adam, Ihusan. « Structure and collective behaviour : a focus on the inverse problem ». Doctoral thesis, 2021. http://hdl.handle.net/2158/1230776.

Texte intégral
Résumé :
The aim of this thesis is to inform our understanding of the exquisite relationship between function and structure of complex systems, with a particular focus on the inverse problem of inferring structure from collective expression. There exists a rich body of work explaining complex collective behaviour through its interdependence on structure, and this forms the core subject matter of the field of complex networks. However, in many cases of interest, the underlying structure of the observed system is often unknown and can only be studied through limited measurements. The first chapters of this thesis develop and refine a method of inferring the structure of a priori unknown networks by leveraging the celebrated heterogeneous mean-field approximations. The inverse protocol is first formulated for and rigorously challenged against synthetic simulations of reactive random walkers to successfully recover the degree distributions from partial observations of the system. The reconstruction framework developed is powerful enough to be applicable to many real-world systems of great interest. This is demonstrated by the extension of the method to a nonlinear leaky integrate-and-fire (LIF) excitatory neuronal model evolving on a directed network support to recover both the in-degree distribution and the distribution of associated currents in Chapter 5. In this chapter, the method is also applied to wide-field calcium imaging data from the brains of mice undergoing stroke and rehabilitation, which is presented as a spatiotemporal analysis in Chapter 4. The findings of Chapters 4 and 5 complement each other to showcase two potential non-invasive ways of tracking the post-stroke recovery of these animals. One analysis focuses on the subtle changes in propagation patterns quantified through three novel biomarkers, while the other shifts the attention to the changes in structure and inherent dynamics as seen through the inverse protocol. This reconstruction recipe has also been extended, in Chapter 6, to a more general two-species LIF model accounting for both inhibitory and excitatory neurons. This was applied to two-photon light-sheet microscopy data from zebrafish brains upon successful validation in silico. Lastly, Chapter 7 studies a particular phenomenon of interest where structure and inherent dynamics affect the function in a different but popular class of networks. A zero-mean noise-like prestrain is used to induce contractions in 1D elastic network models. The analysis shows that the exact solution is difficult to probe analytically, while the mean behaviours of the networks are predictable and controllable by tuning the magnitude of the applied prestrain.
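A very reduced illustration of the structure-from-dynamics theme (not the thesis's heterogeneous mean-field protocol): for an unbiased random walk on an undirected network, the stationary occupation of a node is proportional to its degree, so long-time averages of walker occupation already betray the degree sequence. A minimal sketch, with a hypothetical synthetic graph:

import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 0.05
A = np.triu((rng.random((n, n)) < p).astype(float), 1)
A = A + A.T                                        # Erdos-Renyi adjacency (undirected)
idx = np.arange(n)
A[idx, (idx + 1) % n] = 1.0
A[(idx + 1) % n, idx] = 1.0                        # ring backbone avoids isolated nodes
deg = A.sum(axis=1)

P = A / deg[:, None]                               # random-walk transition matrix (rows sum to 1)
pi = np.full(n, 1.0 / n)
for _ in range(500):
    pi = pi @ P                                    # evolve the walker distribution towards stationarity

# Stationary occupation is proportional to degree, so the two should correlate almost perfectly.
print("correlation(occupation, degree):", np.corrcoef(pi, deg)[0, 1])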
Styles APA, Harvard, Vancouver, ISO, etc.
43

Venugopal, Mamatha. « A Stochastic Search Approach to Inverse Problems ». Thesis, 2016. http://hdl.handle.net/2005/3042.

Texte intégral
Résumé :
The focus of the thesis is on the development of a few stochastic search schemes for inverse problems and their applications in medical imaging. After the introduction in Chapter 1, which motivates and puts in perspective the work done in later chapters, the main body of the thesis may be viewed as composed of two parts: while the first part concerns the development of stochastic search algorithms for inverse problems (Chapters 2 and 3), the second part elucidates the applicability of search schemes to inverse problems of interest in tomographic imaging (Chapters 4 and 5). The chapter-wise contributions of the thesis are summarized below. Chapter 2 proposes a Monte Carlo stochastic filtering algorithm for the recursive estimation of diffusive processes in linear/nonlinear dynamical systems that modulate the instantaneous rates of Poisson measurements. The same scheme is applicable when the set of partial and noisy measurements is of a diffusive nature. A key aspect of our development here is the filter-update scheme, derived from an ensemble approximation of the time-discretized nonlinear Kushner-Stratonovich equation, that is modified to account for Poisson-type measurements. Specifically, the additive update through a gain-like correction term, empirically approximated from the innovation integral in the filtering equation, eliminates the problem of particle collapse encountered in many conventional particle filters that adopt weight-based updates. Through a few numerical demonstrations, the versatility of the proposed filter is brought forth, first with application to filtering problems with diffusive or Poisson-type measurements and then to an automatic control problem wherein the extremization of the associated cost functional is achieved simply by an appropriate redefinition of the innovation process. The aim of one of the numerical examples in Chapter 2 is to minimize the structural response of a Duffing oscillator under external forcing. We pose this problem of active control within a filtering framework wherein the goal is to estimate the control force that minimizes an appropriately chosen performance index. We employ the proposed filtering algorithm to estimate the control force and the oscillator displacements and velocities that are minimized as a result of the application of the control force. While Fig. 1 shows the time histories of the uncontrolled and controlled (a) displacements and (b) velocities of the oscillator, a plot of the estimated control force against the external force applied is given in Fig. 2. Stochastic filtering, despite its numerous applications, amounts only to a directed search and is best suited for inverse problems and optimization problems with unimodal solutions. In view of general optimization problems involving multimodal objective functions with a priori unknown optima, filtering, similar to a regularized Gauss-Newton (GN) method, may only serve as a local (or quasi-local) search. In Chapter 3, therefore, we propose a stochastic search (SS) scheme that, whilst maintaining the basic structure of a filtered martingale problem, also incorporates randomization techniques such as scrambling and blending, which are meant to aid in avoiding the so-called local traps. The key contribution of this chapter is the introduction of yet another technique, termed state space splitting (3S), which is a paradigm based on the principle of divide-and-conquer. The 3S technique, incorporated within the optimization scheme, offers a better assimilation of measurements and is found to outperform filtering in the context of quantitative photoacoustic tomography (PAT), recovering the optical absorption field from sparsely available PAT data using a bare minimum ensemble. Other than that, the proposed scheme is numerically shown to be better than, or at least as good as, CMA-ES (covariance matrix adaptation evolution strategies), one of the best performing optimization schemes, in minimizing a set of benchmark functions. Table 1 (performance of the SS scheme and CMA-ES) gives the comparative performance of the proposed scheme and CMA-ES in minimizing a set of 40-dimensional functions (F1-F20), all of which have their global minimum at 0, using an ensemble size of 20; here, 10^-5 is the tolerance limit to be attained for the objective function value and MAX is the maximum number of iterations permissible to the optimization scheme to arrive at the global minimum. Chapter 4 gathers numerical and experimental evidence to support our conjecture in the previous chapters that even a quasi-local search (afforded, for instance, by the filtered martingale problem) is generally superior to a regularized GN method in solving inverse problems. Specifically, in this chapter, we solve the inverse problems of ultrasound modulated optical tomography (UMOT) and diffraction tomography (DT). In UMOT, we perform a spatially resolved recovery of the mean-squared displacements, p(r), of the scattering centres in a diffusive object by measuring the modulation depth in the decaying autocorrelation of the incident coherent light. This modulation is induced by the input ultrasound focussed on a specific region referred to as the region of interest (ROI) in the object. Since the ultrasound-induced displacements are a measure of the material stiffness, in principle, UMOT can be applied for the early diagnosis of cancer in soft tissues. In DT, on the other hand, we recover the real refractive index distribution, n(r), of an optical fiber from experimentally acquired transmitted intensity of light traversing through it. In both cases, the filtering step encoded within the optimization scheme recovers superior reconstruction images vis-à-vis the GN method in terms of quantitative accuracies. Fig. 3 gives a comparative cross-sectional plot through the centre of the reference and reconstructed p(r) images in UMOT when the ROI is at the centre of the object; here, the anomaly is presented as an increase in the displacements and is at the centre of the ROI. Fig. 4 shows the comparative cross-sectional plot of the reference and reconstructed refractive index distributions, n(r), of the optical fiber in DT. In Chapter 5, the SS scheme is applied to our main application, viz. photoacoustic tomography (PAT), for the recovery of the absorbed energy map, the optical absorption coefficient and the chromophore concentrations in soft tissues. The main contribution of this chapter is to provide a single-step method for the recovery of the optical absorption field from both simulated and experimental time-domain PAT data. A single-step direct recovery is shown to yield better reconstruction than the generally adopted two-step method for quantitative PAT. Such a quantitative reconstruction may be converted to a functional image through a linear map. Alternatively, one could also perform a one-step recovery of the chromophore concentrations from the boundary pressure, as shown using simulated data in this chapter. Being a Monte Carlo scheme, the SS scheme is highly parallelizable, and the availability of such a machine-ready inversion scheme should finally enable PAT to emerge as a clinical tool in medical diagnostics. Fig. 5 compares the (a) exact and (b) reconstructed optical absorption maps of the Shepp-Logan phantom obtained from a direct (one-step) recovery (the x- and y-axes are in m and the colormap is in mm^-1). Chapter 6 concludes the work with a brief summary of the results obtained and suggestions for future exploration of some of the schemes and applications described in this thesis.
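The additive, gain-based (weight-free) ensemble update mentioned for Chapter 2 can be caricatured in a linear-Gaussian toy setting (a sketch only; the thesis's filters treat nonlinear dynamics and Poisson-type measurements, which are not reproduced here):

import numpy as np

rng = np.random.default_rng(3)
n_ens, n_state, n_obs = 200, 2, 1

X = rng.normal(loc=[1.0, -0.5], scale=0.5, size=(n_ens, n_state))      # prior ensemble
H = np.array([[1.0, 0.0]])                                             # observe the first component
R = np.array([[0.05]])
y = np.array([1.4])                                                    # the measurement

Y = X @ H.T + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens)  # perturbed predicted observations
X_dev = X - X.mean(axis=0)
Y_dev = Y - Y.mean(axis=0)
Pxy = X_dev.T @ Y_dev / (n_ens - 1)
Pyy = Y_dev.T @ Y_dev / (n_ens - 1)
K = Pxy @ np.linalg.inv(Pyy)                      # empirical gain estimated from the ensemble
X_post = X + (y - Y) @ K.T                        # additive, gain-type correction (no particle weights)

print("prior mean     :", X.mean(axis=0))
print("posterior mean :", X_post.mean(axis=0))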
Styles APA, Harvard, Vancouver, ISO, etc.
44

MARCHETTI, Luca. « MP Representations of Biological Structures and Dynamics ». Doctoral thesis, 2012. http://hdl.handle.net/11562/405336.

Texte intégral
Résumé :
Il tema principale di questa tesi di dottorato riguarda la soluzione di problemi di dinamica inversa nel campo dei P sistemi metabolici (sistemi MP). I sistemi MP, basati sui P sistemi classici di Paun, sono stati introdotti da Manca nel 2004 per permettere la modellazione di sistemi metabolici per mezzo di grammatiche di riscrittura su multi-insiemi. In questo tipo di grammatiche, le trasformazioni di multi-insiemi di oggetti sono regolate, in modo deterministico, da particulari funzioni di stato chiamate regolatori. Il risultato chiave presentato in questa tesi riguarda la definizione di un algoritmo di regressione, chiamato LGSS (Log-gain Stoichiometric Stepwise regression), il quale implementa un ambiente di lavoro completo per affrontare problemi di dinamica inversa con sistemi MP. In particolare, LGSS è in grado di definire nuovi modelli MP, partendo dalle serie temporali di una dinamica osservata, combinando ed estendendo il principio di log-gain, sviluppato nella teoria dei sistemi MP, con il metodo classico di regressione stepwise, il quale è basato sull'uso di approssimazioni ai minimi quadrati e di test F. Nella parte finale della tesi, sono inoltre presentate tre applicazioni dei sistemi MP nella scoperta della logica di regolazione in fenomeni importanti nel campo della systems biology. Nonostante le evidenti differenze riscontrabili tra i fenomeni considerati, i quali spaziano da metabolismi a reti di regolazione genica, in tutti i casi affrontati è stato possibile definire modelli che esibiscono una buona approssimazione delle serie temporali osservate e che evidenziano fenomeni regolativi totalmente nuovi o che erano stati solamente teorizzati in precedenza.
The main theme of this Ph.D. thesis is focused on the solution of dynamical inverse problems in the context of Metabolic P systems (MP systems). Metabolic P systems, based on Paun's P systems, were introduced by Manca in 2004 for modelling metabolic systems by means of suitable multiset rewriting grammars. In such kind of grammars, multiset transformations are regulated, in a deterministic way, by particular functions called regulators. The key result presented in the thesis is the definition of a regression algorithm, called LGSS (Log-gain Stoichiometric Stepwise regression), which provides a complete statistical regression framework for dealing with inverse dynamical problems in the MP context. In particular, LGSS derives MP models from the time series of observed dynamics by combining and extending the log-gain principle, developed in the MP system theory, with the classical method of Stepwise Regression, which is a statistical regression technique based on least squares approximation and statistical F-tests. In the last part of the thesis, three applications of MP systems are also presented for discovering, by means of LGSS, the internal regulation logic of phenomena relevant in systems biology. Despite the differences between the considered phenomena, which comprise both metabolic and gene regulatory processes, in all the cases a model was found that exhibits good approximation of the observed time series and highlights results which are new or that have been only theorized before.
Styles APA, Harvard, Vancouver, ISO, etc.
45

ZHANG, BING-HENG, et 張秉恆. « Inverse dynamics based dynamic programming methods for optimal control problems with linear dynamics ». Thesis, 1992. http://ndltd.ncl.edu.tw/handle/40523739089997726742.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
46

YU, DING-ZONG, et 余定宗. « Parameterization and inverse dynamics based dynamic programming methods for optimal robot control problems ». Thesis, 1992. http://ndltd.ncl.edu.tw/handle/70722726481916029093.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
47

Kogit, Megan. « A dynamic finite element framework built towards the inverse problem of soft tissues ». 2008. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.17148.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
48

Li, Jien-Hui, et 李建輝. « Dynamic Optical Probing and Inverse Problem of Liquid Crystal Alignment Structures ». Thesis, 2008. http://ndltd.ncl.edu.tw/handle/57181281680069262491.

Texte intégral
Résumé :
碩士
國立交通大學
顯示科技研究所
97
The liquid crystal (LC) director profile is an important property for a variety of LC applications. In this study, we combine simulation and experimental measurement of the optical responses of hybrid alignment liquid crystal cells to demonstrate the functionality of LC director profile retrieval. Our simulation invokes the Q-tensor formalism for the liquid crystal director calculation and the Berreman matrix method for the optical response of the LC. An electron-multiplying charge coupled device and a delay time generator were combined to capture the dynamic optical images of the liquid crystal cells. We discovered that by including a hybrid alignment region in an OCB cell, the warm-up time of the LC cell can be effectively eliminated; the relaxation time was, unfortunately, also increased. We also study the inverse problem of LC to retrieve the director profile of a liquid crystal cell directly from the measured optical transmittance data. To retrieve the director profile with the inverse problem technique, we proposed a regularization matrix based on a priori knowledge of the LC. We found that our method can reliably yield the LC director profile from measured optical data covering a wide range of incident angles and a 10% noise level. We further demonstrated the functionality by retrieving the liquid crystal director profiles of LC cells under applied voltage from the experimentally measured data.
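The role played by a regularization matrix built from a priori knowledge can be shown with a generic linearised Tikhonov sketch (hypothetical operators and a smooth stand-in profile; the actual forward model in the thesis is the Berreman-matrix optical response):

import numpy as np

rng = np.random.default_rng(4)
n_layers, n_meas = 40, 60

J = rng.normal(size=(n_meas, n_layers))            # linearised forward operator (stand-in Jacobian)
x_true = np.sin(np.linspace(0.0, np.pi, n_layers)) # smooth "director profile" surrogate
d = J @ x_true + 0.1 * rng.normal(size=n_meas)     # noisy transmittance-like data (~10% noise)

# Second-difference regularization matrix encoding a priori smoothness of the profile.
L = -2.0 * np.eye(n_layers) + np.eye(n_layers, k=1) + np.eye(n_layers, k=-1)

lam = 1.0
x_hat = np.linalg.solve(J.T @ J + lam * L.T @ L, J.T @ d)
print("relative reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))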
Styles APA, Harvard, Vancouver, ISO, etc.
49

Saha, Nilanjan. « Methods For Forward And Inverse Problems In Nonlinear And Stochastic Structural Dynamics ». Thesis, 2007. http://hdl.handle.net/2005/608.

Texte intégral
Résumé :
A main thrust of this thesis is to develop and explore linearization-based numeric-analytic integration techniques in the context of stochastically driven nonlinear oscillators of relevance in structural dynamics. Unfortunately, unlike the case of deterministic oscillators, available numerical or numeric-analytic integration schemes for stochastically driven oscillators, often modelled through stochastic differential equations (SDE-s), have significantly poorer numerical accuracy. These schemes are generally derived through stochastic Taylor expansions, and the limited accuracy results from difficulties in evaluating the multiple stochastic integrals. We propose a few higher-order methods based on the stochastic version of transversal linearization and another method of linearizing the nonlinear drift field based on a Girsanov change of measures. When these schemes are implemented within a Monte Carlo framework for computing the response statistics, one typically needs repeated simulations over a large ensemble. The statistical error due to the finiteness of the ensemble (of size N, say) is of order 1/√N, which implies a rather slow convergence as N→∞. Given the prohibitively large computational cost as N increases, a variance reduction strategy that enables computing accurate response statistics for small N is considered useful. This leads us to propose a weak variance reduction strategy. Finally, we use the explicit derivative-free linearization techniques for state and parameter estimations for structural systems using the extended Kalman filter (EKF). A two-stage version of the EKF (2-EKF) is also proposed so as to account for errors due to linearization and unmodelled dynamics. In Chapter 2, we develop higher-order locally transversal linearization (LTL) techniques for strong and weak solutions of stochastically driven nonlinear oscillators. For developing the higher-order methods, we expand the non-linear drift and multiplicative diffusion fields based on backward Euler and Newmark expansions while simultaneously satisfying the original vector field at the forward time instant where we intend to find the discretized solution. Since the non-linear vector fields are conditioned on the solution we wish to determine, the methods are implicit. We also report explicit versions of such linearization schemes via simple modifications. Local error estimates are provided for weak solutions. Weak linearized solutions enable faster computation vis-à-vis their strong counterparts. In Chapter 3, we propose another weak linearization method for non-linear oscillators under stochastic excitations based on a Girsanov transformation of measures. Here, the non-linear drift vector is appropriately linearized such that the resulting SDE is analytically solvable. In order to account for the error in replacing the non-linear drift terms, the linearized solutions are multiplied by a scalar weighting function. The weighting function is the solution of a scalar SDE (i.e., the Radon-Nikodym derivative). Apart from numerically illustrating the method through applications to non-linear oscillators, we also use the Girsanov transformation of measures to correct the truncation errors in lower order discretizations.
In order to achieve efficiency in the computation of response statistics via Monte Carlo simulation, we propose in Chapter 4 a weak variance reduction strategy such that the ensemble size is significantly reduced without seriously affecting the accuracy of the predicted expectations of any smooth function of the response vector. The basis of the variance reduction strategy is to appropriately augment the governing system equations and then weakly replace the associated stochastic forcing functions through variance-reduced functions. In the process, the additional computational cost due to system augmentation is generally far outweighed by the advantages accrued from a drastically reduced ensemble size. The variance reduction scheme is illustrated through applications to several non-linear oscillators, including a 3-DOF system. Finally, in Chapter 5, we exploit the explicit forms of the LTL techniques for state and parameter estimation of non-linear oscillators of engineering interest using a novel derivative-free EKF and a 2-EKF. In the derivative-free EKF, we use one-term, Euler and Newmark replacements for linearizations of the non-linear drift terms. In the 2-EKF, we use bias terms to account for errors due to lower order linearization and unmodelled dynamics in the mathematical model. Numerical studies establish the relative advantages of the EKF-DLL as well as the 2-EKF over the conventional forms of EKF. The thesis is concluded in Chapter 6 with an overall summary of the contributions made and suggestions for future research.
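The 1/√N statistical error quoted above is easy to verify numerically (a generic Monte Carlo sketch, unrelated to the specific oscillators treated in the thesis): the empirical standard deviation of the sample-mean estimator, multiplied by √N, stays roughly constant as N grows.

import numpy as np

rng = np.random.default_rng(5)
true_mean = np.exp(0.5)                            # E[exp(Z)] for Z ~ N(0, 1)

for n in [100, 1000, 10000, 100000]:
    reps = np.array([np.mean(np.exp(rng.normal(size=n))) for _ in range(200)])
    print(n,
          "| bias ~", abs(reps.mean() - true_mean),
          "| std ~", reps.std(),
          "| std * sqrt(N) ~", reps.std() * np.sqrt(n))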
Styles APA, Harvard, Vancouver, ISO, etc.
50

Gupta, Saurabh. « Development Of Deterministic And Stochastic Algorithms For Inverse Problems Of Optical Tomography ». Thesis, 2013. http://etd.iisc.ernet.in/handle/2005/2608.

Texte intégral
Résumé :
Stable and computationally efficient reconstruction methodologies are developed to solve two important medical imaging problems which use near-infrared (NIR) light as the source of interrogation, namely diffuse optical tomography (DOT) and one of its variations, ultrasound-modulated optical tomography (UMOT). Since in both these imaging modalities the system matrices are ill-conditioned owing to insufficient and noisy data, the emphasis in this work is on developing robust stochastic filtering algorithms which can handle measurement noise and also account for inaccuracies in the forward models through an appropriate assignment of process noise. We start, however, by demonstrating the speed-up of a Gauss-Newton (GN) algorithm for DOT so that video-rate reconstruction from data recorded on a CCD camera becomes feasible. Towards this, a computationally efficient linear iterative scheme is proposed to invert the normal equation of the GN scheme, in the context of recovering the absorption coefficient distribution from DOT data, which involves the singular value decomposition (SVD) of the Jacobian matrix appearing in the update equation. This speeds up the inversion sufficiently that video-rate recovery of a time-evolving absorption coefficient distribution is demonstrated from experimental data. The SVD-based algorithm reduces the operation count of the image reconstruction to O(N²) from O(N³).

The remaining algorithms are based on different forms of stochastic filtering, wherein we arrive at a mean-square estimate of the parameters by computing their joint probability distribution conditioned on the measurements up to the current instant. The first algorithm developed under this framework uses a bootstrap (BS) particle filter that incorporates a quasi-Newton direction within the update. Since keeping track of the Newton direction necessitates repeated computation of the Jacobian, for all particle locations and all time steps, we devise a faster update of the Jacobian to keep the recovery computationally feasible. It is demonstrated, through analytical reasoning and numerical simulations, that the proposed scheme not only accelerates convergence but also yields substantially reduced sample variance in the estimates vis-à-vis the conventional BS filter; both advantages are demonstrated in DOT optical parameter recovery using simulated and experimental data.

Next, a derivative-free variant of the pseudo-dynamic ensemble Kalman filter (PD-EnKF) is developed for DOT, wherein the dimension of the unknown parameter vector is reduced by representing the inhomogeneities through simple geometrical shapes, and the optical parameter fields within the inhomogeneities are approximated via an expansion in circular harmonics (CH), i.e., Fourier basis functions. The EnKF is then used to recover the coefficients in the expansion from both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the PD-EnKF thus yield a parsimonious representation of the filter variables, which consist only of the Fourier coefficients and the constant scalar parameter value within the inclusion.
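As a rough illustration of such a parsimonious, shape-based parameterization, the Python sketch below (with hypothetical names and grid) builds a 2-D absorption map from a star-shaped inclusion whose boundary radius is expanded in circular harmonics and whose interior value is a single scalar. This is one plausible reading of the representation described above, not the thesis's exact construction.

    import numpy as np

    def inclusion_absorption_map(grid_x, grid_y, center, coeffs, mu_bg, mu_in):
        # Boundary radius of the inclusion in circular harmonics:
        #   r(theta) = c0 + sum_k (a_k cos(k*theta) + b_k sin(k*theta)).
        # Points closer to `center` than r(theta) get mu_in, the rest mu_bg.
        c0, a, b = coeffs
        dx, dy = grid_x - center[0], grid_y - center[1]
        rho = np.hypot(dx, dy)
        theta = np.arctan2(dy, dx)
        r = c0 * np.ones_like(theta)
        for k, (ak, bk) in enumerate(zip(a, b), start=1):
            r += ak * np.cos(k * theta) + bk * np.sin(k * theta)
        return np.where(rho < r, mu_in, mu_bg)

    # Hypothetical grid (mm); absorption values taken from the resume below.
    # xs = np.linspace(-20, 20, 128)
    # X, Y = np.meshgrid(xs, xs)
    # mu_a = inclusion_absorption_map(X, Y, center=(5.0, 0.0),
    #                                 coeffs=(6.0, [1.0], [0.5]),
    #                                 mu_bg=0.01, mu_in=0.02)

With this construction, the filter variables reduce to the harmonic coefficients and the scalar interior value, which is the parsimony the résumé refers to.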
Using fictitious, low-intensity Wiener noise processes in suitably constructed 'measurement' equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. In our numerical simulations we consider both elliptical inclusions (two inhomogeneities) and more complex shapes (such as an annular ring and a dumbbell) in 2-D objects that are cross-sections of a cylinder, with the background absorption and (reduced) scattering coefficients chosen as μa = 0.01 mm⁻¹ and μs′ = 1.0 mm⁻¹, respectively. We also assume μa = 0.02 mm⁻¹ within the inhomogeneity (for the single-inhomogeneity case) and μa = 0.02 and 0.03 mm⁻¹ (for the two-inhomogeneity case). The reconstructions obtained with the PD-EnKF are shown to be consistently superior to those from a deterministic, explicitly regularized Gauss-Newton algorithm. We also estimate the unknowns from experimentally gathered fluence data and verify the reconstruction by matching the experimental data with the computed data.

The superiority of a modified version of the PD-EnKF, which uses an ensemble square-root filter, is also demonstrated in the context of UMOT by recovering the distribution of the mean-squared amplitude of vibration, related to Young's modulus, in the ultrasound focal volume. Since the ability of a coherent light probe to pick up the overall optical path-length change is limited to modulo an optical wavelength, the individual displacements induced by the ultrasound (US) forcing should be very small, say within a few angstroms. The sensitivity of the modulation depth to changes in these small displacements can be very small, especially when the region of interest (ROI) is far removed from the source and detector. The contrast recovery of the unknown distribution in such cases can be seriously impaired when using a quasi-Newton scheme (e.g., the GN scheme) that crucially relies on derivative information. The derivative-free, gain-based Monte Carlo filter not only remedies this deficiency but also provides a regularization-insensitive and computationally competitive alternative to the GN scheme. The inherent ability of a stochastic filter to accommodate the model error arising from a diffusion approximation of correlation transport may be cited as an added advantage in the context of the UMOT inverse problem.

Finally, to speed up the forward solution of the partial differential equation (PDE) modelling photon transport in UMOT, in which time appears as a parameter, a spectral decomposition of the PDE operator is demonstrated. This allows the time-dependent forward solution to be computed in terms of the eigenfunctions of the PDE operator, which accelerates the forward solve and in turn renders the UMOT parameter recovery computationally efficient.
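The essence of such a spectral forward solver can be conveyed by a short sketch: once a discrete approximation A of the PDE operator (assumed symmetric here) has been diagonalized, solutions of du/dt = −A u at arbitrary times follow by scaling the eigen-coefficients, with no time stepping. The Python code below, with generic names, illustrates this reuse of an eigendecomposition; it is not the thesis's actual UMOT forward model, whose operator and boundary conditions are more involved.

    import numpy as np

    def spectral_forward_solution(A, u0, times):
        # Time-dependent solution of du/dt = -A u via eigendecomposition of A.
        # A is the (symmetric) discretized spatial operator, u0 the initial state.
        lam, Phi = np.linalg.eigh(A)                 # A = Phi diag(lam) Phi^T
        c0 = Phi.T @ u0                              # project u0 onto eigenvectors
        # Each requested time only rescales the eigen-coefficients.
        return [Phi @ (np.exp(-lam * t) * c0) for t in times]

Because the (costly) eigendecomposition is computed once and reused for every time instant and every parameter update, the per-evaluation cost of the forward solve drops sharply, which is what makes the repeated forward solves inside the inversion affordable.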
Styles APA, Harvard, Vancouver, ISO, etc.