Theses on the topic "Statistical inversions"

For other types of publications on this topic, follow this link: Statistical inversions.

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles


Consult the 38 best theses for your research on the topic "Statistical inversions".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Boucher, Eulalie. « Designing Deep-Learning models for surface and atmospheric retrievals from the IASI infrared sounder ». Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS145.

Full text
Abstract:
Observing the Earth is vital to comprehend and monitor the complex behaviour of our planet. Satellites, equipped with a number of sophisticated sensors, serve as a key platform for this, offering an opportunity to observe the Earth globally and continuously. Machine Learning (ML) techniques have been used in the remote sensing community for several decades to deal with the vast amount of data generated daily by Earth observation systems. The revolution brought about by novel Deep Learning (DL) techniques has, however, opened up new possibilities for the exploitation of satellite observations. This research aims to show that image-processing techniques such as Convolutional Neural Networks (CNNs), provided that they are well mastered, have the potential to improve the estimation of the Earth's atmospheric and surface parameters. By looking at the observations at the image scale rather than at the pixel scale, spatial dependencies can be taken into account. Such techniques are used for the retrieval of surface and atmospheric temperatures, as well as cloud detection and classification, from Infrared Atmospheric Sounding Interferometer (IASI) observations. IASI, onboard the polar-orbiting Metop satellites, is a hyperspectral sounder gathering data across a broad range of infrared wavelengths that are suitable for identifying atmospheric constituents at a range of atmospheric vertical levels, as well as surface parameters. In addition to improving the quality of the retrievals, such Artificial Intelligence (AI) methods are capable of dealing with images that contain missing data, better estimating extreme events (often overlooked by traditional statistical techniques) and estimating retrieval uncertainties. This thesis shows why AI methods, and in particular CNNs with partial convolutions, should be the preferred approach for the exploitation of observations coming from new satellite missions such as IASI-NG or MTG-S IRS.
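As an illustration of the building block the abstract highlights, here is a minimal, hypothetical sketch of a single-channel partial convolution over an image with missing pixels: invalid pixels are excluded from each window's weighted sum and the result is renormalised by the fraction of valid pixels. This is a toy NumPy version for intuition, not the thesis's actual model.

```python
import numpy as np

def partial_conv2d(image, mask, kernel):
    """Single-channel partial convolution: pixels with mask == 0 are treated
    as missing, excluded from each window's weighted sum, and the result is
    rescaled by (window size / number of valid pixels in the window)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image * mask, ((ph, ph), (pw, pw)))
    padded_mask = np.pad(mask, ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape, dtype=float)
    new_mask = np.zeros(mask.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            win = padded[i:i + kh, j:j + kw]
            valid = padded_mask[i:i + kh, j:j + kw].sum()
            if valid > 0:
                out[i, j] = (win * kernel).sum() * (kh * kw) / valid
                new_mask[i, j] = 1.0   # the output pixel is now defined
    return out, new_mask
```

On a constant image with an averaging kernel, the renormalisation makes the output invariant to where the holes are, which is the point of the technique.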
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Ladrón de Guevara Cortés, Rogelio. « Techniques For Estimating the Generative Multifactor Model of Returns in a Statistical Approach to the Arbitrage Pricing Theory. Evidence from the Mexican Stock Exchange ». Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/386545.

Full text
Abstract:
This dissertation focuses on the estimation of the generative multifactor model of returns on equities, under a statistical approach to the Arbitrage Pricing Theory (APT), in the context of the Mexican Stock Exchange. The research takes as its frameworks two main issues: (i) multifactor asset pricing models, especially the statistical risk factor approach, and (ii) the dimension reduction or feature extraction techniques (Principal Component Analysis, Factor Analysis, Independent Component Analysis and Non-linear Principal Component Analysis) used to extract the underlying systematic risk factors. The estimated models are tested using two methodologies: (i) the capability of the estimated generative multifactor model to reproduce the observed returns, and (ii) the results of the econometric test of the APT using the extracted systematic risk factors. Finally, a comparative study among the techniques is carried out, based on their theoretical properties and the empirical results. To the best of our knowledge, this dissertation contributes to financial research by providing empirical evidence on the estimation of the generative multifactor model of returns on equities, extracting statistical underlying risk factors via classic and alternative dimension reduction or feature extraction techniques, in order to test the APT as an asset pricing model in the context of an emerging financial market such as the Mexican Stock Exchange. In addition, this work presents an unprecedented theoretical and empirical comparison of Principal Component Analysis, Factor Analysis, Independent Component Analysis and Neural Networks Principal Component Analysis as techniques to extract systematic risk factors from a stock exchange, analyzing the sensitivity of the results to the technique employed.
In addition, this dissertation represents a mainly empirical, exhaustive study in which objective evidence about the Mexican stock market is provided through the application of four different factor extraction techniques to four datasets, in a test window ranging from two to nine factors.
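The statistical risk-factor extraction described above can be sketched in a few lines for its simplest variant: PCA on the sample covariance of a T x N returns matrix yields factor loadings and systematic factors. This is a generic illustration with our own names and shapes, not the dissertation's code.

```python
import numpy as np

def pca_risk_factors(returns, k):
    """Extract k statistical risk factors from a T x N matrix of asset returns:
    factors are projections of the centred returns onto the top-k eigenvectors
    of the sample covariance (the PCA variant of the statistical APT approach)."""
    X = returns - returns.mean(axis=0)       # centre each asset's series
    cov = (X.T @ X) / (X.shape[0] - 1)       # N x N sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1][:k]    # indices of the top-k eigenvalues
    loadings = eigvecs[:, order]             # N x k factor loadings
    factors = X @ loadings                   # T x k systematic risk factors
    return factors, loadings
```

With k = N the factors reproduce the centred returns exactly (factors @ loadings.T), which is the "capability of reproduction" check the abstract mentions in its simplest form.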
3

Degenhardt, Sheldon. « Weighted-inversion statistics and their symmetry groups / ». The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487941504293867.

Full text
4

Fletcher, R. P. « Statistical inversion of surface parameters from ATSR-2 satellite observations ». Thesis, University of Reading, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267415.

Full text
5

Boberg, Jonas. « Counting Double-Descents and Double-Inversions in Permutations ». Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54431.

Full text
Abstract:
In this paper, new variations of some well-known permutation statistics are introduced and studied. Firstly, a double-descent of a permutation π is defined as a position i where πi ≥ 2πi+1. Using proofs by induction as well as direct proofs, recursive and explicit expressions for the number of n-permutations with k double-descents are presented, together with an expression for the total number of double-descents over all n-permutations. Secondly, a double-inversion of a permutation π is defined as a pair (πi, πj) where i < j but πi ≥ 2πj. The total number of double-inversions over all n-permutations is also presented.
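The two statistics defined above are easy to compute by brute force, which is a convenient way to check the recursive and explicit formulas on small n. A short sketch (our own illustration):

```python
from itertools import permutations

def double_descents(perm):
    """Positions i (0-indexed) with perm[i] >= 2 * perm[i+1]."""
    return [i for i in range(len(perm) - 1) if perm[i] >= 2 * perm[i + 1]]

def double_inversions(perm):
    """Pairs (perm[i], perm[j]) with i < j and perm[i] >= 2 * perm[j]."""
    n = len(perm)
    return [(perm[i], perm[j]) for i in range(n) for j in range(i + 1, n)
            if perm[i] >= 2 * perm[j]]

def total_double_descents(n):
    """Total number of double-descents over all n-permutations (brute force)."""
    return sum(len(double_descents(p)) for p in permutations(range(1, n + 1)))
```

For example, the permutation 2 1 3 has a single double-descent (2 ≥ 2·1), and the six 3-permutations contain four double-descents in total.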
6

Fu, Shuai. « Inversion probabiliste bayésienne en analyse d'incertitude ». PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00766341.

Full text
Abstract:
This research work proposes a solution to probabilistic inverse problems using tools from Bayesian statistics. The inverse problem considered is to estimate the distribution of an unobserved random variable X from noisy observations Y obtained through a computationally expensive physical model H. In general, such inverse problems arise in uncertainty analysis. The Bayesian framework allows prior expert knowledge to be taken into account, which is especially valuable when few data are available. A Metropolis-Hastings-within-Gibbs algorithm is proposed to approximate the posterior distribution of the parameters of X through a data-augmentation process. Because of the large number of calls, the expensive function H is replaced by a kriging emulator (metamodel) H-hat. This approach introduces several errors of different kinds and, in this work, we aim to estimate and reduce their impact. The DAC criterion is proposed to assess the relevance of the experimental design and the choice of the prior distribution, taking the observations into account. Another contribution is the construction of an adaptive design suited to our particular objective in the Bayesian framework. The main methodology presented in this work has been applied to a hydraulic engineering case study.
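A minimal sketch of the Metropolis-Hastings-within-Gibbs idea mentioned in the abstract, with a generic log-posterior standing in for the expensive model (or its kriging emulator H-hat); the function names, step size and toy target are our own illustrative assumptions.

```python
import math
import random

def mh_within_gibbs(logpost, x0, n_iter=5000, step=0.5, seed=42):
    """Metropolis-Hastings-within-Gibbs: sweep over the coordinates, updating
    one at a time with a Gaussian random-walk proposal accepted with the
    usual Metropolis ratio on the log-posterior."""
    rng = random.Random(seed)
    x = list(x0)
    lp = logpost(x)
    samples = []
    for _ in range(n_iter):
        for i in range(len(x)):
            prop = x[:]
            prop[i] += rng.gauss(0.0, step)
            lp_prop = logpost(prop)
            # Accept with probability min(1, exp(lp_prop - lp)).
            if lp_prop - lp >= 0 or rng.random() < math.exp(lp_prop - lp):
                x, lp = prop, lp_prop
        samples.append(x[:])
    return samples

# Toy target: a standard bivariate normal log-posterior.
samples = mh_within_gibbs(lambda v: -0.5 * (v[0]**2 + v[1]**2), [3.0, -3.0])
```

In the thesis's setting, `logpost` would also marginalise over the augmented data and carry the emulator's error term; the coordinate-wise structure is what the "within-Gibbs" part refers to.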
7

Chebikin, Denis. « Polytopes, generating functions, and new statistics related to descents and inversions in permutations ». Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43793.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2008.
Includes bibliographical references (p. 75-76).
We study new statistics on permutations that are variations on the descent and inversion statistics. In particular, we consider the alternating descent set of a permutation σ = σ1σ2...σn, defined as the set of indices i such that either i is odd and σi > σi+1, or i is even and σi < σi+1. We show that this statistic is equidistributed with the 3-descent set statistic on permutations σ = σ1σ2...σn+1 with σ1 = 1, defined to be the set of indices i such that the triple σi σi+1 σi+2 forms an odd permutation of size 3. We then introduce Mahonian inversion statistics corresponding to the two new variations of descents and show that the joint distributions of the resulting descent-inversion pairs are the same. We examine the generating functions involving alternating Eulerian polynomials, defined by analogy with the classical Eulerian polynomials ... using alternating descents. By looking at the number of alternating inversions in alternating (down-up) permutations, we obtain a new q-analog of the Euler number En and show how it emerges in a q-analog of an identity expressing En as a weighted sum of Dyck paths. Other parts of this thesis are devoted to polytopes relevant to the descent statistic. One such polytope is a "signed" version of the Pitman-Stanley parking function polytope, which can be viewed as a generalization of the chain polytope of the zigzag poset. We also discuss the family of descent polytopes, also known as order polytopes of ribbon posets, giving ways to compute their f-vectors and looking further into their combinatorial structure.
by Denis Chebikin.
Ph.D.
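The alternating descent statistic defined in the abstract can be computed directly, which makes it easy to tabulate its distribution for small n. A brute-force sketch (our own illustration):

```python
from collections import Counter
from itertools import permutations

def alt_descent_set(perm):
    """Alternating descent set: 1-indexed positions i such that i is odd and
    perm[i] > perm[i+1], or i is even and perm[i] < perm[i+1]."""
    return frozenset(i for i in range(1, len(perm))
                     if (i % 2 == 1) == (perm[i - 1] > perm[i]))

def alt_descent_distribution(n):
    """Distribution of the alternating descent set over all n-permutations."""
    return Counter(alt_descent_set(p) for p in permutations(range(1, n + 1)))
```

For instance, the identity 1 2 3 has alternating descent set {2} (position 2 is even and 2 < 3), while the down-up permutation 2 1 3 has {1, 2}.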
8

Ars, Sébastien. « Caractérisation des émissions de méthane à l'échelle locale à l'aide d'une méthode d'inversion statistique basée sur un modèle gaussien paramétré avec les données d'un gaz traceur ». Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV030/document.

Full text
Abstract:
The increase in atmospheric methane concentrations since the beginning of the industrial era is directly linked to anthropogenic activities. This increase is partly responsible for the enhancement of the greenhouse effect, leading to a rise in Earth's surface temperatures and a degradation of air quality. There are still considerable uncertainties regarding methane emission estimates for many sources at the local scale. A better characterization of these sources would help the implementation of effective adaptation and mitigation policies to reduce these emissions. To this end, we have developed a new method to quantify methane emissions from local sites based on the combination of mobile atmospheric measurements, a Gaussian model and a statistical inversion. These atmospheric measurements are carried out within the framework of the tracer method, which consists in emitting a gas co-located with the methane source at a known flow rate. An estimate of methane emissions can be obtained by measuring the tracer and methane concentrations across the emission plume coming from the site. This method has some limitations, especially when several sources and/or extended sources are present on the studied site; under these conditions, co-locating the tracer with the methane sources is difficult. The Gaussian model makes it possible to account for this imperfect collocation. It also gives a separate estimate for each source of a site, whereas the classical tracer release method only gives an estimate of the total emissions. The statistical inversion makes it possible to account for the uncertainties associated with the model and the measurements. The method uses the measured tracer gas concentrations to choose the stability class of the Gaussian model that best represents the atmospheric conditions during the measurements. These tracer data are also used to parameterize the error associated with the measurements and the model in the statistical inversion.
We first tested this new method with controlled emissions of tracer and methane, positioning the sources in different configurations in order to better understand the contributions of this method compared to the traditional tracer method. These tests demonstrated that the statistical inversion parameterized by the tracer gas data gives better estimates of methane emissions when the tracer and methane sources are not perfectly collocated or when there are several methane sources. Second, I applied this method to two sites known for their methane emissions, namely a farm and a gas distribution facility. These measurements enabled us to test the applicability and robustness of the method under more complex methane source distributions, and gave us better estimates of the total methane emissions of these sites that take into account the location of the tracer relative to the methane sources. Separate estimates of every source within a site are highly dependent on the meteorological conditions during the measurements. The analysis of the correlations of the posterior uncertainties between the different sources provides a diagnostic of the separability of the sources. Finally, I focused on methane emissions associated with the waste sector. To do so, I carried out several measurement campaigns at landfills and wastewater treatment plants, and I also used data collected at this type of site during other projects. I selected the most suitable method to estimate the methane emissions of each site, and the estimates obtained for these sites show the variability of methane emissions in the waste sector.
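The forward model at the heart of such an inversion can be sketched as a standard Gaussian plume with ground reflection; since the modelled concentration is linear in the emission rate, a least-squares rate estimate follows immediately. This toy version (all parameter values illustrative) omits the stability-class selection and error parameterisation that the thesis derives from the tracer data.

```python
import math

def gaussian_plume(q, y, z, u, sigma_y, sigma_z, h=0.0):
    """Ground-reflected Gaussian plume: concentration at crosswind offset y and
    height z from a point source of rate q, wind speed u along the plume axis,
    dispersion sigma_y, sigma_z (taken at the receptor's downwind distance),
    and effective release height h."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # image source at -h
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

def estimate_rate(observations, unit_model):
    """Concentration is linear in q, so the least-squares emission-rate estimate
    against a unit-rate forward model is a simple projection."""
    num = sum(c * m for c, m in zip(observations, unit_model))
    den = sum(m * m for m in unit_model)
    return num / den
```

Forward-simulating noiseless observations at a known rate and inverting them recovers that rate exactly, which is a useful sanity check before adding the statistical error model.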
9

Goto, Isao. « Word Reordering for Statistical Machine Translation via Modeling Structural Differences between Languages ». 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/189374.

Full text
Abstract:
Full text replaced on 2015-05-27.
Kyoto University, Doctor of Informatics, degree no. 18481. Department of Intelligence Science and Technology, Graduate School of Informatics.
Examination committee: Prof. Sadao Kurohashi, Prof. Katsumi Tanaka, Prof. Tatsuya Kawahara.
10

Konečný, Zdeněk. « Statistická analýza složených rozdělení ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-229488.

Full text
Abstract:
The probability distribution of a random variable created by summing a random number of independent and identically distributed random variables is called a compound probability distribution. This work describes compound distributions and the calculation of their characteristics. In particular, the thesis focuses on the special case of a compound distribution in which each summand follows the log-normal (LN) distribution and their number follows the negative binomial (NB) distribution. Some approaches to estimating the parameters of the LN and NB distributions are also described. Further, the impact of these estimates on the resulting compound distribution is analyzed.
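The compound LN/NB model described above is straightforward to simulate, which gives a quick Monte Carlo check of its mean via Wald's identity E[S] = E[N]·E[X]. A sketch using only the standard library (the NegBin draw uses the sum-of-geometrics representation; parameter values are illustrative):

```python
import math
import random

def negbin_failures(r, p, rng):
    """NegBin(r, p) draw (number of failures before the r-th success), as a
    sum of r geometric variables via inverse-CDF sampling."""
    total = 0
    for _ in range(r):
        u = 1.0 - rng.random()          # u in (0, 1]
        total += int(math.log(u) / math.log(1.0 - p))
    return total

def compound_ln_nb(r, p, mu, sigma, rng):
    """One draw of S = X_1 + ... + X_N with N ~ NegBin(r, p) and
    X_i ~ LogNormal(mu, sigma), all independent."""
    n = negbin_failures(r, p, rng)
    return sum(rng.lognormvariate(mu, sigma) for _ in range(n))
```

With E[N] = r(1-p)/p and E[X] = exp(mu + sigma^2/2), the empirical mean of many draws should match r(1-p)/p · exp(mu + sigma^2/2).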
11

Roininen, L. (Lassi). « Discretisation-invariant and computationally efficient correlation priors for Bayesian inversion ». Doctoral thesis, University of Oulu, 2015. http://urn.fi/urn:isbn:9789526207544.

Full text
Abstract:
We are interested in studying Gaussian Markov random fields as correlation priors for Bayesian inversion. We construct the correlation priors to be discretisation-invariant, which means, loosely speaking, that the discrete priors converge to continuous priors at the discretisation limit. We construct the priors with stochastic partial differential equations, which guarantees computational efficiency via sparse matrix approximations. The stationary correlation priors have a clear statistical interpretation through the autocorrelation function. We also consider how to build a structural model of an unknown object with anisotropic and inhomogeneous Gaussian Markov random fields. Finally, we consider these fields on unstructured meshes, which are needed on complex domains. The publications in this thesis contain fundamental mathematical and computational results on correlation priors. We consider one application in this thesis, electrical impedance tomography. These fundamental results and this application provide a platform for engineers and researchers to use correlation priors in other inverse problem applications.
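A one-dimensional toy version of such a prior: discretising the SPDE (κ² − d²/dx²) u = white noise on a regular grid gives a sparse (tridiagonal) precision matrix, and samples can be drawn through its Cholesky factor. This is a generic illustration of the construction, not the thesis's own code; grid size and parameters are arbitrary.

```python
import numpy as np

def gmrf_precision(n, kappa=1.0, tau=1.0, h=1.0):
    """Sparse (tridiagonal) precision matrix of a 1-D Gaussian Markov random
    field, from a finite-difference discretisation of the SPDE
    (kappa^2 - d^2/dx^2) u = white noise, giving a Matern-type prior."""
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = kappa**2 + 2.0 / h**2
        if i > 0:
            Q[i, i - 1] = -1.0 / h**2
        if i < n - 1:
            Q[i, i + 1] = -1.0 / h**2
    return tau * Q

def sample_gmrf(Q, rng):
    """Draw u ~ N(0, Q^{-1}) using the Cholesky factor of the precision:
    if Q = C C^T and z is standard normal, u = C^{-T} z has covariance Q^{-1}."""
    C = np.linalg.cholesky(Q)
    z = rng.standard_normal(Q.shape[0])
    return np.linalg.solve(C.T, z)
```

The Markov property shows up directly as sparsity: only nearest neighbours interact in Q, which is what makes these priors computationally efficient at scale.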
12

Lee, Se Il. « Statistical thermodynamics of virus assembly ». Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33900.

Full text
Abstract:
Experiments show that MgSO4 salt has a non-monotonic effect, as a function of MgSO4 concentration, on the ejection of DNA from bacteriophage lambda. There is a concentration, N0, at which the minimum amount of DNA is ejected; at lower or higher concentrations, more DNA is ejected. We propose that this non-monotonic behavior is due to the overcharging of DNA at high concentrations of Mg²⁺ counterions. As the Mg²⁺ concentration increases from zero, the net charge of ejected DNA changes its sign from negative to positive. N0 corresponds to the concentration at which the DNA is neutral. Our theory fits the experimental data well. The DNA-DNA electrostatic attraction is found to be -0.004 kBT/nucleotide. Simulations of the DNA-DNA interaction of a hexagonal DNA bundle support our theory; they also show the non-monotonic DNA-DNA interaction and the reentrant behavior of DNA condensation by divalent counterions. Three problems in understanding capsid assembly for a retrovirus are studied. First, the way in which the viral membrane affects the structure of the in vivo assembled HIV-1 capsid is studied. We show that conical and cylindrical capsids have similar energy at high surface tension of the viral membrane, which leads to the various shapes of HIV-1 capsids. Second, the problem of RNA genome packaging inside spherical viruses is studied using RNA condensation theory. For weak adsorption strength of the capsid protein, most RNA genomes are located at the center of the capsid; for strong adsorption strength, RNA genomes peak near the capsid surface and the amount of RNA packaged is proportional to the capsid area instead of its volume. Theory fits experimental data reasonably well. Third, the condensation of RNA molecules by the nucleocapsid (NC) protein is studied. The interaction between RNA molecules and NC proteins is important for the reverse transcription of viral RNA, which relates to viral infectivity. For strong adsorption strength of the NC protein, there is a screening effect by the RNA molecules around a single NC protein.
13

Klempner, Scott. « Statistical modeling of radiometric error propagation in support of hyperspectral imaging inversion and optimized ground sensor network design / ». Online version of thesis, 2008. http://hdl.handle.net/1850/7859.

Full text
14

Ars, Sébastien. « Caractérisation des émissions de méthane à l'échelle locale à l'aide d'une méthode d'inversion statistique basée sur un modèle gaussien paramétré avec les données d'un gaz traceur ». Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV030.

Full text
15

Cheng, Ching-Chung. « Investigations into Green's function as inversion-free solution of the Kriging equation, with Geodetic applications ». Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1095792962.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains ix, 125 p.; also includes graphics (some col.). Includes bibliographical references (p. 101-103).
16

Han, Bin. « Gamma positivity in enumerative combinatorics ». Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1115/document.

Full text
Abstract:
The gamma positivity of a combinatorial sequence unifies both the unimodality and the symmetry of that sequence. Finding new families of objects whose enumerative polynomials are gamma-positive is a challenging and important topic in combinatorics and geometry. It has received considerable attention in recent times because of Gal's conjecture, which asserts that the gamma-vector has nonnegative entries for any flag simple polytope. Often, the h-polynomial of simplicial polytopes of combinatorial significance can be given as a generating function over a related set of combinatorial objects with respect to some statistic, such as the number of descents, whose enumerative polynomials over permutations are the Eulerian polynomials. This work deals with the gamma properties of several enumerative polynomials of permutations, such as the Eulerian polynomials and the Narayana polynomials. This thesis contains five chapters.
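The gamma expansion mentioned here can be computed mechanically for small cases; the sketch below (illustrative, not from the thesis) counts descents over permutations and peels off the gamma coefficients:

```python
from itertools import permutations
from math import comb

def eulerian_coeffs(n):
    """Coefficients of the Eulerian polynomial A_n(t), counted by descents."""
    a = [0] * n
    for p in permutations(range(n)):
        a[sum(p[i] > p[i + 1] for i in range(n - 1))] += 1
    return a

def gamma_vector(coeffs):
    """Gamma expansion of a palindromic polynomial of degree d:
    sum_k gamma_k * t^k * (1 + t)^(d - 2k), extracted by successive peeling."""
    c, d = list(coeffs), len(coeffs) - 1
    gamma = []
    for k in range(d // 2 + 1):
        g = c[k]
        gamma.append(g)
        for j in range(d - 2 * k + 1):  # subtract g * t^k * (1+t)^(d-2k)
            c[k + j] -= g * comb(d - 2 * k, j)
    return gamma
```

For instance A_4(t) = 1 + 11t + 11t^2 + t^3 = (1+t)^3 + 8t(1+t), so its gamma vector is (1, 8); the computed gamma coefficients stay nonnegative for all small n, as gamma positivity asserts.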
17

Krometis, Justin. « A Bayesian Approach to Estimating Background Flows from a Passive Scalar ». Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/83783.

Texte intégral
Résumé :
We consider the statistical inverse problem of estimating a background flow field (e.g., of air or water) from the partial and noisy observation of a passive scalar (e.g., the concentration of a pollutant). Here the unknown is a vector field that is specified by a large or infinite number of degrees of freedom. We show that the inverse problem is ill-posed, i.e., there may be many or no background flows that match a given set of observations. We therefore adopt a Bayesian approach, incorporating prior knowledge of background flows and models of the observation error to develop probabilistic estimates of the fluid flow. In doing so, we leverage frameworks developed in recent years for infinite-dimensional Bayesian inference. We provide conditions under which the inference is consistent, i.e., the posterior measure converges to a Dirac measure on the true background flow as the number of observations of the solute concentration grows large. We also define several computationally efficient algorithms adapted to the problem. One is an adjoint method for computation of the gradient of the log-likelihood, a key ingredient in many numerical methods. A second is a particle method that allows direct computation of point observations of the solute concentration, leveraging the structure of the inverse problem to avoid approximation of the full infinite-dimensional scalar field. Finally, we identify two interesting example problems with very different posterior structures, which we use to conduct a large-scale benchmark of the convergence of several Markov chain Monte Carlo methods that have been developed in recent years for infinite-dimensional settings.
Ph. D.
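One of the dimension-robust MCMC samplers benchmarked in this line of work is the preconditioned Crank-Nicolson (pCN) method; a toy 1-D sketch follows (the Gaussian prior, the single observation and the step size are illustrative assumptions, not the thesis setup):

```python
import numpy as np

# Toy preconditioned Crank-Nicolson (pCN) sampler on an invented 1-D problem:
# prior u ~ N(0, 1), one observation y = u + N(0, 1) noise, with y = 1.
# The exact posterior is then N(0.5, 0.5), which lets us check the chain.
rng = np.random.default_rng(1)
y, noise_var = 1.0, 1.0
phi = lambda u: 0.5 * (y - u) ** 2 / noise_var  # negative log-likelihood

beta, u, samples = 0.5, 0.0, []
for _ in range(20000):
    # pCN proposal: prior-preserving autoregressive move
    u_prop = np.sqrt(1.0 - beta ** 2) * u + beta * rng.standard_normal()
    # pCN acceptance probability depends only on the likelihood
    if rng.random() < np.exp(phi(u) - phi(u_prop)):
        u = u_prop
    samples.append(u)

post_mean = np.mean(samples[2000:])  # should approach the exact value 0.5
```

The key property, which carries over to function space, is that the proposal leaves the Gaussian prior invariant, so the acceptance ratio involves only the likelihood and does not degenerate as the discretization is refined.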
18

Line, Michael R., Kevin B. Stevenson, Jacob Bean, Jean-Michel Desert, Jonathan J. Fortney, Laura Kreidberg, Nikku Madhusudhan, Adam P. Showman et Hannah Diamond-Lowe. « NO THERMAL INVERSION AND A SOLAR WATER ABUNDANCE FOR THE HOT JUPITER HD 209458B FROM HST/WFC3 SPECTROSCOPY ». IOP PUBLISHING LTD, 2016. http://hdl.handle.net/10150/622434.

Texte intégral
Résumé :
The nature of the thermal structure of hot Jupiter atmospheres is one of the key questions raised by the characterization of transiting exoplanets over the past decade. There have been claims that many hot Jupiters exhibit atmospheric thermal inversions. However, these claims have been based on broadband photometry rather than the unambiguous identification of emission features with spectroscopy, and the chemical species that could cause the thermal inversions by absorbing stellar irradiation at high altitudes have not been identified despite extensive theoretical and observational effort. Here we present high-precision Hubble Space Telescope WFC3 observations of the dayside thermal emission spectrum of the hot Jupiter HD 209458b, which was the first exoplanet suggested to have a thermal inversion. In contrast to previous results for this planet, our observations detect water in absorption at 6.2 sigma confidence. When combined with Spitzer photometry, the data are indicative of a monotonically decreasing temperature with pressure over the range of 1-0.001 bars at 7.7 sigma confidence. We test the robustness of our results by exploring a variety of model assumptions, including the temperature profile parameterization, presence of a cloud, and choice of Spitzer data reduction. We also introduce a new analysis method to determine the elemental abundances from the spectrally retrieved mixing ratios with thermochemical self-consistency and find plausible abundances consistent with solar metallicity (0.06-10x solar) and carbon-to-oxygen ratios less than unity. This work suggests that high-precision spectrophotometric results are required to robustly infer thermal structures and compositions of extrasolar planet atmospheres and to perform comparative exoplanetology.
19

Saers, Markus. « Translation as Linear Transduction : Models and Algorithms for Efficient Learning in Statistical Machine Translation ». Doctoral thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-135704.

Texte intégral
Résumé :
Automatic translation has seen tremendous progress in recent years, mainly thanks to statistical methods applied to large parallel corpora. Transductions represent a principled approach to modeling translation, but existing transduction classes are either not expressive enough to capture structural regularities between natural languages or too complex to support efficient statistical induction on a large scale. A common approach is to severely prune search over a relatively unrestricted space of transduction grammars. These restrictions are often applied at different stages in a pipeline, with the obvious drawback of committing to irrevocable decisions that should not have been made. In this thesis we will instead restrict the space of transduction grammars to a space that is less expressive, but can be efficiently searched. First, the class of linear transductions is defined and characterized. They are generated by linear transduction grammars, which represent the natural bilingual case of linear grammars, as well as the natural linear case of inversion transduction grammars (and higher order syntax-directed transduction grammars). They are recognized by zipper finite-state transducers, which are equivalent to finite-state automata with four tapes. By allowing this extra dimensionality, linear transductions can represent alignments that finite-state transductions cannot, and by keeping the mechanism free of auxiliary storage, they become much more efficient than inversion transductions. Secondly, we present an algorithm for parsing with linear transduction grammars that allows pruning. The pruning scheme imposes no restrictions a priori, but guides the search to potentially interesting parts of the search space in an informed and dynamic way. Being able to parse efficiently allows learning of stochastic linear transduction grammars through expectation maximization. 
All the above work would be for naught if linear transductions were too poor a reflection of the actual transduction between natural languages. We test this empirically by building systems based on the alignments imposed by the learned grammars. The conclusion is that stochastic linear inversion transduction grammars learned from observed data stand up well to the state of the art.
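A concrete way to see the expressiveness boundary of inversion transductions: the alignments representable by an inversion transduction grammar correspond to separable permutations, i.e. those avoiding the patterns 2413 and 3142. A brute-force pattern check (illustrative only, quartic time, fine for short sentences) makes this tangible:

```python
from itertools import combinations

# Check whether a reordering (a permutation of target positions) avoids the
# two "inside-out" patterns 2413 and 3142 that ITG alignments cannot express.
def itg_representable(perm):
    for idx in combinations(range(len(perm)), 4):
        vals = [perm[i] for i in idx]
        rank = sorted(vals)
        pattern = tuple(rank.index(v) + 1 for v in vals)
        if pattern in {(2, 4, 1, 3), (3, 1, 4, 2)}:
            return False
    return True
```

The identity and nested swaps pass, while the length-4 "inside-out" reorderings themselves fail, which is exactly the gap that motivates studying restricted transduction classes with tractable search.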
20

Ben, Youssef Atef. « Contrôle de têtes parlantes par inversion acoustico-articulatoire pour l’apprentissage et la réhabilitation du langage ». Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENT088/document.

Texte intégral
Résumé :
Les sons de parole peuvent être complétés par l'affichage des articulateurs sur un écran d'ordinateur pour produire de la parole augmentée, un signal potentiellement utile dans tous les cas où le son lui-même peut être difficile à comprendre, pour des raisons physiques ou perceptuelles. Dans cette thèse, nous présentons un système appelé retour articulatoire visuel, dans lequel les articulateurs visibles et non visibles d'une tête parlante sont contrôlés à partir de la voix du locuteur. La motivation de cette thèse était de développer un tel système qui pourrait être appliqué à l'aide à l'apprentissage de la prononciation pour les langues étrangères, ou dans le domaine de l'orthophonie. Nous avons basé notre approche de ce problème d'inversion sur des modèles statistiques construits à partir de données acoustiques et articulatoires enregistrées sur un locuteur français à l'aide d'un articulographe électromagnétique (EMA). Notre approche avec les modèles de Markov cachés (HMMs) combine des techniques de reconnaissance automatique de la parole et de synthèse articulatoire pour estimer les trajectoires articulatoires à partir du signal acoustique. D'un autre côté, les modèles de mélanges gaussiens (GMMs) estiment directement les trajectoires articulatoires à partir du signal acoustique sans faire intervenir d'information phonétique. Nous avons basé notre évaluation des améliorations apportées à ces modèles sur différents critères : l'erreur quadratique moyenne (RMSE) entre les coordonnées EMA originales et reconstruites, le coefficient de corrélation de Pearson, l'affichage des espaces et des trajectoires articulatoires, aussi bien que les taux de reconnaissance acoustique et articulatoire. 
Les expériences montrent que l'utilisation d'états liés et de multi-gaussiennes pour les états des HMMs acoustiques améliore l'étage de reconnaissance acoustique des phones, et que la minimisation de l'erreur générée (MGE) dans la phase d'apprentissage des HMMs articulatoires donne des résultats plus précis par rapport à l'utilisation du critère plus conventionnel de maximisation de vraisemblance (MLE). En outre, l'utilisation du critère MLE au niveau du mapping direct de l'acoustique vers l'articulatoire par GMMs est plus efficace que le critère de minimisation de l'erreur quadratique moyenne (MMSE). Nous constatons également que le système d'inversion par HMMs est plus précis que celui basé sur les GMMs. Par ailleurs, des expériences utilisant les mêmes méthodes statistiques et les mêmes données ont montré que le problème de reconstruction des mouvements de la langue à partir des mouvements du visage et des lèvres ne peut pas être résolu dans le cas général, et est impossible pour certaines classes phonétiques. Afin de généraliser notre système basé sur un locuteur unique à un système d'inversion de parole multi-locuteur, nous avons implémenté une méthode d'adaptation du locuteur basée sur la maximisation de la vraisemblance par régression linéaire (MLLR). Dans cette méthode MLLR, la transformation basée sur la régression linéaire qui adapte les HMMs acoustiques originaux à ceux du nouveau locuteur est calculée de manière à maximiser la vraisemblance des données d'adaptation. Finalement, cet étage d'adaptation du locuteur a été évalué en utilisant un système de reconnaissance automatique des classes phonétiques de l'articulation, dans la mesure où les données articulatoires originales du nouveau locuteur n'existent pas. Finalement, en utilisant cette procédure d'adaptation, nous avons développé un démonstrateur complet de retour articulatoire visuel, qui peut être utilisé par un locuteur quelconque. 
Ce système devra être évalué de manière perceptive dans des conditions réalistes
Speech sounds may be complemented by displaying speech articulator shapes on a computer screen, hence producing augmented speech, a signal that is potentially useful in all instances where the sound itself might be difficult to understand, for physical or perceptual reasons. In this thesis, we introduce a system called visual articulatory feedback, in which the visible and hidden articulators of a talking head are controlled from the speaker's speech sound. The motivation of this research was to develop such a system that could be applied to Computer Aided Pronunciation Training (CAPT) for the learning of foreign languages, or in the domain of speech therapy. We have based our approach to this mapping problem on statistical models built from acoustic and articulatory data. In this thesis we have developed and evaluated two statistical learning methods trained on parallel synchronous acoustic and articulatory data recorded on a French speaker by means of an electromagnetic articulograph. Our hidden Markov model (HMM) approach combines HMM-based acoustic recognition and HMM-based articulatory synthesis techniques to estimate the articulatory trajectories from the acoustic signal. Gaussian mixture models (GMMs) estimate articulatory features directly from the acoustic ones. We have based our evaluation of the improvements brought to these models on several criteria: the root mean square error between the original and recovered EMA coordinates, the Pearson product-moment correlation coefficient, displays of the articulatory spaces and articulatory trajectories, as well as some acoustic or articulatory recognition rates. Experiments indicate that the use of state tying and multiple Gaussians per state in the acoustic HMMs improves the recognition stage, and that minimum generation error (MGE) updating of the articulatory HMM parameters results in a more accurate inversion than the conventional maximum likelihood estimation (MLE) training. In addition, the GMM mapping using the MLE criterion is more efficient than using the minimum mean square error (MMSE) criterion. In conclusion, we have found that the HMM inversion system is more accurate than the GMM one. Besides, experiments using the same statistical methods and data have shown that the face-to-tongue inversion problem, i.e. predicting tongue shapes from face and lip shapes, cannot be solved in a general way, and that it is impossible for some phonetic classes. In order to extend our single-speaker system to a multi-speaker speech inversion system, we have implemented a speaker adaptation method based on maximum likelihood linear regression (MLLR). In MLLR, a linear regression-based transform that adapts the original acoustic HMMs to those of the new speaker is calculated so as to maximise the likelihood of the adaptation data. This speaker adaptation stage has been evaluated using an articulatory phonetic recognition system, as no original articulatory data are available for the new speakers. Finally, using this adaptation procedure, we have developed a complete visual articulatory feedback demonstrator, which can work for any speaker. This system should be assessed by perceptual tests in realistic conditions.
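The GMM-based acoustic-to-articulatory mapping can be sketched as regression under a joint Gaussian mixture; the variant below uses the MMSE conditional expectation E[y|x] (the one-dimensional synthetic frames are invented stand-ins for real acoustic and articulatory features, not the thesis data):

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Sketch of GMM mapping (MMSE variant): fit a joint mixture on stacked
# acoustic (x) and articulatory (y) frames, then recover y from x as the
# conditional expectation E[y | x] under the fitted joint density.
rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 1))
y = 2.0 * x + 0.1 * rng.normal(size=(2000, 1))  # synthetic stand-in data

gmm = GaussianMixture(n_components=4, covariance_type="full",
                      random_state=0).fit(np.hstack([x, y]))

def mmse_map(x0):
    """MMSE estimate E[y | x = x0] under the fitted joint GMM."""
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    # component responsibilities given the acoustic observation alone
    px = np.array([w[k] * norm.pdf(x0, mu[k, 0], np.sqrt(cov[k, 0, 0]))
                   for k in range(len(w))])
    px = px / px.sum()
    # per-component conditional means (local linear regressions)
    cond = [mu[k, 1] + cov[k, 1, 0] / cov[k, 0, 0] * (x0 - mu[k, 0])
            for k in range(len(w))]
    return float(np.dot(px, cond))
```

On this synthetic data the mapping recovers the underlying y = 2x relation; the thesis compares such MMSE mapping against MLE-based trajectory generation.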
21

Baker, John C. III. « Application of the Fisher Dimer Model to DNA Condensation ». VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/4791.

Texte intégral
Résumé :
This paper considers the statistical mechanics of the occupation of the edge of a single helix of DNA by simple polymers. Using Fisher's exact closed-form solution for dimers on a two-dimensional lattice, a one-dimensional lattice is created mathematically that is occupied by dimers, monomers, and holes. The free energy, entropy, average occupation, and total charge on the lattice are found through the usual statistical methods. The results demonstrate the charge inversion required for a DNA helix to undergo DNA condensation.
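The one-dimensional bookkeeping of monomers, dimers and holes admits a simple recursion for the partition function, Z_n = (h + m) Z_{n-1} + d Z_{n-2}; a toy sketch follows (the fugacities m, h, d are illustrative, not values from the thesis):

```python
# Each site of a 1-D chain holds a hole (weight h) or a monomer (weight m),
# or pairs with its right neighbour as a dimer (weight d). Conditioning on
# the last site gives Z_n = (h + m) * Z_{n-1} + d * Z_{n-2}.
def partition(n, m=1.0, h=1.0, d=1.0):
    z_prev, z = 0.0, 1.0  # Z_{-1} = 0, Z_0 = 1
    for _ in range(n):
        z_prev, z = z, (h + m) * z + d * z_prev
    return z
```

With all fugacities set to 1 this simply counts configurations: a 4-site chain admits 29 of them. Free energy, entropy and average occupations then follow from logarithmic derivatives of Z with respect to the fugacities.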
22

Harmouzi, Ouassima. « Reconnaissance détaillée de la partie nord-est du Bassin de Saïss (Maroc) : interprétation de sondages électriques verticaux par combinaison des méthodes statistique, géostatistique et d'inversion ». Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14030/document.

Texte intégral
Résumé :
La prospection géoélectrique est largement utilisée au Maroc pour des reconnaissances hydrogéologiques. Le but de ce travail est de proposer de nouvelles techniques d’interprétation des sondages électriques verticaux en un temps réduit, et aussi de bien exploiter une base de données de sondages électriques, par l’établissement entre autres d’images 2D horizontales et verticales de l’estimation de la distribution des résistivités électriques apparentes (modélisation géostatistique, inversion, etc.). Dans le but de caractériser électriquement le secteur d’étude (nord-est du Bassin de Saïss), une analyse statistique des résistivités apparentes de sondages électriques verticaux a été réalisée. Cette simple analyse descriptive est suivie par une étude statistique multidimensionnelle : analyse en composantes principales (ACP) et classification hiérarchique ascendante (CHA). (...) Les résultats des analyses statistiques et géostatistiques, complétés par les inversions des sondages moyens par classe, ont mis en évidence la fiabilité de ces techniques pour l’interprétation d’un nombre important de sondages électriques, au lieu de la méthode ordinaire qui se base sur l’inversion des sondages un par un et leur corrélation ultérieure pour construire la structure globale du domaine étudié. Avec les techniques utilisées dans le cadre de ce travail, des résultats très satisfaisants sont obtenus en un temps plus réduit. Les profils étudiés et inversés à l’aide du logiciel RES2Dinv montrent tous les trois grandes structures définies auparavant (Résistant-Conductrice-Résistant) ; par contre, on note des variations intra-formations. De plus, l’organisation spatiale des formations permet de confirmer l’existence de failles cohérentes avec la structure en horst et graben du bassin.
Geoelectrical prospecting is widely used in Morocco for hydrogeological reconnaissance. The purpose of this work is to propose new techniques for interpreting vertical electric soundings in a reduced time, and also to fully exploit a database of stored electrical soundings through the establishment, amongst other things, of horizontal and vertical 2D images estimating the distribution of apparent electrical resistivity (geostatistical modelling, inversion, etc.). In order to characterize the study area (north-east of the Saïss Basin) electrically, a statistical analysis of the apparent resistivities of vertical electric soundings was performed. This simple descriptive analysis is followed by a multivariate statistical analysis (principal component analysis PCA and ascending hierarchical classification HAC). (...) The results of the statistical and geostatistical analyses, supplemented by inversion of the average electric sounding per class, highlight the reliability of these techniques for the interpretation of a large number of electrical soundings, instead of the usual method, which is based on inverting the electrical soundings one by one and correlating them later to build the global structure of the area studied. With the techniques used in this work, very satisfactory results for interpreting vertical electric soundings are obtained in a much reduced time. The profiles studied and inverted using the RES2Dinv software all show the three large structures defined previously (Resistant - Conductive - Resistant); on the other hand, there are variations within the same formation. In addition, the spatial organization of the formations makes it possible to confirm the existence of faults coherent with the horst-and-graben structure of the basin.
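The PCA-plus-hierarchical-classification step can be sketched on synthetic sounding curves (the two base curves below are invented, not Saïss Basin data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

# Sketch of the PCA + ascending hierarchical classification workflow:
# each row is one vertical electric sounding (resistivity vs. spacing).
rng = np.random.default_rng(0)
base_a = np.linspace(2.0, 1.0, 10)   # hypothetical "conductive at depth" curve
base_b = np.linspace(1.0, 2.5, 10)   # hypothetical "resistant at depth" curve
soundings = np.vstack([base_a + 0.05 * rng.normal(size=(20, 10)),
                       base_b + 0.05 * rng.normal(size=(20, 10))])

scores = PCA(n_components=2).fit_transform(soundings)           # PCA step
labels = AgglomerativeClustering(n_clusters=2,
                                 linkage="ward").fit_predict(scores)  # HAC step
```

Each cluster of soundings can then be summarized by an average curve and inverted once, instead of inverting every sounding individually.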
23

Bruned, Vianney. « Analyse statistique et interprétation automatique de données diagraphiques pétrolières différées à l’aide du calcul haute performance ». Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS064.

Texte intégral
Résumé :
Dans cette thèse, on s'intéresse à l’automatisation de l’identification et de la caractérisation de strates géologiques à l’aide des diagraphies de puits. Au sein d’un puits, on détermine les strates géologiques grâce à la segmentation des diagraphies assimilables à des séries temporelles multivariées. L’identification des strates de différents puits d’un même champ pétrolier nécessite des méthodes de corrélation de séries temporelles. On propose une nouvelle méthode globale de corrélation de puits utilisant les méthodes d’alignement multiple de séquences issues de la bio-informatique. La détermination de la composition minéralogique et de la proportion des fluides au sein d’une formation géologique se traduit en un problème inverse mal posé. Les méthodes classiques actuelles sont basées sur des choix d’experts consistant à sélectionner une combinaison de minéraux pour une strate donnée. En raison d’un modèle à la vraisemblance non calculable, une approche bayésienne approximée (ABC) aidée d’un algorithme de classification basé sur la densité permet de caractériser la composition minéralogique de la couche géologique. La classification est une étape nécessaire afin de s’affranchir du problème d’identifiabilité des minéraux. Enfin, le déroulement de ces méthodes est testé sur une étude de cas
In this thesis, we investigate the automation of the identification and characterization of geological strata using well logs. For a single well, geological strata are determined through the segmentation of the logs, which are comparable to multivariate time series. The identification of strata across different wells from the same field requires correlation methods for time series. We propose a new global method of well correlation using multiple sequence alignment algorithms from bioinformatics. The determination of the mineralogical composition and the percentage of fluids inside a geological stratum results in an ill-posed inverse problem. Current methods are based on experts' choices: the selection of a subset of minerals for a given stratum. Because the model has a non-computable likelihood, an approximate Bayesian computation (ABC) method assisted by a density-based clustering algorithm can characterize the mineral composition of the geological layer. The classification step is necessary to deal with the identifiability issue of the minerals. Finally, the workflow is tested on a case study.
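The ABC idea of replacing an intractable likelihood by simulation can be sketched with a toy forward model (a binomial simulator standing in for the mineralogical model; all numbers below are invented):

```python
import numpy as np

# Minimal ABC rejection sampler: candidates are drawn from the prior, pushed
# through the simulator, and kept only when the simulated summary statistic
# falls within a tolerance of the observed one -- no likelihood is evaluated.
rng = np.random.default_rng(0)
theta_true = 0.7
observed = rng.binomial(100, theta_true)   # observed summary statistic

accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 1.0)          # draw a candidate from the prior
    simulated = rng.binomial(100, theta)   # simulate instead of evaluating
    if abs(simulated - observed) <= 2:     # ABC tolerance
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))  # approximate posterior mean
```

The accepted candidates approximate the posterior; in the thesis workflow they would additionally be grouped by a density-based clustering step to separate non-identifiable mineral combinations.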
24

Dzharayan, Gayk, et Elena Voronova. « Pricing of exotic options under the Kou model by using the Laplace transform ». Thesis, Högskolan i Halmstad, Tillämpad matematik och fysik (MPE-lab), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-16023.

Texte intégral
Résumé :
In this thesis we present the Laplace transform method of option pricing and its realization, and compare it with other methods. We consider vanilla and exotic options, but pay particular attention to two-asset correlation options. As the model of the underlying stock price development, we chose one of the modifications of the Black-Scholes model: the Kou jump-diffusion model with a double exponential distribution of jumps. The computations were done by the Laplace transform and its inversion by the Euler method. We present in detail the proofs for finding the Laplace transforms of put and call two-asset correlation options, the calculations of the moment generating function of the jump-diffusion process by the Lévy-Khintchine formula in the cases without jumps and with independent jumps, and the direct calculation of the risk-neutral expectation by solving a double integral. Our work also contains the program code for two-asset correlation call and put options. We show the application of our program to real data. As a result we see how our model performs on the NASDAQ OMX Stockholm Market, considering two-asset correlation options in three cases based on the stock prices of Handelsbanken, Ericsson and the index OMXS30.
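The Euler method for numerically inverting a Laplace transform (here in the Abate-Whitt form, with the conventional defaults A = 18.4 and binomially weighted Euler averaging; not the thesis's own code) can be sketched as:

```python
from math import comb, exp, pi

def euler_inversion(F, t, M=15, A=18.4):
    """Invert a Laplace transform F numerically at t > 0 with the Euler
    (Abate-Whitt) algorithm: an alternating series from the Bromwich
    integral, accelerated by Euler averaging of partial sums."""
    def partial(n):
        s = 0.5 * F(A / (2 * t) + 0j).real
        for k in range(1, n + 1):
            s += (-1) ** k * F((A + 2j * k * pi) / (2 * t)).real
        return s
    # binomially weighted average of the partial sums S_M .. S_2M
    avg = sum(comb(M, j) * 2.0 ** -M * partial(M + j) for j in range(M + 1))
    return exp(A / 2) / t * avg
```

Applied to F(s) = 1/(s+1) it recovers f(t) = e^(-t) to several digits; in the thesis setting F would be the transform of the option price under the Kou model.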
25

Marchetto, Enrico. « Automatic Speaker Recognition and Characterization by means of Robust Vocal Source Features ». Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3427390.

Texte intégral
Résumé :
Automatic Speaker Recognition is a wide research field, which encompasses many topics: signal processing, human vocal and auditory physiology, statistical modelling, cognitive sciences, and so on. The study of these techniques started about thirty years ago and, since then, the improvement has been dramatic. Nonetheless the field still poses open issues and many active research centers around the world are working towards more reliable and better performing systems. This thesis documents a Philosophiae Doctor project funded by the privately held company RT - Radio Trevisan Elettronica Industriale S.p.A. The title of the fellowship is "Automatic speaker recognition with applications to security and intelligence". Part of the work was carried out during a six-month visit to the Speech, Music and Hearing Department of the KTH Royal Institute of Technology, Stockholm. Speaker Recognition research develops techniques to automatically associate a given human voice with a previously recorded version of it. Speaker Recognition is usually further defined as Speaker Identification or Speaker Verification; in the former the identity of a voice has to be found among a (possibly high) number of speaker voices, while in the latter the system is provided with both a voice and a claimed identity, and the association has to be verified as a true/false statement. The recognition system also provides a confidence score about the found results. The first Part of the thesis reviews the state of the art of Speaker Recognition research. The main components of a recognition system are described: audio feature extraction, statistical modelling, and performance assessment. Over the years the research community has developed a number of Audio Features, used to describe the information carried by the vocal signal in a compact and deterministic way. 
In every automatic recognition application, including speech or language recognition, the feature extraction process is the first step, in charge of substantially compressing the size of the input data without losing any important information. The choice of the features best fitted to a specific application, and their tuning, are crucial to obtain satisfactory recognition results; moreover the definition of innovative features is a lively research direction, because it is generally recognized that existing features are still far from exploiting the whole information load carried by the vocal signal. Some audio features have proved over the years to perform better than others; two of them are described in Part I: Mel-Frequency Cepstral Coefficients and Linear Prediction Coefficients. More refined and experimental features are also introduced, and will be explained in Part III. Statistical modelling is introduced, particularly by discussing the structure of Gaussian Mixture Models and their training through the EM algorithm; specific modelling techniques for recognition, such as the Universal Background Model, are described. Scoring is the last phase of a Speaker Recognition process and involves a number of normalizations; it compensates for different recording conditions or model issues. Part I continues by presenting a number of audio databases that are commonly used in the literature as benchmarks to compare results or recognition systems, in particular TIMIT and the NIST Speaker Recognition Evaluation - SRE 2004. A recognition prototype system has been built during the PhD project, and it is detailed in Part II. The first Chapter describes the proposed application, related to intelligence and security. The application fulfils specific requirements of the Authorities when investigations involve phone wiretapping or environmental interceptions. 
In these cases the Authorities have to listen to a large amount of recordings, most of which are not related to the investigations. The application idea is to automatically detect and label speakers, giving the possibility to search for a specific speaker through the recording collection. This can avoid time wasting, resulting in an economical advantage. Many difficulties arise from the phone lines, which are known to degrade the speech signal and cause a reduction in recognition performance; the main issues are the narrow audio bandwidth, additive noises and convolutional noise, the last resulting in phase distortion. The second Chapter in Part II describes in detail the developed Speaker Recognition system; a number of design choices are discussed. During the development the research scope of the system has been crucial: a lot of effort has been put into obtaining a system with good performance that is still easily and deeply modifiable. The assessment of results on different databases posed further challenges, which have been solved with a unified interface to the databases. The fundamental components of a speaker recognition system have been developed, along with some speed-up improvements. Lastly, the whole software can run on a cluster computer without any reconfiguration, a crucial characteristic in order to assess performance on big databases in reasonable times. During the three-year project some works related to Speaker Recognition, although not directly involved with it, have been developed. These developments are described in Part II as extensions of the prototype. First, a Voice Activity Detector suitable for noisy recordings is explained. The first step of feature extraction is to find and select, from a given record, only the segments containing voice; this is not a trivial task when the record is noisy and a simple "energy threshold" approach fails. 
The developed VAD is based on advanced features, computed from Wavelet Transforms, which are further processed using an adaptive threshold. A second application developed is Speaker Diarization: it permits the automatic segmentation of an audio recording when it contains different speakers. The outputs of the diarization are a segmentation and a speaker label for each segment, resulting in a "who speaks when" answer. The third and last collateral work is a Noise Reduction system for voice applications, developed on a hardware DSP. The noise reduction algorithm adaptively detects the noise and reduces it, keeping only the voice; it works in real time using only a slight portion of the DSP computing power. Lastly, Part III discusses innovative audio features, which are the main novel contribution of this thesis. The features are obtained from the glottal flow, therefore the first Chapter in this Part describes the anatomy of the vocal folds and of the vocal tract. The working principle of the phonation apparatus is described and the importance of the vocal fold physics is pointed out. The glottal flow is an input air flow for the vocal tract, which acts as a filter; an open-source toolkit for the inversion of the vocal tract filter is introduced: it permits the estimation of the glottal flow from speech records. A description of some methods used to give a numerical characterization of the glottal flow is given. In the subsequent Chapter, a definition of the novel glottal features is presented. The glottal flow estimates are not always reliable, so a first step detects and deletes unlikely flows. A numerical procedure then groups and sorts the flow estimates, preparing them for statistical modelling. Performance measures are then discussed, comparing the novel features against the standard ones, applied on the reference databases TIMIT and SRE 2004. A Chapter is dedicated to a different research work, related to glottal flow characterization. 
A physical model of the vocal folds is presented, with a number of control rules able to describe the vocal fold dynamics. The rules permit the translation of a specific pharyngeal muscular set-up into the mechanical parameters of the model, which result in a specific glottal flow (obtained after a computer simulation of the model). The so-called Inverse Problem is defined in this way: given a glottal flow, find the muscular set-up which, used to drive a model simulation, obtains the same glottal flow as the given one. The inverse problem presents a number of difficulties, such as the non-uniqueness of the inversion and the sensitivity to slight variations in the input flow. An optimization control technique has been developed and is explained. The final Chapter summarizes the achievements of the thesis. Along with this discussion, a roadmap for future improvements to the features is sketched. In the end, a summary of the articles published and submitted to both conferences and journals is presented.
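The GMM-based identification scheme described in Part I can be sketched in miniature: one mixture model per speaker, scored on a test utterance (the Gaussian "feature frames" below are invented stand-ins for real MFCC vectors, not thesis data):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy GMM speaker identification: train one mixture per speaker on that
# speaker's feature frames, then assign a test utterance to the model with
# the highest average log-likelihood.
rng = np.random.default_rng(0)
frames_a = rng.normal(loc=0.0, scale=1.0, size=(500, 12))
frames_b = rng.normal(loc=2.0, scale=1.0, size=(500, 12))

models = {name: GaussianMixture(n_components=4, random_state=0).fit(frames)
          for name, frames in (("A", frames_a), ("B", frames_b))}

test_frames = rng.normal(loc=2.0, scale=1.0, size=(50, 12))  # from speaker B
scores = {name: m.score(test_frames) for name, m in models.items()}
best = max(scores, key=scores.get)
```

A full system would add a Universal Background Model and score normalization on top of this per-speaker likelihood comparison.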
Automatic Speaker Recognition is a broad research field encompassing many topics: signal processing, vocal and auditory physiology, statistical modelling tools, the study of language, and so on. The study of these techniques began roughly thirty years ago and great improvements have been made since. Nevertheless, the field continues to raise questions, and research groups around the world keep working towards more reliable and better-performing recognition systems. This thesis documents a Philosophiae Doctor project funded by the private company RT - Radio Trevisan Elettronica Industriale S.p.A. The title of the scholarship is "Automatic speaker recognition with applications to security and intelligence". Part of the work took place during a six-month visit to the Speech, Music and Hearing Department of KTH - Royal Institute of Technology, Stockholm. Speaker Recognition research develops technologies to automatically associate a given human voice with a previously recorded version of it. Speaker Recognition is usually better defined in terms of Speaker Verification or Speaker Identification. Identification consists in retrieving the identity of a voice among a (possibly large) number of voices modelled by the system; in Verification, given a voice and an identity, the system is asked to verify the association between the two. Recognition systems also produce a Score attesting to the reliability of the answer. The first Part of the thesis reviews the state of the art in Speaker Recognition.
The main components of a recognition prototype are described: audio feature extraction, statistical modelling and performance evaluation. Over time, the research community has developed a number of acoustic features: techniques for describing the speech signal numerically in a compact, deterministic way. In every recognition application, including Speech or Language Recognition, feature extraction is the first step: its purpose is to reduce drastically the size of the input data without losing any significant information. Choosing the features best suited to a specific application, and tuning them, is crucial for good recognition results; moreover, the definition of new features is an active research field, because the scientific community believes that existing features are still far from exploiting all the information carried by the speech signal. Some features have established themselves over time thanks to their superior performance: Mel-Frequency Cepstral Coefficients and Linear Prediction Coefficients; these features are described in Part I. Statistical modelling is also introduced, explaining the structure of Gaussian Mixture Models and the related training algorithm (Expectation-Maximization). Specific modelling techniques, such as the Universal Background Model, complete the description of the statistical tools used for recognition. Scoring, finally, is the stage in which the recognition system produces its results; it includes several normalisation procedures that compensate, for example, for modelling problems or for the different acoustic conditions under which the audio data were recorded.
Part I then presents some audio databases commonly used in the literature as references for comparing the performance of recognition systems; in particular, TIMIT and NIST Speaker Recognition Evaluation (SRE) 2004 are presented. These databases are suitable for performance evaluation on telephone audio, which is of interest for this thesis; this topic is discussed further in Part II. During the PhD project a prototype recognition system was designed and implemented, discussed in Part II. The first Chapter describes the proposed recognition application: Speaker Recognition technology is applied to telephone lines, with reference to security and intelligence. The application addresses a specific need of the Authorities when investigations involve wiretapping. In such cases the Authorities must listen to large amounts of telephone data, most of which turns out to be useless for the investigation. The application idea consists in automatically identifying and labelling the speakers present in the wiretaps, thus allowing the search for a specific speaker within the collection of recordings. This could reduce wasted time, yielding economic benefits. Audio from telephone lines poses difficulties for automatic recognition, because it significantly degrades the signal and therefore worsens performance. Some problems of telephone audio are generally acknowledged: reduced bandwidth, additive noise and convolutional noise; the latter causes phase distortion, which alters the signal waveform. The second Chapter of Part II describes the developed Speaker Recognition system in detail, discussing the various design choices.
The fundamental components of a recognition system were developed, with some improvements to contain the computational load. During development, the research purpose of the software was considered primary: much effort went into obtaining a system with good performance that nevertheless remained easy to modify, even in depth. The need (and opportunity) to evaluate the prototype's performance placed further requirements on the development, which were met by adopting a common interface to the various databases. Finally, all the modules of the developed software can be run on a computing cluster (a high-performance computer for parallel computation); this feature of the prototype was crucial to allow a thorough performance evaluation of the software in a reasonable time. During the doctoral project, studies related to Speaker Recognition, but not directly part of it, were also conducted. These developments are described in Part II as extensions of the prototype. First, a Voice Activity Detector suitable for use in the presence of noise is presented. This component is particularly important as the first step of feature extraction: only the audio segments that actually contain speech must be selected and kept. In situations with significant background noise, simple "energy threshold" approaches fail. The implemented detector is based on advanced features, obtained through Wavelet Transforms and further processed with adaptive thresholding. A second application is a prototype for Speaker Diarization, i.e. the automatic labelling of audio recordings containing several speakers.
The result of the procedure is a segmentation of the audio and a series of labels, one per segment; the system provides a "who speaks when" answer. The third and last study alongside Speaker Recognition is the development of a Noise Reduction system on a dedicated DSP hardware platform. The reduction algorithm detects the noise adaptively and reduces it, trying to keep only the speech signal; processing happens in real time while using only a very limited share of the DSP's computing resources. Part III of the thesis finally introduces novel audio features, which are the main innovative contribution of the thesis. These features are obtained from the glottal flow, so the first Chapter of the Part discusses the anatomy of the vocal tract and of the vocal folds. The working principle of phonation and the importance of vocal-fold physics are described. The glottal flow is an input to the vocal tract, which acts as a filter. An open-source software tool for vocal-tract inversion is described: it allows the glottal flow to be estimated from plain voice recordings. Some of the methods used to characterize the glottal flow numerically are then presented. The next Chapter presents the definition of the new glottal features. Glottal-flow estimates are not always reliable, so the first step of feature extraction detects and discards flows judged untrustworthy. A numerical procedure then groups and sorts the flow estimates, preparing them for statistical modelling. The glottal features, applied to Speaker Recognition on the TIMIT and NIST SRE 2004 databases, are compared with the standard features.
The final Chapter of Part III is devoted to a different research effort, still related to glottal-flow characterization. A physical model of the vocal folds is presented, controlled by a set of numerical rules and able to describe vocal-fold dynamics. The rules translate a specific glottal muscle set-up into the mechanical parameters of the model, which lead to a specific glottal flow (obtained through a computer simulation of the model). The so-called Inverse Problem is defined as follows: given a glottal flow, find a muscle set-up that, used to drive the physical model, allows the resynthesis of a glottal signal as similar as possible to the given one. The inverse problem involves several difficulties, such as the non-uniqueness of the inversion and the sensitivity to even small variations in the input flow. An optimization-based control technique was developed and is described. The concluding chapter of the thesis summarizes the results obtained. Alongside this discussion, a roadmap for the further development of the proposed features is presented. Finally, the publications produced are listed.
Styles APA, Harvard, Vancouver, ISO, etc.
26

Martínez, Buixeda Raül. « Hedge Funds : Inferencia del riesgo en un escenario real de estrés severo ». Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/386569.

Texte intégral
Résumé :
The expansion of the hedge fund industry over the present century has been extraordinary, especially until the outbreak of the subprime crisis. According to HFR estimates published by CAIA in 2014, assets under management of single hedge funds and funds of hedge funds grew from 456.43 billion USD in 1999 to 1.87 trillion USD in 2007. The exposure of hedge funds to non-conventional risk factors, with return distributions unusual in the traditional asset-management industry, together with their high degree of opacity, has driven a substantial increase in the technicality of portfolio risk management and a proliferation of poorly rigorous opinions on the subject. The stress scenario provided by the subprime crisis (August 2007 - September 2009) makes it possible to contrast and complete the true risk-return trade-off of the hedge fund universe. In this regard, to infer the risk behaviour of the various hedge strategies under stress scenarios similar to the subprime crisis, this thesis proposes: 1) fitting a mixture of two Normal distributions to the empirical return distributions, estimated by the method of moments, and 2) analysing the behaviour of extreme risk and absolute-return risk. Since the samples linked to the subprime period, which is delimited by the evolution of the components of the TED spread, are relatively small and sometimes "extremely" distributed, unbiased estimation of the sample moments (including the fifth-order central moment) is essential. Moreover, to complete the inference process, it was necessary to start the iterative solution-search process from multiple origins, to discriminate among the solutions obtained using properties of the convex linear combination of the mixtures, and to group the strategies by risk using the K-means methodology.
Finally, the thesis addresses the analysis of hedge-risk dynamics, examining the differences in the behaviour of extreme risk and absolute-return risk between the pre-subprime period (the reference period in the absence of severe stress) and the subprime period.
The hedge fund industry's expansion during the present century has been extraordinary, at least until the outbreak of the subprime crisis. HFR estimates for single hedge funds and funds of hedge funds, published by CAIA in 2014, show 456.43 billion USD in assets under management in 1999 and 1.87 trillion USD in 2007. The hedge funds' exposure to non-conventional risk factors, which have unusual return distributions, together with a high degree of opacity, has caused an increase in the technicality of portfolio risk management and a proliferation of opinions with a very low level of accuracy. The stress scenario of the subprime crisis, covering August 2007 to September 2009, allows contrasting and completing the real risk-return trade-off of the hedge fund universe. In this context, the author proposes fitting a mixture of two Normal distributions by the method of moments, along with an analysis of extreme risk and absolute-return risk, in order to infer the behaviour of the hedge fund strategies in similar stress scenarios. Given that the empirical return samples of the hedge fund strategies during the subprime period are small and sometimes extremely distributed, it is necessary to estimate the first five moments without bias (the average and the central moments of 2nd, 3rd, 4th and 5th order). To complete the inference process, the following steps are also required: 1) starting the search process from different origins, 2) choosing between solutions using properties of the mixture's convex combination, and 3) clustering the strategies according to risk using the K-means methodology. Finally, in the last chapter, the author analyses the different risk dynamics between the pre-subprime and subprime periods.
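The moment-matching step described here — fitting a two-Normal mixture by matching sample moments, restarted from multiple origins — can be sketched as follows. This is a simplified illustration, not the author's estimator: it matches the first five raw moments by least squares, and all function names, the Nelder-Mead choice and the restart counts are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def normal_raw_moments(mu, sigma):
    # first five raw moments of N(mu, sigma^2)
    s2 = sigma ** 2
    return np.array([
        mu,
        mu ** 2 + s2,
        mu ** 3 + 3 * mu * s2,
        mu ** 4 + 6 * mu ** 2 * s2 + 3 * s2 ** 2,
        mu ** 5 + 10 * mu ** 3 * s2 + 15 * mu * s2 ** 2,
    ])

def mixture_raw_moments(p, mu1, s1, mu2, s2):
    # moments of a mixture are the convex combination of component moments
    return p * normal_raw_moments(mu1, s1) + (1 - p) * normal_raw_moments(mu2, s2)

def fit_two_normal_mixture(target_moments, n_starts=20, seed=0):
    """Least-squares moment matching, restarted from multiple random
    origins (as the abstract prescribes); keeps the best local solution."""
    rng = np.random.default_rng(seed)

    def loss(theta):
        return float(np.sum((mixture_raw_moments(*theta) - target_moments) ** 2))

    best = None
    for _ in range(n_starts):
        x0 = [rng.uniform(0.1, 0.9), rng.normal(), rng.uniform(0.5, 2.0),
              rng.normal(), rng.uniform(0.5, 2.0)]
        res = minimize(loss, x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best

# target: moments of an equal mixture of N(-1,1) and N(1,1)
target = mixture_raw_moments(0.5, -1.0, 1.0, 1.0, 1.0)
best = fit_two_normal_mixture(target)
```

In the thesis the moments are estimated without bias from small samples first; here the target moments are taken as given, which keeps the sketch focused on the multi-start search.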
27

Štys, Jiří. « Implementace statistických kompresních metod ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-413295.

Texte intégral
Résumé :
This thesis describes compression based on the Burrows-Wheeler transform. It focuses on each stage of the Burrows-Wheeler algorithm, above all on the global-structure transformations and the entropy coders. Methods such as move-to-front, inversion frequencies and interval coding are described, and the entropy coders covered include Huffman, arithmetic and Rice-Golomb coders. The thesis concludes with tests of the described global-structure transformation methods and entropy coders; the best combinations are compared with the most common compression algorithms.
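The pipeline this abstract describes — a Burrows-Wheeler transform, a second-stage transform such as move-to-front, then an entropy coder — can be sketched for the first two stages. This is a textbook illustration, not the thesis code: the naive sorted-rotation construction is quadratic and only suitable for small inputs, and the sentinel and alphabet handling are simplifying assumptions.

```python
SENTINEL = "\x03"  # unique end-of-string marker, lexicographically smallest

def bwt(s):
    """Burrows-Wheeler transform via sorted rotations (naive, O(n^2 log n))."""
    s += SENTINEL
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def ibwt(r):
    """Invert the transform by repeatedly sorting prefixed columns."""
    table = [""] * len(r)
    for _ in range(len(r)):
        table = sorted(r[i] + table[i] for i in range(len(r)))
    return next(row for row in table if row.endswith(SENTINEL))[:-1]

def move_to_front(s, alphabet):
    """Second-stage transform: recently seen symbols get small indices, so
    the run-rich BWT output becomes a skewed, entropy-coder-friendly stream."""
    symbols = list(alphabet)
    out = []
    for ch in s:
        i = symbols.index(ch)
        out.append(i)
        symbols.insert(0, symbols.pop(i))
    return out

transformed = bwt("banana")
codes = move_to_front(transformed, sorted(set(transformed)))
```

A real coder would then feed `codes` to a Huffman, arithmetic or Rice-Golomb back end; a decoder needs the same agreed alphabet to invert the move-to-front step.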
28

Pagliarani, Stefano. « Portfolio optimization and option pricing under defaultable Lévy driven models ». Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3423519.

Texte intégral
Résumé :
In this thesis we study some portfolio optimization and option pricing problems in market models where the dynamics of one or more risky assets are driven by Lévy processes; the thesis is divided into four independent parts. In the first part we study the portfolio optimization problem, for the logarithmic terminal utility and the logarithmic consumption utility, in a multi-defaultable Lévy-driven model. In the second part we introduce a novel technique to price European defaultable claims when the pre-default dynamics of the underlying asset follow an exponential Lévy process. In the third part we develop a novel methodology to obtain analytical expansions for the prices of European derivatives, under stochastic and/or local volatility models driven by Lévy processes, by analytically expanding the integro-differential operator associated with the pricing problem. In the fourth part we present an extension of the latter technique which allows analytical expansions to be obtained in option pricing when dealing with path-dependent Asian-style derivatives.
In this thesis we study some portfolio optimization and option pricing problems in market models where the dynamics of one or more risky assets are driven by Lévy processes. The thesis is divided into four independent parts. In the first part we study the problem of optimizing a portfolio, understood as the maximization of a logarithmic utility of terminal wealth and a logarithmic utility of consumption, in a model driven by Lévy processes and in the presence of simultaneous defaults. In the second part we introduce a new technique for pricing European defaultable options, whose underlying assets follow dynamics that, before default, are represented by exponential Lévy processes. In the third part we develop a new method to obtain analytical expansions for the prices of European derivatives, under stochastic and local volatility models driven by Lévy processes, by analytically expanding the integro-differential operator associated with the pricing problem. In the fourth and last part we present an extension of the previous technique that makes it possible to obtain analytical expansions for the prices of Asian options, i.e. particular types of options whose payoff depends on the entire trajectory of the underlying asset.
29

Kasraoui, Anisse. « Études combinatoires sur les permutations et partitions d'ensemble ». Phd thesis, Université Claude Bernard - Lyon I, 2009. http://tel.archives-ouvertes.fr/tel-00393631.

Texte intégral
Résumé :
This thesis gathers several works in enumerative combinatorics on permutations and set partitions. It has four parts. In the first part, we answer Steingrimsson's conjectures on ordered set partitions. More precisely, we show that Steingrimsson's statistics on ordered set partitions have the Euler-Mahonian distribution. In the second part, we introduce and study a new class of statistics on words: the "maj-inv" statistics. These are graphical interpolations of the celebrated "major index" and "number of inversions" statistics. In the third part, we show that the joint distribution of the "number of crossings" and "number of nestings" statistics on set partitions is symmetric. We also extend this result to the much broader setting of 01-fillings of "moon polyominoes". The fourth and last part is devoted to the combinatorial study of the Al-Salam-Chihara q-Laguerre polynomials. We give a combinatorial interpretation of the moment sequence and of the linearisation coefficients of these polynomials.
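The two classical statistics that the "maj-inv" family interpolates are easy to state concretely. A small sketch (illustrative only; the function names are assumptions) computes both and checks MacMahon's classical result that they are equidistributed over the symmetric group:

```python
from itertools import permutations

def inv(w):
    """Number of inversions: pairs i < j with w[i] > w[j]."""
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

def maj(w):
    """Major index: sum of the (1-based) descent positions i with w[i] > w[i+1]."""
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

# MacMahon: inv and maj have the same distribution on all permutations
dist_inv = sorted(inv(p) for p in permutations(range(4)))
dist_maj = sorted(maj(p) for p in permutations(range(4)))
```

Statistics with this common distribution are called Mahonian; the thesis's "maj-inv" statistics sit between these two extremes.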
30

Herbei, Radu. « Quasi-3D statistical inversion of oceanographic tracer data ». 2006. http://etd.lib.fsu.edu/theses/available/07102006-131014.

Texte intégral
Résumé :
Thesis (Ph. D.)--Florida State University, 2006.
Advisors: Kevin Speer, Martin Wegkamp, Florida State University, College of Arts and Sciences, Dept. of Statistics. Title and description from dissertation home page (viewed Sept. 20, 2006). Document formatted into pages; contains x, 48 pages. Includes bibliographical references.
31

Conway, Dennis. « Advances in Magnetotelluric Modelling : Time-Lapse Inversion, Bayesian Inversion and Machine Learning ». Thesis, 2018. http://hdl.handle.net/2440/120299.

Texte intégral
Résumé :
This thesis presents advances in magnetotelluric (MT) modelling, with three main aims. The first is to implement an inversion that models time-lapse MT data along a temporal dimension. The algorithm considers the entire dataset at once, penalising model roughness in both the spatial and temporal dimensions; it is tested on synthetic data as well as a case study from a coal-seam gas dewatering survey. The second aim is to explore the problem of non-uniqueness in MT data inversion by implementing a 1D Bayesian inversion using an efficient sampler. The implemented model includes a novel way of regularising MT inversion by allowing the strength of smoothing to vary between different models. The Bayesian inversion is tested on synthetic and case-study datasets, with results matching known data. The third aim is to implement a proxy for the 3D MT forward function based on artificial neural networks. This allows rapid evaluation of the forward function and the use of evolutionary algorithms to invert for resistivity structures. The evolutionary search algorithm is tested on synthetic datasets and a case-study dataset from the Curnamona Province, South Australia. Together, these three novel algorithms and software implementations contribute to the toolkit of MT modelling.
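A Bayesian inversion of the kind described returns a posterior ensemble of models rather than a single best fit. The core sampling loop can be sketched on a toy scalar problem — a generic random-walk Metropolis illustration, not the thesis's sampler or its MT forward model; the toy likelihood and all settings are assumptions:

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
    with probability min(1, post(x') / post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp  # accept the proposal
        chain.append(x)
    return chain

# toy inverse problem: prior N(0,1), one observation y = 2 with unit noise,
# so the posterior is analytically N(1, 1/2)
y = 2.0
log_post = lambda t: -0.5 * t ** 2 - 0.5 * (y - t) ** 2
chain = metropolis(log_post, x0=0.0, step=1.0, n=50000)
mean = sum(chain) / len(chain)
```

The ensemble `chain` plays the role of the posterior samples from which parameter estimates and uncertainties are read off; a real MT inversion replaces the toy `log_post` with a misfit built from the forward function.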
Thesis (Ph.D.) -- University of Adelaide, School of Physical Sciences, 2018
32

« A multiscale, statistically-based inversion scheme for linearized inverse scattering problems ». Massachusetts Institute of Technology, Laboratory for Information and Decision Systems], 1994. http://hdl.handle.net/1721.1/3387.

Texte intégral
Résumé :
Eric L. Miller, Alan S. Willsky.
Includes bibliographical references (p. 34-36).
Supported by the Office of Naval Research. N00014-91-J-1004 Supported by the Air Force Office of Scientific Research. AFOSR-92-J-0002 Supported by the Advanced Research Project Agency under an Air Force grant. F49620-93-1-0604 Supported in part by a US Air Force Laboratory Graduate Fellowship.
33

Herrmann, Felix J. « Phase transitions in explorations seismology : statistical mechanics meets information theory ». 2007. http://hdl.handle.net/2429/606.

Texte intégral
Résumé :
In this paper, two different applications of phase transitions to exploration seismology are discussed. The first application concerns a phase diagram ruling the recovery conditions for seismic data volumes from incomplete and noisy data, while the second phase transition describes the behavior of bi-compositional mixtures as a function of the volume fraction. In both cases, the phase transitions are the result of randomness in large systems of equations in combination with nonlinearity. The seismic recovery problem from incomplete data involves the inversion of a rectangular matrix. Recent results from the field of "compressive sensing" provide the conditions for a successful recovery of functions that are sparse in some basis (wavelet) or frame (curvelet) representation, by means of a sparsity ($\ell_1$-norm) promoting nonlinear program. The conditions for a successful recovery depend on a certain randomness of the matrix and on two parameters that express the matrix's aspect ratio and the ratio of the number of nonzero entries in the coefficient vector for the sparse signal representation to the number of measurements. It appears that the ensemble-averaged success rate for the recovery of the sparse transformed data vector by a nonlinear sparsity-promoting program can be described by a phase transition, demarcating the regions of the two ratios for which recovery of the sparse entries is likely to succeed or likely to fail. Consistent with other phase-transition phenomena, the larger the system the sharper the transition. The randomness in this example is related to the construction of the matrix, which for the recovery of spike trains corresponds to the randomly restricted Fourier matrix. It is shown that these ideas can be extended to curvelet recovery by sparsity-promoting inversion (CRSI). The second application of phase transitions in exploration seismology concerns the upscaling problem.
To counter the intrinsic smoothing of singularities by conventional equivalent-medium upscaling theory, a percolation-based nonlinear switch model is proposed. In this model, bi-compositional mixture models for rocks undergo a sudden change in their macroscopic transport properties as soon as the volume fraction of the stronger material reaches a critical point. At this critical point, the stronger material forms a connected cluster, which leads to the creation of a cusp-like singularity in the elastic moduli, which in turn gives rise to specular reflections. In this model, the reflectivity is no longer explicitly due to singularities in the rock composition; instead, singularities are created whenever the volume fraction exceeds the critical point. We show that this concept can be used for a singularity-preserved lithological upscaling.
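The sparse-recovery problem behind the first phase transition — minimize ½‖Ax−b‖² + λ‖x‖₁ — can be sketched with the classical iterative soft-thresholding algorithm (ISTA). This is a generic illustration of ℓ₁-promoting recovery, not the CRSI code; the Gaussian measurement matrix (standing in for a restricted Fourier matrix), the sizes, λ and the iteration count are arbitrary assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||.||_1: shrink toward zero, clip at zero
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=3000):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

# underdetermined random system (50 measurements, 100 unknowns)
# with a 3-sparse ground truth: well inside the recovery phase
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```

Sweeping the two ratios (measurements/unknowns and nonzeros/measurements) and recording the empirical success rate of such runs is what traces out the phase diagram the abstract describes.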
34

Martin, James Robert Ph D. « A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion ». Thesis, 2015. http://hdl.handle.net/2152/31374.

Texte intégral
Résumé :
Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, this task is computationally intractable using contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the “best-fit” parameters that match given observation data, as measured in a particular metric. In the statistical inverse problem, we go one step further to return not only a point estimate of the best medium properties, but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data, and provides a complete description of parameter uncertainty. In this dissertation, a computational framework for such problems is described that wraps around the existing forward solvers, as long as they are appropriately equipped, for a given physical problem. Then a collection of tools, insights and numerical methods may be applied to solve the problem, and interrogate the resulting posterior distribution, which describes our final state of knowledge. 
We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.
35

Myers, Timothy F. « Proposed implementation of a near-far resistant multiuser detector without matrix inversion using Delta-Sigma modulation ». Thesis, 1992. http://hdl.handle.net/1957/37132.

Texte intégral
Résumé :
A new algorithm is proposed which provides a sub-optimum near-far resistant pattern for correlation with a known signal in a spread-spectrum multiple-access environment with additive white Gaussian noise (AWGN). Only the patterns and respective delays of the K-1 interfering users are required. The technique does not require the inversion of a cross-correlation matrix, and it can be easily extended to as many users as desired using a simple recursion equation. The computational complexity is O(K²) for each user to be decoded. It is shown that this method provides the same results as the "one-shot" method proposed by Verdu and Lupas. Also shown is a new array architecture for implementing this solution using delta-sigma modulation, together with a correlator for non-binary patterns that takes advantage of the digitized delta-sigma signals. Simulation results are presented which show the algorithm and correlator to be implementable in VLSI technology. This approach allows processing of the received signal in real time with a delay of O(K) bit periods per user. A modification of the algorithm is examined which allows further reduction of complexity at the expense of reduced performance.
Graduation date: 1992
36

Sagar, Stephen. « Inversion of remote sensing data in a shallow water environment using a trans-dimensional probabilistic framework ». Phd thesis, 2015. http://hdl.handle.net/1885/150201.

Texte intégral
Résumé :
Image data from remote sensing platforms offer an opportunity to observe and monitor the physical environment at a scale and precision unavailable to previous generations. The ability to estimate environmental parameters in a range of terrestrial and aquatic scenarios, often in remote and inaccessible areas, is a key benefit of using remote sensing data. Estimating physical parameters from remote sensing data takes the form of an inverse problem, predominantly tackled using single-solution optimisation approaches applied on a pixel-by-pixel basis. These inversion methods are poorly suited to the non-uniqueness that characterises many of the physical models in remote sensing, and often require some form of subjective regularisation to produce sensible estimates and parameter combinations. In this thesis the inversion of remote sensing data is cast in a probabilistic framework, with the first application of a trans-dimensional sampling algorithm to this form of problem. Probabilistic sampling techniques offer considerable benefits in terms of encompassing uncertainties in both the model and the data, and provide an ensemble of solutions from which parameter estimates and uncertainties can be inferred. However, probabilistic sampling has not been widely applied to remote sensing image data, primarily due to the high dimension of the data and the inverse problem. Using a physical model for a shallow-water environment, we demonstrate the application of a spatially partitioned reversible-jump Markov chain Monte Carlo (rj-McMC) algorithm, previously developed for geophysical applications where the dimension of the inverse problem is treated as unknown. To deal effectively with the increased dimension of the remote sensing problem, a new version of the algorithm is developed in this thesis, utilising image segmentation techniques to guide the dimensional changes in the rj-McMC sampling process.
Synthetic data experiments show that the segment-guided component of the algorithm is essential to sample the high dimensions of a complex spatial environment such as a coral reef. As the complexity of the data and forward model increases, further innovations to the algorithm are introduced. These include enabling the estimation of data noise as part of the inversion process, and using the segmentation to develop informed starting points for the initial parameters in the model. The algorithm developed in this thesis is then applied to a hyperspectral remote sensing problem in the coral waters of Lee Stocking Island, Bahamas. The algorithm is shown to produce an estimated depth model of greater accuracy than a range of optimisation inversion methods applied at this study site. The self-regularisation of the partition modelling approach proves effective in minimising pixel-to-pixel variation in the depth solution, whilst maintaining the fine-scale spatial discontinuities of the coral reef environment. Importantly, inferring a depth model from an ensemble of solutions, rather than a single solution, enables uncertainty to be attributed to the model parameters. This is a crucial step in integrating bathymetry information estimated from remote sensing data with other traditional surveying methods.
37

Chevalier, Clément. « Fast uncertainty reduction strategies relying on Gaussian process models ». PhD thesis, 2013. http://tel.archives-ouvertes.fr/tel-00879082.

Full text
Abstract:
This thesis deals with sequential and batch-sequential evaluation strategies for real-valued functions under a limited evaluation budget, using Gaussian process models. Optimal stepwise uncertainty reduction (SUR) strategies are studied for two different problems, motivated by applications in nuclear safety. First, we address the problem of identifying the excursion set above a threshold T of a real-valued function f. Second, we study the problem of identifying the set of "robust, controlled" configurations, i.e., the set of controlled inputs where the function remains below T whatever the values of the uncontrolled inputs. New SUR strategies are presented. We also provide efficient procedures and formulas allowing these strategies to be used in real-world applications. The use of fast formulas to recompute the posterior mean or covariance function of a Gaussian process (the "kriging update formulas") does not only yield substantial computational savings: these formulas are also one of the key ingredients for obtaining closed-form expressions that make computationally expensive evaluation strategies usable in practice. A contribution to batch-sequential optimization using the multi-points Expected Improvement is also presented.
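The kriging update formulas mentioned in this abstract can be demonstrated on a small example. The sketch below is illustrative, not taken from the thesis: it uses a squared-exponential kernel and invented design points, adds one observation to a Gaussian-process model, updates the posterior mean from the old posterior mean and covariance alone, and checks the result against a full recomputation with the augmented design.

```python
import numpy as np

def k(a, b, ell=0.3):
    """Squared-exponential covariance between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

X = np.array([0.05, 0.2, 0.4, 0.7, 0.9, 1.0])   # existing design points
y = np.sin(4 * X)                                # observations of a toy function
P = np.linspace(0, 1, 50)                        # prediction grid
x_new = np.array([0.55])                         # new evaluation point
y_new = np.sin(4 * x_new)

jit = 1e-10                                      # tiny nugget for stability
Kinv = np.linalg.inv(k(X, X) + jit * np.eye(X.size))

def post_mean(p):
    return k(p, X) @ Kinv @ y

def post_cov(p, q):
    return k(p, q) - k(p, X) @ Kinv @ k(X, q)

# Kriging update: the new posterior mean follows from the old posterior
# mean and covariance alone, with no need to refactor the covariance matrix.
m_old = post_mean(P)
lam = post_cov(P, x_new)[:, 0] / (post_cov(x_new, x_new)[0, 0] + jit)
m_updated = m_old + lam * (y_new[0] - post_mean(x_new)[0])

# Reference: recompute the posterior from scratch with the augmented design.
X2, y2 = np.append(X, x_new), np.append(y, y_new)
K2inv = np.linalg.inv(k(X2, X2) + jit * np.eye(X2.size))
m_full = k(P, X2) @ K2inv @ y2

print(np.max(np.abs(m_updated - m_full)))        # agrees up to numerical error
```

Because Gaussian conditioning can be applied sequentially, the rank-one update and the full recomputation coincide; in a SUR loop this is what makes repeatedly evaluating candidate designs affordable.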
38

Kuponiyi, Ayodeji Paul. « Imaging major Canadian sedimentary basins and their adjacent structures using ambient seismic noise (and other applications of seismic noise) ». Thesis, 2021. http://hdl.handle.net/1828/12947.

Full text
Abstract:
Over a decade ago, it was discovered that the Earth's natural seismic wavefields, propagating as seismic noise, can be processed using correlation methods to produce surface waves similar to those generated by earthquakes. This discovery represents a paradigm shift in seismology and has led to several tomographic studies of Earth structures, at different scales and resolutions, in previously difficult-to-study areas around the world. This PhD dissertation presents research results on multi-scale and multi-purpose applications of ambient seismic noise wavefields under three topics: (1) imaging of sedimentary basins and sub-basin structures in eastern and western Canada using ambient seismic noise; (2) combining measurements from ambient seismic noise with earthquake datasets for imaging crustal and mantle structures; and (3) temporal variation in cultural seismic noise and noise correlation functions (NCFs) during the COVID-19 lockdown in Canada. The first topic involved imaging the sedimentary basins in eastern and western Canada using shear-wave velocities derived from ambient-noise group velocities. The results show that the basins are characterized by varying depths, with maximum depths along the studied cross-sections exceeding 10 km in both eastern and western Canada. Characteristics of accreted terranes in eastern and western Canada are also revealed in the results. A seismically distinct basement is imaged in eastern Canada and is interpreted to be a vestige of the western African crust trapped beneath eastern Canada at the opening of the Atlantic Ocean. In western Canada, the 3D variation of the Moho and sedimentary basin depths is imaged. The thickest sediments in western Canada are found beneath the Queen Charlotte, Williston and Alberta Deep basins, while the Moho is deepest beneath the Williston basin and parts of the Alberta basin and northern British Columbia.
For the second topic, I worked on improving the seismological methodology for constructing broadband (2 to 220 s period) dispersion curves by combining dispersion measurements derived from ambient seismic noise with those from earthquakes. The broadband dispersion curves allow imaging of Earth structures spanning the shallow crust to the upper mantle. For the third topic, I used ambient seismic data from the early stages of the COVID-19 pandemic to study the temporal variation of seismic power spectra and the potential impact of the COVID-19 lockdown on ambient NCFs in four cities in eastern and western Canada. The results show mean seismic power drops of 24% and 17% during the lockdown in eastern Canada, near Montreal and Ottawa respectively, and reductions of 27% and 17% in western Canada, near Victoria and Sidney respectively. NCF signal quality within the secondary-microseism band reached a maximum before the lockdown, a minimum during the lockdown, and intermediate levels during the gradual reopening phase for the western Canada station pair.
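The core idea behind this dissertation, recovering inter-station travel times by cross-correlating long ambient-noise records, can be sketched with synthetic data. Everything below (sampling rate, delay, noise levels, station names) is invented for illustration: two "stations" record a common random wavefield, the second with a propagation delay, and the peak of their cross-correlation recovers that delay.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100.0                       # sampling rate (Hz), chosen for the demo
n = 200_000                      # ~33 minutes of synthetic noise
delay = 25                       # propagation delay in samples from A to B

# A common noise wavefield recorded at two stations; station B receives it
# `delay` samples later, and each site adds independent local noise.
common = rng.normal(size=n + delay)
sta_a = common[delay:] + 0.5 * rng.normal(size=n)
sta_b = common[:n] + 0.5 * rng.normal(size=n)    # delayed copy of A's signal

# Cross-correlate via FFT; with this ordering, a peak at positive lag
# corresponds to energy travelling from station A to station B.
spec = np.fft.rfft(sta_b) * np.conj(np.fft.rfft(sta_a))
ccf = np.fft.irfft(spec)

# Keep a window of +/- max_lag samples around zero lag.
max_lag = 100
lags = np.concatenate([np.arange(max_lag + 1), np.arange(-max_lag, 0)])
window = np.concatenate([ccf[:max_lag + 1], ccf[-max_lag:]])
best = lags[np.argmax(window)]

print(best / fs)                 # recovered inter-station delay in seconds
```

With records this long, the correlated wavefield dominates the incoherent local noise, which is why stacking long ambient-noise correlations can stand in for an earthquake-generated surface wave between the two stations.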
