Dissertations / Theses on the topic 'Inversion uncertainty'


Consult the top 20 dissertations / theses for your research on the topic 'Inversion uncertainty.'


1

White, Jeremy. "Computer Model Inversion and Uncertainty Quantification in the Geosciences." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5329.

Abstract:
The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum of the objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data to reduce parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution is used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.
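
As a rough illustration of the linear predictive uncertainty framework described in the first chapter, the sketch below propagates a Gaussian prior through a linearized model and compares prior and posterior predictive variance. The Jacobian, covariances and prediction sensitivity are synthetic placeholders, and the idealized linear-Bayes case shown here omits the model-simplification error the dissertation analyses.

# Sketch of linear (first-order) predictive uncertainty analysis; all
# matrices and values are illustrative, not taken from the dissertation.
import numpy as np

rng = np.random.default_rng(0)
n_par, n_obs = 50, 20
G = rng.normal(size=(n_obs, n_par))   # Jacobian of the (simplified) model
C_prior = np.eye(n_par)               # prior parameter covariance
C_noise = 0.1 * np.eye(n_obs)         # observation error covariance

# Posterior parameter covariance (Schur-complement form of linear Bayes)
S = G @ C_prior @ G.T + C_noise
C_post = C_prior - C_prior @ G.T @ np.linalg.solve(S, G @ C_prior)

# Sensitivity of a prediction (e.g. a particle travel time) to the parameters
y = rng.normal(size=n_par)
var_prior = y @ C_prior @ y           # predictive variance before inversion
var_post = y @ C_post @ y             # ... after conditioning on the data
print(f"predictive variance: prior {var_prior:.2f} -> posterior {var_post:.2f}")
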
2

Jokinen, J. (Jarkko). "Uncertainty analysis and inversion of geothermal conductive models using random simulation methods." Doctoral thesis, University of Oulu, 2000. http://urn.fi/urn:isbn:9514255909.

Abstract:
Knowledge of the thermal conditions in the lithosphere is based on theoretical models of heat transfer constrained by geological and geophysical data. The present dissertation focuses on the uncertainties of calculated temperature and heat flow density results and on how they depend on the uncertainties of thermal properties of rocks, as well as on the relevant boundary conditions. Due to the high number of variables involved in typical models, the random simulation technique was chosen as the tool applied in the analysis. Further, the random simulation technique was applied in inverse Monte Carlo solutions of geothermal models. In addition to modelling technique development, new measurements of the thermal conductivity and diffusivity of middle and lower crustal rocks at elevated pressure and temperature were carried out. In the uncertainty analysis it was found that a temperature uncertainty of 50 K at the Moho level, which is at a depth of 50 km in the layered model, is produced by an uncertainty of only 0.5 W m^-1 K^-1 in thermal conductivity values or 0.2 orders of magnitude uncertainty in heat production rate (mW m^-3). Similar uncertainties are obtained in Moho temperature if the lower boundary condition varies by ± 115 K in temperature (nominal value 1373 K) or ± 1.7 mW m^-2 in mantle heat-flow density (nominal value 13.2 mW m^-2). Temperature and pressure dependencies of thermal conductivity are minor in comparison to the previous effects. The inversion results indicated that the Monte Carlo technique is a powerful tool in geothermal modelling. When only surface heat-flow density data are used as a fitting object, temperatures at a depth of 200 km can be inverted with an uncertainty of 120-170 K. When petrological temperature-depth (pressure) data on kimberlite-hosted mantle xenoliths were also used as a fitting object, the uncertainty was reduced to 60-130 K. The inversion does not remove the ambiguity of the models completely, but it significantly reduces the uncertainty of the temperature results.
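
The random-simulation idea is easy to caricature in a few lines of Python. The single-layer steady-state geotherm below and the nominal conductivity and heat-production values are illustrative assumptions (only the mantle heat-flow numbers echo the abstract); the dissertation's layered models are far more detailed.

# Monte Carlo sketch: thermal-property uncertainty -> Moho-temperature spread.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = 50e3                              # Moho depth [m]
T0 = 273.0                            # surface temperature [K]
k = rng.normal(2.5, 0.5, n)           # thermal conductivity [W m^-1 K^-1]
A = rng.normal(1.0e-6, 0.2e-6, n)     # heat production [W m^-3]
q_m = rng.normal(13.2e-3, 1.7e-3, n)  # mantle heat-flow density [W m^-2]

# Steady-state 1-D conduction with uniform heat production:
# T(z) = T0 + q_m z / k + A z^2 / (2 k)
T_moho = T0 + q_m * z / k + A * z**2 / (2.0 * k)
print(f"Moho temperature: {T_moho.mean():.0f} K +/- {T_moho.std():.0f} K")
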
3

Thurin, Julien. "Uncertainties estimation in Full Waveform Inversion using Ensemble methods." Thesis, Université Grenoble Alpes, 2020. https://tel.archives-ouvertes.fr/tel-02570602.

Abstract:
Full Waveform Inversion (FWI) is an ill-posed non-linear inverse problem aiming at recovering detailed pictures of subsurface physical properties, which are crucial to explore and understand Earth structures. Classically formulated as a least-squares optimization scheme, FWI yields a single subsurface model amongst an infinite set of possible solutions. With the general lack of systematic and scalable uncertainty estimation, this formulation makes the interpretation of FWI's outcomes complex. In this thesis, we propose an unconventional, scalable way of tackling the lack of uncertainty estimation in FWI, based on ensemble data assimilation methods. We develop a scheme combining classical FWI and the Ensemble Transform Kalman Filter, which we call ETKF-FWI and which is successfully applied to two 2-D test cases. This scheme takes advantage of the theoretical common ground between least-squares optimization problems and Bayesian filtering. We use it to recast FWI in a local Bayesian inference framework, thanks to the ensemble representation. The ETKF-FWI provides high-resolution subsurface tomographic models and yields a low-rank approximation of the posterior covariance, holding the uncertainty and resolution information of the proposed solution. We show how the ETKF-FWI can be applied to qualitatively evaluate the uncertainty and resolution of the solution. Instead of providing a single solution, the filter yields an ensemble of models, from which statistical information can be inferred. Uncertainty is evaluated from the ensemble's variance, which relates to the diversity of solutions amongst the ensemble members for each parameter. We show that lines of the correlation matrix, thanks to their dimensionless nature, are well suited to qualitatively evaluating parameter resolution. While the methodology is computationally intensive, it has the benefit of being fully scalable. Its applicability is demonstrated on a synthetic benchmark. This preliminary test allows us to assess the sensitivity of the ensemble representation to the undersampling bias commonly encountered in ensemble data assimilation. While undersampling does not affect the image reconstruction, it results in variance underestimation, which complicates quantitative uncertainty assessment. Ensemble inflation has been used to mitigate this bias but does not seem to be a practical solution. A field data experiment is also discussed in this thesis. It makes it possible to test the sensitivity of the ETKF-FWI to complex noise structure and realistic physics. As it stands, the complexity of the problem reduces flexibility in the ensemble generation, and hence in the uncertainty estimate. Despite these limitations, results are consistent with the synthetic benchmark, and we are able to provide a qualitative uncertainty assessment. The field data case also allows us to evaluate the possibility of using the ETKF-FWI for multiparameter inversion, which is still regarded as a challenging topic in FWI. The ETKF-FWI multiparameter inversion yields improved models compared with conventional ones. More importantly, it makes it possible to assess the uncertainty associated with parameter cross-talk.
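
For readers unfamiliar with the filter at the core of ETKF-FWI, a bare-bones ETKF analysis step (after Bishop et al., 2001) is sketched below. The sizes are arbitrary and a random linear operator stands in for the wave-equation forward modelling, so this is a structural sketch only, not the thesis's implementation.

# Minimal ETKF analysis step; in ETKF-FWI the "state" is the subsurface
# model and hX comes from a wave-equation solver, not a random matrix.
import numpy as np

def etkf_update(X, y, hX, R_inv):
    """X: (n, m) ensemble; y: (d,) observations; hX: (d, m) predicted data
    per member; R_inv: (d, d) inverse observation covariance."""
    n, m = X.shape
    x_mean = X.mean(axis=1)
    A = X - x_mean[:, None]                       # model perturbations
    Y = hX - hX.mean(axis=1)[:, None]             # data-space perturbations
    # Eigendecomposition in the m-dimensional ensemble space
    evals, U = np.linalg.eigh((m - 1) * np.eye(m) + Y.T @ R_inv @ Y)
    P_tilde = U @ np.diag(1.0 / evals) @ U.T      # posterior cov, ensemble space
    w = P_tilde @ Y.T @ R_inv @ (y - hX.mean(axis=1))    # mean-update weights
    W = U @ np.diag(np.sqrt((m - 1) / evals)) @ U.T      # spread transform
    return x_mean[:, None] + A @ (w[:, None] + W)        # analysis ensemble

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 16))                    # 16 members, 100 parameters
H = rng.normal(size=(30, 100))                    # stand-in linear forward model
y = H @ rng.normal(size=100)
Xa = etkf_update(X, y, H @ X, np.eye(30))
print(Xa.shape, Xa.var(axis=1).mean() < X.var(axis=1).mean())
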
4

El Amri, Mohamed. "Analyse d'incertitudes et de robustesse pour les modèles à entrées et sorties fonctionnelles." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM015.

Abstract:
This thesis deals with the inversion problem under uncertainty of expensive-to-evaluate functions, in the context of tuning the control unit of a vehicle depollution system. The effect of these uncertainties is taken into account through the expectation of the quantity of interest. The difficulty lies in the fact that the uncertainty is partly due to a functional variable only known through a given sample. We propose two approaches to solve the inversion problem, both based on Gaussian process modelling of the expensive-to-evaluate functions and a dimension reduction of the functional variable by the Karhunen-Loève expansion. The first approach consists in applying a Stepwise Uncertainty Reduction (SUR) method to the expectation of the quantity of interest. At each evaluation point in the control space, the expectation is estimated by a greedy functional quantification method that provides a discrete representation of the functional variable and an efficient sequential estimate from the given sample. The second approach consists in applying the SUR method directly to the quantity of interest in the joint space of control and uncertain variables. A strategy for enriching the experimental design, dedicated to inversion under functional uncertainties and exploiting the properties of Gaussian processes, is proposed. These two approaches are compared on analytical toy functions and applied to an industrial case, the exhaust-gas post-treatment system of a vehicle. The objective is to identify the set of control parameters that meets the pollutant emission standards despite uncertainty in the driving cycle.
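
The Karhunen-Loève reduction that both approaches rely on amounts, for sampled curves, to an empirical PCA. A self-contained sketch with synthetic curves standing in for the sampled functional input:

# Empirical Karhunen-Loève (PCA) truncation of a sampled functional variable.
# The curves below are synthetic; in the application they are recorded data.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
curves = np.array([np.sin(2 * np.pi * (1 + rng.random()) * t)
                   + 0.1 * rng.normal(size=t.size)
                   for _ in range(500)])             # (n_samples, n_t)

mean = curves.mean(axis=0)
U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
var_explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var_explained, 0.99) + 1)    # modes for 99% variance

scores = (curves - mean) @ Vt[:k].T   # low-dimensional coordinates xi_1..xi_k
recon = mean + scores @ Vt[:k]        # truncated KL reconstruction
print(k, np.abs(recon - curves).max())
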
5

Kozlovskaya, E. (Elena). "Theory and application of joint interpretation of multimethod geophysical data." Doctoral thesis, University of Oulu, 2001. http://urn.fi/urn:isbn:9514259602.

Abstract:
This work is devoted to the theory of joint interpretation of multimethod geophysical data and its application to the solution of real geophysical inverse problems. The targets of such joint interpretation can be geological bodies with an established dependence between various physical properties that cause anomalies in several geophysical fields (geophysical multiresponse). Establishing the relationship connecting the various physical properties is therefore a necessary first step in any joint interpretation procedure. Bodies for which the established relationship between physical properties is violated (single-response bodies) can be targets of separate interpretations. The probabilistic (Bayesian) approach provides the necessary formalism for addressing the problem of the joint inversion of multimethod geophysical data, which can be non-linear and have a non-unique solution. Analysis of the lower limit of resolution of the non-linear joint inversion problem using the definition of ε-entropy demonstrates that joint inversion of multimethod geophysical data can reduce non-uniqueness in real geophysical inverse problems. The question can be formulated as a multiobjective optimisation problem (MOP), enabling the numerical methods of this theory to be employed for geophysical data inversion and for developing computer algorithms capable of solving highly non-linear problems. An example of such a problem is magnetotelluric impedance tensor inversion with the aim of obtaining a 3-D resistivity distribution. An additional area of application for multiobjective optimisation can be the combination of various types of uncertain information (probabilistic and non-probabilistic) in a common inversion scheme applicable to geophysical inverse problems. It is demonstrated how the relationship between seismic velocity and density can be used to construct an algorithm for the joint interpretation of gravity and seismic wide-angle reflection and refraction data. The relationship between the elastic and electrical properties of rocks, which is a necessary condition for the joint inversion of data obtained by seismic and electromagnetic methods, can be established for solid-liquid rock mixtures using theoretical modelling of the elastic and electrical properties of rocks with a fractal microstructure and from analyses of petrophysical data and borehole log data.
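
The multiobjective formulation can be made concrete with a small Pareto dominance filter; the two misfit columns below are random stand-ins for, say, gravity and seismic objectives, and the ε-entropy analysis itself is not reproduced.

# Pareto filter: keep models not dominated in every misfit simultaneously.
import numpy as np

def pareto_front(F):
    """F: (n_models, n_objectives) misfits; mask of non-dominated rows."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

rng = np.random.default_rng(4)
F = rng.random((200, 2))              # misfit_1 (gravity), misfit_2 (seismic)
front = F[pareto_front(F)]
print(f"{front.shape[0]} Pareto-optimal models out of {F.shape[0]}")
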
6

Sanchez Reyes, Hugo Samuel. "Inversion cinématique progressive linéaire de la source sismique et ses perspectives dans la quantification des incertitudes associées." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAU026/document.

Abstract:
Earthquake characterization is a fundamental research field in seismology, whose final goal is to provide accurate estimates of earthquake attributes. In this field, various questions arise, such as: when and where did an earthquake happen? How large was it? What was its evolution in space and time? More challenging questions can also be addressed, such as: why did it occur? What will the next one be in a given area? In order to make progress on the first list of questions, a physical description, or model, of the event is necessary. The investigation of such a model is the scientific topic I pursue during my PhD, in the framework of kinematic source models. Understanding the seismic source as a dislocation propagating across a given geometry of an active fault, kinematic source models are physical representations of the time and space history of such a rupture propagation. This representation is said to be a kinematic approach because the inferred rupture histories are obtained without taking into account the forces that might cause the origin of the dislocation. In this PhD dissertation, I present a new hierarchical-in-time kinematic source inversion method able to assimilate data traces through evolutive time windows. A linear time-domain formulation relates the slip-rate function and the seismograms, preserving the positivity of this function and the causality when spanning the model space: taking advantage of the time-space sparsity of the rupture model evolution is as essential as considering the causality between the rupture and each record, delayed by the known propagator operator, which differs for each station. This progressive approach, both on the data space and on the model space, requires only mild assumptions on prior slip-rate functions and preconditioning strategies for the local gradient estimates of the slip rate. These assumptions are based on simple expected physical rupture models. Successful applications of this method to a well-known benchmark (Source Inversion Validation Exercise 1) and to the recorded data of the 2016 Kumamoto mainshock (Mw=7.0) illustrate the advantages of this alternative approach to linear kinematic source inversion. The underlying target of this new formulation is the future uncertainty quantification of such model reconstructions. In order to achieve this goal, as well as to highlight key properties considered in this linear time-domain approach, I explore the Hamiltonian Monte Carlo (HMC) stochastic Bayesian framework, which appears to be one of the possible and very promising strategies that can be applied to this stabilized over-parametrized optimization of a linear forward problem to assess the uncertainties of kinematic source inversions. The HMC technique proves to be compatible with the linear time-domain strategy presented here. Thanks to an efficient estimation of the local gradient of the misfit function, this technique is able to rapidly explore the high-dimensional space of probable solutions, while the linearity between unknowns and observables is preserved. In this work, I investigate the performance of the HMC strategy on simple synthetic cases with almost perfect illumination, in order to provide a better understanding of the concepts and of the tuning required to achieve a correct exploration of the model space. The results from this preliminary investigation are promising and open a new way of tackling the kinematic source reconstruction problem and the assessment of the associated uncertainties.
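
A toy Hamiltonian Monte Carlo sampler for a generic linear forward problem, in the spirit of (but far simpler than) the kinematic source setting: the sizes, the Gaussian prior and the tuning are illustrative assumptions, and no positivity constraint is imposed here.

# HMC with leapfrog integration on the posterior of d = G m + noise.
import numpy as np

rng = np.random.default_rng(5)
n_par, n_obs = 30, 60
G = rng.normal(size=(n_obs, n_par))            # stand-in linear propagator
m_true = np.abs(rng.normal(size=n_par))        # positive "slip rates"
d = G @ m_true + 0.05 * rng.normal(size=n_obs)
sigma2, tau2 = 0.05**2, 1.0                    # noise and prior variances

def U(m):      # negative log posterior (quadratic here)
    return 0.5 * np.sum((G @ m - d)**2) / sigma2 + 0.5 * np.sum(m**2) / tau2

def grad_U(m):
    return G.T @ (G @ m - d) / sigma2 + m / tau2

def hmc_step(m, eps=5e-4, n_leap=30):
    p = rng.normal(size=m.size)                # momentum refresh
    m_new, p_new = m.copy(), p - 0.5 * eps * grad_U(m)
    for _ in range(n_leap):                    # leapfrog integration
        m_new += eps * p_new
        p_new -= eps * grad_U(m_new)
    p_new += 0.5 * eps * grad_U(m_new)         # undo the extra half step
    dH = U(m_new) - U(m) + 0.5 * (p_new @ p_new - p @ p)
    return m_new if np.log(rng.random()) < -dH else m   # Metropolis accept

samples, m = [], np.zeros(n_par)
for _ in range(2000):
    m = hmc_step(m)
    samples.append(m.copy())
post = np.array(samples[500:])                 # discard burn-in
print(np.abs(post.mean(axis=0) - m_true).mean())
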
7

Arubi, Isaac Marcus Tesi. "Multiphase flow measurement using gamma-based techniques." Thesis, Cranfield University, 2011. http://dspace.lib.cranfield.ac.uk/handle/1826/8347.

Abstract:
The oil and gas industry's need for high-performing and low-cost multiphase meters is ever more justified given the rapid depletion of conventional oil reserves. This has led oil companies to develop smaller/marginal fields and reservoirs in remote locations and deep offshore, creating strong demand for compact and more cost-effective solutions for on-line continuous multiphase flow measurement. The pattern recognition approach for clamp-on multiphase measurement employed in this research study provides one means of meeting this need. Cont/d.
8

Schmidt, Aurora C. "Scalable Sensor Network Field Reconstruction with Robust Basis Pursuit." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/240.

Abstract:
We study a scalable approach to information fusion for large sensor networks. The algorithm, field inversion by consensus and compressed sensing (FICCS), is a distributed method for detection, localization, and estimation of a propagating field generated by an unknown number of point sources. The approach combines results in the areas of distributed average consensus and compressed sensing to form low-dimensional linear projections of all sensor readings throughout the network, allowing each node to reconstruct a global estimate of the field. Compressed sensing is applied to continuous source localization by quantizing the potential locations of sources, transforming the model of sensor observations to a finite discretized linear model. We study the effects of structured modeling errors induced by spatial quantization and the robustness of ℓ1 penalty methods for field inversion. We develop a perturbations method to analyze the effects of spatial quantization error in compressed sensing and provide a model-robust version of noise-aware basis pursuit with an upper bound on the sparse reconstruction error. Numerical simulations illustrate system design considerations by measuring the performance of decentralized field reconstruction, detection performance of point phenomena, comparing trade-offs of quantization parameters, and studying various sparse estimators. The method is extended to time-varying systems using a recursive sparse estimator that incorporates priors into ℓ1 penalized least squares. This thesis presents the advantages of inter-sensor measurement mixing as a means of efficiently spreading information throughout a network, while identifying sparse estimation as an enabling technology for scalable distributed field reconstruction systems.
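
A compact sketch of the ℓ1 recovery step at the heart of such sparse field inversion, using plain ISTA iterations for the lasso problem; the thesis's noise-aware, model-robust basis pursuit variants add refinements not shown here.

# ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 with a synthetic sparse source.
import numpy as np

rng = np.random.default_rng(6)
n_obs, n_grid, k = 40, 120, 3
A = rng.normal(size=(n_obs, n_grid)) / np.sqrt(n_obs)   # sensor-to-grid model
x_true = np.zeros(n_grid)
x_true[rng.choice(n_grid, k, replace=False)] = rng.uniform(1, 2, k)
b = A @ x_true + 0.01 * rng.normal(size=n_obs)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                            # Lipschitz constant
x = np.zeros(n_grid)
for _ in range(500):                                     # ISTA iterations
    g = x - (A.T @ (A @ x - b)) / L                      # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft threshold
print(np.flatnonzero(x_true), np.flatnonzero(np.abs(x) > 0.1))
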
9

Lian, Jinghui. "Understanding how emissions and atmospheric transport control the variations of atmospheric CO2 in the Paris area : insights from laser-based measurements at city scale." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASV010.

Abstract:
Cities play an important role in tackling climate change as they account for more than 70% of global anthropogenic CO2 emissions. In recent years, several efforts have attempted to quantify city-scale CO2 emissions and establish a high spatially and temporally resolved inventory for supporting urban emission mitigation strategies. The so-called "top-down" inverse estimation of CO2 emissions constrained by independent atmospheric observations could serve to evaluate the consistency of traditional "bottom-up" inventories. A novel CO2 monitoring technique, known as the Greenhouse gas Laser Imaging Tomography Experiment (GreenLITE™) trace gas measurement system, was deployed in central Paris for a 1-year monitoring of near-surface atmospheric CO2 concentrations along 30 horizontal chords. This system has a much wider spatial coverage than traditional in situ sampling and was expected to be more consistent with the spatial representativeness of the kilometer-scale resolution atmospheric transport models used for city-scale atmospheric inversion. The primary objective of this thesis is to assess the potential contribution of this GreenLITE™ system, in addition to two urban and four peri-urban in situ CO2 measurement stations, to a better understanding of the spatiotemporal variations of CO2 concentrations within Paris and its vicinity. For this objective, I have developed a full modeling framework around the high-resolution Weather Research and Forecasting model (WRF) and its coupling with Chemistry (WRF-Chem), using CO2 emission inventories, estimates of the vegetation fluxes and boundary conditions provided by a large-scale simulation. Chapter 1 is a broad introduction to the subject, while chapters 2-4 are built around three separate, publishable papers. Chapter 2 evaluates whether the WRF model running at a 3-km horizontal resolution, with its various configurations, can reproduce the meteorological fields over the IdF region better than the 16-km resolution ECMWF global operational forecasts. The comparisons between WRF and ECMWF forecasts with respect to observations are carried out with a focus on three atmospheric variables (air temperature, wind and PBL height). The results of the sensitivity tests of different physics schemes and nudging options obtained in this chapter are used in subsequent research for the selection of an appropriate WRF-Chem model setup in support of atmospheric CO2 transport modeling. Chapter 3 aims at understanding the spatiotemporal variations of CO2 concentrations within Paris and its vicinity during the 1-year GreenLITE™ operating period from September 2015 to December 2016. The analyses are based on CO2 data provided by GreenLITE™ together with six in situ stations and the 1-km resolution WRF-Chem model coupled with two urban canopy schemes (Urban Canopy Model - UCM; Building Effect Parameterization - BEP). The GreenLITE™ data provide clear information that favors BEP over UCM in the description of vertical mixing and CO2 concentrations during the winter. However, there are indications of measurement noise in summer that limit the usefulness of the data. Furthermore, the model-observation mismatches clearly stress the difficulty of CO2 modeling within urban areas due to the large uncertainties both in the atmospheric transport and in the emission inventory. Chapter 4 investigates in detail the critical sources of errors that lead to the model-observation mismatches in atmospheric CO2 modeling over Paris.
These sources of misfit include uncertainties in the assumed distribution of anthropogenic emission, errors in the atmospheric transport, in biogenic CO2 fluxes and in CO2 boundary conditions at the edges of the atmospheric transport model domain. The lessons and insights from this chapter provide requirements and recommendations for the assimilation of CO2 measurements into the atmospheric inversion, when aiming at the quantification of CO2 emissions for the Paris region
10

Shin, Yoonghyun. "Neural Network Based Adaptive Control for Nonlinear Dynamic Regimes." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7577.

Abstract:
Adaptive control designs using neural networks (NNs) based on dynamic inversion are investigated for aerospace vehicles operated in highly nonlinear dynamic regimes. NNs play a key role as the principal element of adaptation, approximately cancelling the effect of inversion error, which in turn improves robustness to parametric uncertainty and unmodeled dynamics in nonlinear regimes. An adaptive control scheme previously named composite model reference adaptive control is further developed so that it can be applied to multi-input multi-output output-feedback dynamic inversion. It can have adaptive elements in the dynamic compensator (linear controller) part and/or in the conventional adaptive controller part, also utilizing state estimation information for NN adaptation. This methodology has more flexibility, and thus potentially greater promise, than conventional adaptive designs for adaptive flight control in highly nonlinear flight regimes. The stability of the control system is proved through Lyapunov theorems and validated with simulations. The control designs in this thesis also include the use of pseudo-control hedging techniques, which are introduced to prevent the NNs from attempting to adapt to various actuation nonlinearities such as actuator position and rate saturations. Control allocation is introduced for the case of redundant control effectors, including thrust vectoring nozzles. A thorough comparison study of conventional and NN-based adaptive designs for a system under a limit cycle (wing rock) is included in this research, and the NN-based adaptive control designs demonstrate their performance on two highly maneuverable aerial vehicles, the NASA F-15 ACTIVE and the FQM-117B unmanned aerial vehicle (UAV), operated under various nonlinearities and uncertainties.
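
A toy scalar version of NN-augmented dynamic inversion is sketched below. The plant, RBF basis, gains and the simplified gradient-type adaptation law are all illustrative inventions; the Lyapunov-derived update laws, output feedback and pseudo-control hedging of the thesis are not reproduced.

# Toy plant xdd = f(x, xd) + u with f unknown; the inversion assumes f = 0
# and an adaptive RBF element nu_ad learns to cancel the inversion error.
import numpy as np

f_true = lambda x, xd: -2.0 * np.sin(x) + 0.5 * xd * abs(xd)  # unmodeled dynamics
centers = np.linspace(-2.0, 2.0, 11)
phi = lambda x: np.exp(-(x - centers) ** 2)      # RBF features for the NN

dt, kp, kd, gamma = 1e-3, 4.0, 4.0, 5.0          # illustrative gains
x = xd = 0.0
W = np.zeros_like(centers)                       # NN output weights
for i in range(int(20.0 / dt)):
    t = i * dt
    x_ref, xd_ref = np.sin(0.5 * t), 0.5 * np.cos(0.5 * t)
    a_ref = -0.25 * np.sin(0.5 * t)              # reference acceleration
    e, ed = x_ref - x, xd_ref - xd               # tracking errors
    nu_ad = W @ phi(x)                           # adaptive element output
    u = a_ref + kp * e + kd * ed - nu_ad         # dynamic inversion control
    W -= gamma * (e + ed) * phi(x) * dt          # simplified adaptation law
    x, xd = x + xd * dt, xd + (f_true(x, xd) + u) * dt
print(f"tracking error after 20 s: {abs(np.sin(10.0) - x):.3f}")
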
11

Nosjean, Nicolas. "Management et intégration des risques et incertitudes pour le calcul de volumes de roches et de fluides au sein d’un réservoir, zoom sur quelques techniques clés d’exploration Integrated Post-stack Acoustic Inversion Case Study to Enhance Geological Model Description of Upper Ordovicien Statics : from imaging to interpretation pitfalls and an efficient way to overcome them Improving Upper Ordovician reservoir characterization - an Algerian case study Tracking Fracture Corridors in Tight Gas Reservoirs : An Algerian Case Study Integrated sedimentological case study of glacial Ordovician reservoirs in the Illizi Basin, Algeria A Case Study of a New Time-Depth Conversion Workflow Designed for Optimizing Recovery Proper Systemic Knowledge of Reservoir Volume Uncertainties in Depth Conversion Integration of Fault Location Uncertainty in Time to Depth Conversion Emergence of edge scenarios in uncertainty studies for reservoir trap analysis Enhancing geological model with the use of Spectral Decomposition - A case study of a prolific stratigraphic play in North Viking Graben, Norway Fracture corridor identification through 3D multifocusing to improve well deliverability, an Algerian tight reservoir case study Geological Probability Of Success Assessment for Amplitude-Driven Prospects, A Nile Delta Case Study." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS085.

Abstract:
In the last 20 years, I have been conducting various research projects focused on the management of risks and uncertainties in the petroleum exploration domain. The research projects detailed in this thesis deal with problems located throughout the whole Exploration and Production chain, from seismic acquisition and processing through to the optimal placement of exploration and development wells. The focus is on geophysical risks and uncertainties, where these problems are the most pronounced and, paradoxically, the least worked on in the industry. My research projects can be subdivided into three main axes, which follow the hydrocarbon exploration process, namely: seismic processing; seismic interpretation, through integration with various types of well information; and finally the analysis and extraction of the key uncertainties that form the basis for the optimal calculation of in-place and recoverable volumes, in addition to the associated risk analysis on a given target structure. The research projects detailed in this thesis have been applied successfully to operational North Africa and North Sea projects. After introducing notions of risk and uncertainty, we detail the exploration process and its key links with these issues. I then present four major research projects, with their theoretical aspects and an applied case study on an Algerian asset.
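
The volume-uncertainty step of such a workflow typically reduces to Monte Carlo volumetrics. A generic sketch with invented distributions (not the case-study values):

# Monte Carlo propagation of reservoir uncertainties into in-place volumes.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
grv = rng.lognormal(mean=np.log(4e8), sigma=0.3, size=n)  # gross rock volume, m3
ntg = rng.uniform(0.5, 0.8, n)                            # net-to-gross
phi = rng.normal(0.15, 0.02, n).clip(0.01, 0.3)           # porosity
sw = rng.normal(0.35, 0.05, n).clip(0.05, 0.9)            # water saturation
bo = rng.uniform(1.2, 1.4, n)                             # formation volume factor

stoiip = grv * ntg * phi * (1.0 - sw) / bo                # oil in place, m3
p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
print(f"STOIIP P90/P50/P10: {p90:.2e} / {p50:.2e} / {p10:.2e} m3")
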
12

Morales Barrera, Stephanie Lucero, and Saori Kiara Kanashiro Huillca. "Evaluacion de proyectos mediante opciones reales." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/653322.

Abstract:
This article presents the great importance of using the real options valuation method in investment projects with a high degree of uncertainty, and why it is a better option than the traditional methods currently known, which are difficult to adapt to situations with a high degree of uncertainty. The article gives an overview of real options based on a literature review: their emergence from financial options, the types of real options that exist, and the different methodologies required to compute their value. Finally, six studies carried out by different authors are presented, showing that this methodology is the most appropriate for evaluating projects, since it takes into account the flexibility required in projects in highly volatile markets such as construction, mining and biotechnology.
Trabajo de Suficiencia Profesional
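
As a minimal illustration of one standard real-options methodology the review covers, the sketch below prices a deferral (call-type) option on a project with a Cox-Ross-Rubinstein binomial lattice; all inputs are invented and the option is treated as European-style for brevity.

# Binomial-lattice valuation of a deferral option on a project.
import numpy as np

V0, I = 100.0, 110.0        # PV of project cash flows, investment cost
r, sigma, T, n = 0.05, 0.40, 3.0, 300
dt = T / n
u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
p = (np.exp(r * dt) - d) / (u - d)          # risk-neutral probability

V = V0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)  # terminal values
opt = np.maximum(V - I, 0.0)                # exercise payoff at maturity
for _ in range(n):                          # backward induction
    opt = np.exp(-r * dt) * (p * opt[:-1] + (1 - p) * opt[1:])
print(f"option value {opt[0]:.2f} vs static NPV {V0 - I:.2f}")

With these inputs the option value is positive even though the static NPV is negative, which is the flexibility effect the article emphasizes.
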
13

Schnaidt, Sebastian. "Improving uncertainty estimation in geophysical inversion modelling." Thesis, 2015. http://hdl.handle.net/2440/111402.

Abstract:
Numerical inversion modelling is an integral part of geophysical data interpretation. Growing computational resources are used to invert ever-growing data sets and higher-dimensional data. However, models without meaningful uncertainty estimates are difficult to interpret reliably, and limited attention has been paid to advancing model quality estimation techniques to keep up with the more sophisticated inversion schemes. The employment of meaningful uncertainty estimation methods is often hindered by their complicated implementation, and inadequate model quality estimators are frequently used. This project was aimed at the advancement of model uncertainty estimation, to enable more common use. Two different approaches were developed, attacking the problem from different directions. Firstly, a bootstrap resampling approach for the qualitative estimation of model uncertainties is presented. The algorithm is characterised by an easy implementation and by the fact that it can provide model quality estimation capabilities to existing inversion algorithms without requiring access to the inversion algorithm's source code. A given data set is repeatedly resampled to create multiple realisations of the data set. Each realisation is individually inverted, and the variations between the generated models are analysed and visualised to generate interpretable uncertainty maps. The capabilities of the approach are demonstrated using the example of synthetic and real 2-D magnetotellurics data. Secondly, the multi-objective joint optimisation algorithm MOJO is presented, which aims to remedy the common shortcomings of classical joint inversion approaches. Joint inversion modelling is a powerful tool to improve model results and reduce the effects of data noise and solution non-uniqueness. Nevertheless, the classic joint inversion approaches have a variety of shortcomings, such as a dependency on the choice of data weights, optimising only a single solution, resulting in inadequate uncertainty estimates, and the risk of model artefacts being introduced by the accidental joint inversion of incompatible data. MOJO is based on the concept of Pareto-optimality and treats each data set as a separate objective, avoiding data weighting. The algorithm generates solution ensembles, which are statistically analysed to provide model uncertainty estimates. The shape and evolution of the solution ensemble's distribution in objective space depend on the level of compatibility between the data sets. The solution distributions are compared against a theoretical solution distribution corresponding to perfectly compatible data to estimate the compatibility state of any given objective-pair, making it possible to distinguish between compatible and incompatible data, as well as to identify data sets that are neither mutually exclusive nor sensitive to common features. MOJO's effectiveness was demonstrated in extensive feasibility studies on synthetic as well as real data. The algorithm is adaptive and can be expanded to incorporate a variety of different data types. Additionally, ways were explored to make the communication of the modelling results and the model quality estimates as clear and concise as possible, to allow the user to make an informed decision and avoid misinterpretations.
Thesis (Ph.D.) (Research by Publication) -- University of Adelaide, School of Physical Sciences, 2015.
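
The bootstrap idea can be wrapped around any existing inverter in a few lines; in the sketch below a linear least-squares routine stands in for an inversion code whose source is unavailable.

# Bootstrap resampling of the data set around an unmodified "inverter".
import numpy as np

rng = np.random.default_rng(8)
n_obs, n_par = 80, 10
G = rng.normal(size=(n_obs, n_par))
d = G @ rng.normal(size=n_par) + 0.2 * rng.normal(size=n_obs)

def invert(G, d):                        # placeholder for any existing inverter
    return np.linalg.lstsq(G, d, rcond=None)[0]

models = []
for _ in range(500):                     # bootstrap realisations of the data
    idx = rng.integers(0, n_obs, n_obs)  # resample observations with replacement
    models.append(invert(G[idx], d[idx]))
models = np.array(models)
print("per-parameter uncertainty map:", np.round(models.std(axis=0), 3))
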
14

Benavente Bravo, Roberto Fabian. "Rapid Finite Fault Inversion for Megathrust Earthquakes." PhD thesis, 2016. http://hdl.handle.net/1885/108615.

Abstract:
The largest earthquakes take place at subduction zones, and their devastating impact on populated regions is often exacerbated by their ability to excite powerful tsunamis. Today, we understand that large subduction earthquakes, known as megathrust events, are caused by the sudden release of elastic strain energy stored at the plate boundaries where a localized, previously locked, section of the megathrust ruptures. The rupture process can propagate over hundreds of kilometres and slip on the fault can be tens of meters. Using ground motion data to image the spatio-temporal spread of slip over the fault surface is known as finite fault inversion (FFI). Over the past decade FFI has become almost routine, so that results produced by different groups are available within several days or even hours after a large event. However, these results typically require manual processing of the data, and are not accompanied by appraisals of uncertainty. My PhD research has focused on obtaining slip models for such events in near real time. I divided my analysis into three main projects that are discussed in this thesis. First, I evaluated the performance of a long-period seismic wave, the W-phase, which arrives between the P and S waves, in a classic FFI scheme for the Maule (2010, Mw = 8.8) and Tohoku (2011, Mw = 9.1) events. I found that, despite its long period, the W-phase can resolve first-order features of the rupture for both events. Since the W-phase is not very sensitive to 3D structure, the processing of data for the W-phase is generally simpler than it is for the body and surface waves that are commonly used for FFI. In addition, the W-phase is fast and can be obtained soon after the arrival of the P-wave. Second, I improved the classic inversion scheme to increase robustness and rigour for rapid inversions. The most remarkable aspects of this inversion approach are that the faulting surface is constrained to follow the 3D subducting slab geometry and that the smoothness of the rupture is objectively determined. I used this approach for the recent Illapel event (2015, Mw = 8.3) and showed that a meaningful preliminary model can be obtained within 25 minutes of rupture onset. A refined solution can be obtained within 1 hour of the origin time, which is still useful for the management of the disaster. Finally, I have developed a novel linearized inversion method that allows slip uncertainties to be estimated during rapid finite fault inversion. This is an intrinsically complex problem, as positivity constraints are normally imposed on finite fault models to ensure well-behaved solutions. Uncertainties are typically unavailable for FFI results, but they can be crucial for meaningful interpretation of the slip models. To estimate them, I follow a probabilistic Bayesian framework but avoid the computationally demanding Bayesian sampling. Instead, by using a coordinate transformation, the posterior distribution is approximated and obtained by linearized inversion. This inversion scheme was tested employing both simulated and real W-phase data, showing that meaningful uncertainty estimates can be inferred. Comparison with Bayesian sampling is also performed, suggesting that the error of approximating the posterior is small. Including uncertainty estimates in early finite fault models will reduce the risk of working with misleading solutions. The rigour, objectivity and robustness of the inversion techniques devised in this thesis can be a valuable contribution to the FFI community.
Since I have utilized mostly open source software and a desktop computer to carry out this research, the tools I have developed can be easily used for early warning in most seismic observatories. I believe that, when facing such disastrous events, the methods developed here can be important to assist authorities with emergency response.
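
A sketch of the linearized-Bayesian idea described in the last project: positivity through a log parameterization, a Gauss-Newton search for the MAP slip, and a Gaussian (Laplace) approximation of the posterior. This mimics the concept rather than the thesis's exact coordinate transformation, and all sizes and values are invented.

# Positivity-preserving linearized Bayesian inversion via m = exp(xi).
import numpy as np

rng = np.random.default_rng(9)
n_obs, n_par = 120, 15
G = rng.normal(size=(n_obs, n_par))        # stand-in for W-phase kernels
m_true = rng.uniform(0.5, 2.0, n_par)      # positive "slip" to recover
sigma, tau = 0.1, 1.0                      # noise std and log-space prior std
d = G @ m_true + sigma * rng.normal(size=n_obs)

xi = np.zeros(n_par)                       # log-slip parameters, m = exp(xi) > 0
for _ in range(50):                        # Gauss-Newton iterations to the MAP
    m = np.exp(xi)
    J = G * m                              # chain rule: d(G exp(xi))/dxi
    H = J.T @ J / sigma**2 + np.eye(n_par) / tau**2
    g = J.T @ (G @ m - d) / sigma**2 + xi / tau**2
    xi -= np.linalg.solve(H, g)

m = np.exp(xi); J = G * m                  # Laplace (Gaussian) posterior at MAP
C_xi = np.linalg.inv(J.T @ J / sigma**2 + np.eye(n_par) / tau**2)
slip_std = m * np.sqrt(np.diag(C_xi))      # delta-method uncertainty on slip
print(np.round(m - m_true, 2), np.round(slip_std, 2))
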
15

Stripling, Hayes Franklin. "The Method of Manufactured Universes for Testing Uncertainty Quantification Methods." 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8986.

Abstract:
The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which "experimental" data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented for this research manufactures a particle-transport "universe," models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new "experiments" within the manufactured reality. To test further the responses of these UQ methods, we conduct exercises with "experimental" replicates, "measurement" error, and choices of physical inputs that reduce the accuracy of the diffusion model's approximation of our manufactured laws. Our first application of MMU was rich in areas for exploration and highly informative. In the case of the Gaussian process code, we found that the fundamental statistical formulation was not appropriate for our functional data, but that the code allows a knowledgeable user to vary parameters within this formulation to tailor its behavior for a specific problem. The Bayesian MARS formulation was a more natural emulator given our manufactured laws, and we used the MMU framework to develop further a calibration method and to characterize the diffusion model discrepancy. Overall, we conclude that an MMU exercise with a properly designed universe (that is, one that is an adequate representation of some real-world problem) will provide the modeler with an added understanding of the interaction between a given UQ method and his/her more complex problem of interest. The modeler can then apply this added understanding and make more informed predictive statements.
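
The MMU loop can be caricatured in a few lines: manufacture a reality, generate noisy "experiments", emulate an imperfect model with a Gaussian-process fit to its discrepancy, and score the stated uncertainties against the manufactured truth. Every law, kernel and noise level below is invented for illustration.

# Manufactured-universe exercise with a hand-rolled GP on model discrepancy.
import numpy as np

rng = np.random.default_rng(10)
truth = lambda x: np.sin(3 * x) + 0.3 * x          # manufactured reality
model = lambda x: np.sin(3 * x)                    # imperfect physics model

x_tr = np.linspace(0, 2, 15)
y_tr = truth(x_tr) + 0.05 * rng.normal(size=x_tr.size)   # "experimental" data

# GP regression on the discrepancy y - model(x), squared-exponential kernel
kern = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / 0.3) ** 2)
K = kern(x_tr, x_tr) + 0.05**2 * np.eye(x_tr.size)
x_te = np.linspace(0, 2, 200)
Ks = kern(x_te, x_tr)
alpha = np.linalg.solve(K, y_tr - model(x_tr))
mean = model(x_te) + Ks @ alpha                    # calibrated prediction
var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)

# MMU assessment: fraction of the truth inside the stated 2-sigma band
inside = np.abs(truth(x_te) - mean) < 2 * np.sqrt(np.clip(var, 1e-12, None))
print(f"coverage of 2-sigma band: {inside.mean():.2f}")
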
APA, Harvard, Vancouver, ISO, and other styles
16

Razafindrakoto, Hoby. "New Insights on the Uncertainties in Finite-Fault Earthquake Source Inversion." Diss., 2015. http://hdl.handle.net/10754/554294.

Full text
Abstract:
Earthquake source inversion is a non-linear problem that leads to non-unique solutions. The aim of this dissertation is to understand the uncertainty and reliability of earthquake source inversion, and to quantify variability in earthquake rupture models. The source inversion is performed using Bayesian inference. This technique augments optimization approaches through its ability to image the entire solution space consistent with the data and prior information. In this study, the uncertainty related to the choice of source-time function and crustal structure is investigated. Three predefined analytical source-time functions are analyzed: an isosceles triangle, and Yoffe functions with acceleration times of 0.1 and 0.3 s. The use of the isosceles triangle as a source-time function is found to bias the finite-fault source inversion results: it causes the rupture to propagate faster than it does with the Yoffe functions, and it generates an artificial linear correlation between parameters that does not exist for the Yoffe source-time functions. The effect of inadequate knowledge of Earth's crustal structure on earthquake rupture models is subsequently investigated. The results show that variability in the one-dimensional structure changes the resolution of the parameters, with a broadening of the posterior PDFs and shifts in their peak locations. These changes in the PDFs of kinematic parameters are associated with the blurring effect of using an incorrect Earth structure. As an application to a real earthquake, finite-fault source models for the 2009 L'Aquila earthquake are examined using one- and three-dimensional crustal structures. The one-dimensional structure is found to degrade the data fit; however, there is no significant effect on the rupture parameters aside from differences in the spatial extent of the slip, and stable features are maintained for both structures. In the last part of this work, a multidimensional scaling method is presented to compare and classify earthquake slip distributions, and a similarity scale to rank them is thus formulated. Dissimilarities among slip models (from various parameterizations) are computed using two different distance metrics, a normalized squared metric and a gray-scale metric. Multidimensional scaling is then used to visualize the differences among the models. The analyses are done for two case studies: one based on artificial scenarios with a known answer, and another based on published rupture models of the 2011 Tohoku earthquake.
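The comparison step in the last part lends itself to a short sketch: compute pairwise dissimilarities between slip distributions and embed them with multidimensional scaling so that nearby points correspond to similar models. The slip grids below are random stand-ins, and the "normalized squared" metric shown is one plausible normalization, not necessarily the exact one used in the dissertation.

    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(2)
    models = [rng.random((10, 20)) for _ in range(6)]   # stand-in slip grids

    def normalized_squared(a, b):
        # One plausible normalized squared distance between two slip grids.
        return np.sum((a - b) ** 2) / np.sum(a ** 2 + b ** 2)

    n = len(models)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = normalized_squared(models[i], models[j])

    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)
    print(coords)   # 2D map: nearby points correspond to similar slip models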
APA, Harvard, Vancouver, ISO, and other styles
17

Simpson, Janelle Maree. "Understanding interpretation limitations due to MT inversion variability: examples from the Mount Isa Province, Queensland, Australia." Thesis, 2019. http://hdl.handle.net/2440/122614.

Full text
Abstract:
Exploration undercover presents a significant challenge and relies heavily on the effective interpretation of geophysical data. Magnetotelluric (MT) surveying is an ideal method for characterising these covered terranes because it provides resolution from the shallow cover into the deep earth. Undercover terranes often lack constraining information, creating a significant impediment to translating geophysical features into geological interpretations. This thesis presents advances in understanding MT inversion uncertainty to produce better geological interpretations in data-poor areas. The project area is along strike from the major Pb-Zn-Ag deposits at Mount Isa and George Fisher, and includes the location of a proposed suture between the Mount Isa Province and the North Australian Craton. The structure has been interpreted from potential field data by previous workers but is not observable in outcrop. The prospective Proterozoic packages are concealed beneath 200-1200 m of Phanerozoic cover, and consequently exploration success in this area has been very poor. The project dataset contains 1600 audiomagnetotelluric (AMT; 10⁻⁴ to 10⁰ seconds) and broadband MT (BBMT; 10⁻² to 10³ seconds) sites, with approximate survey dimensions of 90 km north-south at a line spacing of 5 km, and 150 km east-west at an inter-site spacing of 2 km. The project area has scarce geological and geophysical information, and there is an inadequate understanding of the macro-scale geological structure. Three studies were undertaken with the aim of creating a new geological interpretation for the area, based on quantifying inversion variability and integrating information during interpretation. The first study presents a workflow to objectively assess the variability of models produced during 3D magnetotelluric inversion. The workflow uses a sequential inversion methodology to examine model variability while minimising the computational demand of 3D inversion. The results highlight the high degree of variability permissible in 3D MT inversion models and reinforce the clear impact inversion parameterisation has on the inversion models. Our method allows objective differentiation between well- and poorly-constrained features. The second study integrates the results of the 3D magnetotelluric inversion and variability analysis from the first study with deep crustal seismic and potential field data to refine our understanding of the southern Mount Isa Province. A new crustal-scale west-dipping feature is identified that is adjacent to a major change in crustal thickness and associated with a major change in crustal resistivity (which extends at least 400 km to the north). There is additionally a conductor located on or just above the interface, and significant changes in the potential field response at both upper-crustal and lower-crustal depths. The structure is spatially associated with a low-resistivity feature (interpreted to be due to fluid movement or alteration), extends into the shallow crust and represents a possible exploration target. The third study is focused on resolving the depth to basement and the morphology of the Neoproterozoic-Mesozoic cover basins in the project area. Resolving the depth to basement from MT data is inherently difficult because of the data's insensitivity to the top of a resistive package (such as crystalline basement rocks). We used a combination of 1D probabilistic inversion, 2D deterministic inversion and synthetic modelling of downhole resistivity data to produce the final interpretation. The interpretation includes the base of the Eromanga Basin, an intra-Georgina Basin low-resistivity layer and the depth to basement, all of which have associated error estimates. Understanding variability in geophysical inversion is integral to the construction of a well-supported geological interpretation. This is especially true for areas where constraining information is limited or absent. We demonstrate that an understanding of data resolution and model uncertainty enables the interpretation of new, worthwhile geological information from MT inversion even in data-poor greenfield terranes. Our new interpretation de-risks mineral exploration and provides new insights into crustal structures important for exploration targeting.
Thesis (Ph.D.) -- University of Adelaide, School of Physical Sciences, 2019
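The first study's idea of objectively separating well- from poorly-constrained features can be illustrated schematically: given an ensemble of resistivity models recovered under different inversion parameterisations, the per-cell spread in log-resistivity flags where the models agree. The ensemble, grid sizes and threshold below are arbitrary stand-ins, not the thesis workflow.

    import numpy as np

    rng = np.random.default_rng(3)
    # 8 stand-in inversion models on a 30 x 40 x 20 grid (resistivity, ohm-m),
    # with a per-cell spread drawn at random to mimic mixed constraint quality.
    cell_sigma = rng.uniform(0.05, 1.0, (30, 40, 20))
    ensemble = 10 ** rng.normal(2.0, cell_sigma, size=(8, 30, 40, 20))

    spread = np.log10(ensemble).std(axis=0)   # spread in decades, per cell
    well_constrained = spread < 0.25          # threshold is an arbitrary choice
    print(f"{well_constrained.mean():.0%} of cells vary by < 0.25 decades "
          "across the ensemble")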
APA, Harvard, Vancouver, ISO, and other styles
18

Griffiths, Shawn Curtis. "Issues related to site property variability and shear strength in site response analysis." Thesis, 2015. http://hdl.handle.net/2152/31370.

Full text
Abstract:
Nonlinear site response analyses are generally preferred over equivalent linear analyses for soft soil sites subjected to high-intensity input ground motions. However, both nonlinear and equivalent linear analyses often result in large induced shear strains (3-10%) at soft sites, and these large strains may generate unusual characteristics in the predicted surface ground motions. One source of the overestimated shear strains may be the unrealistically low shear strengths implied by commonly used modulus reduction curves. Therefore, modulus reduction and damping curves can be modified at shear strains greater than 0.1% to provide a more realistic soil model for site response. However, even after these modifications, nonlinear and equivalent linear site response analyses may still generate unusual surface acceleration time histories and Fourier amplitude spectra at soft soil sites subjected to high-intensity input ground motions. As part of this work, equivalent linear and nonlinear 1D site response analyses for the well-known Treasure Island site demonstrate the challenges associated with accurately modeling large shear strains, and the subsequent surface response, at soft soil sites. Accounting for the uncertainties associated with the shear wave velocity profile is an important part of a properly executed site response analysis. Surface wave data from Grenoble, France and Mirandola, Italy have been used to determine shear wave velocity (Vs) profiles from inversion of surface wave data. Furthermore, the Vs profiles from inversion have been used to determine boundary, median and statistically-based randomly generated profiles. The theoretical dispersion curves from the inversion analyses, as well as those from the boundary, median and randomly generated Vs profiles, are compared with the experimentally measured surface wave data. It is found that the median theoretical dispersion curve provides a satisfactory fit to the experimental data, but the boundary-type theoretical dispersion curves do not. The randomly generated profiles result in some theoretical dispersion curves that fit the experimental data and many that do not. Site response analyses revealed that greater variability in the response spectra and amplification factors was obtained from the randomly generated Vs profiles than from the inversion or boundary Vs profiles.
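The statistically-based random Vs profiles can be sketched with a simple layer-to-layer correlated log-normal perturbation of a median profile, in the spirit of commonly used randomization schemes; the correlation model and all numbers below are illustrative assumptions, not the thesis procedure.

    import numpy as np

    rng = np.random.default_rng(4)
    vs_median = np.array([150.0, 180.0, 240.0, 400.0, 760.0])   # m/s per layer
    sigma_ln, rho = 0.25, 0.7    # assumed log-std and interlayer correlation

    def random_profile():
        # AR(1)-correlated standard normals down the profile, then log-normal.
        eps = np.empty(len(vs_median))
        eps[0] = rng.normal()
        for i in range(1, len(eps)):
            eps[i] = rho * eps[i - 1] + np.sqrt(1 - rho**2) * rng.normal()
        return vs_median * np.exp(sigma_ln * eps)

    profiles = np.array([random_profile() for _ in range(100)])
    print(profiles.mean(axis=0).round())   # sits near the median profile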
APA, Harvard, Vancouver, ISO, and other styles
19

Chevalier, Clément. "Fast uncertainty reduction strategies relying on Gaussian process models." Phd thesis, 2013. http://tel.archives-ouvertes.fr/tel-00879082.

Full text
Abstract:
This thesis deals with sequential and batch-sequential evaluation strategies for real-valued functions under a limited evaluation budget, using Gaussian process models. Optimal stepwise uncertainty reduction (SUR) strategies are studied for two different problems, motivated by application cases in nuclear safety. First, we address the problem of identifying the excursion set above a threshold T of a real-valued function f. Second, we study the problem of identifying the set of "robust, controlled" configurations, i.e. the set of controlled inputs for which the function remains below T whatever the values of the uncontrolled inputs. New SUR strategies are presented. We also provide efficient procedures and formulas that allow these strategies to be used in concrete applications. The use of fast formulas for recomputing the posterior mean or covariance function of a Gaussian process (the "kriging update formulas") does not merely yield substantial computational savings; these formulas are also one of the key ingredients for obtaining closed-form expressions that make computationally expensive evaluation strategies usable in practice. A contribution to batch-sequential optimization using the Multi-points Expected Improvement is also presented.
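The kriging update formulas mentioned above admit a compact statement: conditioning on one additional observation updates the posterior mean and covariance in closed form, with no need to refactorize the full covariance matrix. A noiseless toy check on synthetic data (with a tiny jitter for numerical stability):

    import numpy as np

    # Squared-exponential kernel on scalar inputs.
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

    X = np.array([0.2, 0.9, 1.7, 3.3, 4.1, 4.8])
    y = np.sin(X)
    xg = np.linspace(0, 5, 50)                       # prediction grid

    K_inv = np.linalg.inv(k(X, X) + 1e-10 * np.eye(len(X)))
    m_n = k(xg, X) @ K_inv @ y                       # posterior mean
    c_n = k(xg, xg) - k(xg, X) @ K_inv @ k(X, xg)    # posterior covariance

    # Kriging update for one new observation (x_new, y_new):
    x_new = np.array([2.5]); y_new = np.sin(x_new)
    cov_gn = (k(xg, x_new) - k(xg, X) @ K_inv @ k(X, x_new)).ravel()
    var_nn = (k(x_new, x_new) - k(x_new, X) @ K_inv @ k(X, x_new)).item()
    m_at_new = (k(x_new, X) @ K_inv @ y).item()
    m_upd = m_n + cov_gn * (y_new.item() - m_at_new) / var_nn
    c_upd = c_n - np.outer(cov_gn, cov_gn) / var_nn

    # The update matches a full refit that includes the new point.
    X2, y2 = np.append(X, x_new), np.append(y, y_new)
    K2_inv = np.linalg.inv(k(X2, X2) + 1e-10 * np.eye(len(X2)))
    assert np.allclose(m_upd, k(xg, X2) @ K2_inv @ y2, atol=1e-6)
    assert np.allclose(c_upd, k(xg, xg) - k(xg, X2) @ K2_inv @ k(X2, xg),
                       atol=1e-6)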
APA, Harvard, Vancouver, ISO, and other styles
20

Martin, James Robert, Ph.D. "A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion." Thesis, 2015. http://hdl.handle.net/2152/31374.

Full text
Abstract:
Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, this task is computationally intractable using contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for the solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the "best-fit" parameters that match given observation data, as measured in a particular metric. In the statistical inverse problem, we go one step further: we return not only a point estimate of the best medium properties, but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data and provides a complete description of parameter uncertainty. In this dissertation, a computational framework for such problems is described that wraps around existing forward solvers for a given physical problem, provided they are appropriately equipped. A collection of tools, insights and numerical methods may then be applied to solve the problem and to interrogate the resulting posterior distribution, which describes our final state of knowledge. We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.
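For intuition, the linearized picture can be shown densely on a tiny synthetic problem: with a linear forward map, Gaussian noise and a Gaussian prior, the posterior is Gaussian in closed form, and the spectrum of the data-misfit Hessian reveals how few directions the data actually inform, which is the structure large-scale methods exploit. Nothing below is the dissertation's solver; it is a toy stand-in.

    import numpy as np

    rng = np.random.default_rng(6)
    n, m = 100, 30                              # parameters, observations
    G = rng.normal(size=(m, n)) / np.sqrt(n)    # stand-in linear forward map
    x_true = rng.normal(size=n)
    noise = 0.05
    d = G @ x_true + rng.normal(0, noise, m)

    # Gaussian posterior in closed form (identity prior covariance).
    H_misfit = G.T @ G / noise**2               # data-misfit Hessian, rank <= m
    C_post = np.linalg.inv(H_misfit + np.eye(n))
    x_map = C_post @ (G.T @ d / noise**2)

    # With an identity prior this is also the prior-preconditioned misfit
    # Hessian; its eigenvalues count the data-informed directions.
    evals = np.linalg.eigvalsh(H_misfit)[::-1]
    print(f"directions with eigenvalue > 1: {(evals > 1).sum()} of {n}")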
APA, Harvard, Vancouver, ISO, and other styles
