Dissertations / Theses on the topic 'General polynomial chaos expansion'


Consult the top 31 dissertations / theses for your research on the topic 'General polynomial chaos expansion.'


1

Szepietowska, Katarzyna. "POLYNOMIAL CHAOS EXPANSION IN BIO- AND STRUCTURAL MECHANICS." Thesis, Bourges, INSA Centre Val de Loire, 2018. http://www.theses.fr/2018ISAB0004/document.

Full text
Abstract:
This thesis presents a probabilistic approach to modelling the mechanics of materials and structures where the modelled performance is influenced by uncertainty in the input parameters. The work is interdisciplinary and the methods described are applied to medical and civil engineering problems. The motivation for this work was the necessity of mechanics-based approaches in the modelling and simulation of implants used in the repair of ventral hernias. Many uncertainties appear in the modelling of the implant-abdominal wall system. The probabilistic approach proposed in this thesis enables these uncertainties to be propagated to the output of the model and their respective influences to be investigated. The regression-based polynomial chaos expansion method is used here. However, the accuracy of such non-intrusive methods depends on the number and location of sampling points. Finding a universal method to achieve a good balance between accuracy and computational cost is still an open question, so different approaches are investigated in this thesis in order to choose an efficient method. Global sensitivity analysis is used to investigate the respective influences of input uncertainties on the variation of the outputs of different models. The uncertainties are propagated to the implant-abdominal wall models in order to draw some conclusions important for further research. Using the expertise acquired from biomechanical models, modelling of historic timber joints and simulations of their mechanical behaviour is undertaken. Such an investigation is important owing to the need for efficient planning of repairs and renovation of buildings of historical value.
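The regression-based PCE workflow the abstract describes (run the model at a limited set of design points, fit the expansion coefficients by least squares, then read statistics directly off the coefficients) can be sketched in a few lines. The model function, polynomial degree, and sample count below are illustrative assumptions, not taken from the thesis:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

def model(xi):
    # hypothetical scalar model with one standard-normal input (stand-in only)
    return np.exp(0.3 * xi) + 0.1 * xi**2

degree = 4
n_design = 50                          # number of model runs (design points)
xi = rng.standard_normal(n_design)
y = model(xi)

# least-squares regression on probabilists' Hermite polynomials He_0..He_4
Psi = hermevander(xi, degree)          # design matrix, shape (n_design, degree+1)
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# statistics read off the coefficients, using E[He_n^2] = n!
norms = np.array([math.factorial(n) for n in range(degree + 1)], dtype=float)
mean = coef[0]                         # estimate of E[y]
var = np.sum(coef[1:]**2 * norms[1:])  # estimate of Var[y]
```

The accuracy-versus-cost trade-off discussed in the abstract corresponds to the choice of `n_design` and the placement of the points `xi`.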
APA, Harvard, Vancouver, ISO, and other styles
2

Nydestedt, Robin. "Application of Polynomial Chaos Expansion for Climate Economy Assessment." Thesis, KTH, Optimeringslära och systemteori, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-223985.

Full text
Abstract:
In climate economics, integrated assessment models (IAMs) are used to predict economic impacts resulting from climate change. These IAMs attempt to model complex interactions between human and geophysical systems to provide quantifications of economic impact, typically using the Social Cost of Carbon (SCC), which represents the economic cost of a one-ton increase in carbon dioxide. A further difficulty in modeling a climate-economics system is that both the geophysical and economic submodules are inherently stochastic. Even in frequently cited IAMs, such as DICE and PAGE, there is considerable variation in the predictions of the SCC. These differences stem both from the climate and economic modules used and from the choice of probability distributions for the random variables. Since IAMs often take the form of optimization problems, these nondeterministic elements potentially result in heavy computational costs. In this thesis a new IAM, FAIR/DICE, is introduced. FAIR/DICE is a discrete-time hybrid of DICE and FAIR, providing a potential improvement to DICE since the climate and carbon modules in FAIR take into account feedback from the climate module to the carbon module. Additionally, uncertainty propagation in FAIR/DICE is analyzed using Polynomial Chaos Expansions (PCEs), an alternative to Monte Carlo sampling in which the stochastic variables are projected onto stochastic polynomial spaces. PCEs provide better computational efficiency than Monte Carlo sampling at the expense of storage requirements, as many computations can be stored from the first simulation of the system; conveniently, statistics can be computed from the PCE coefficients without the need for sampling. A PCE overloading of FAIR/DICE is investigated where the equilibrium climate sensitivity, modeled as a four-parameter Beta distribution, introduces an uncertainty to the dynamical system.
Finally, results in the mean and variance obtained from the PCEs are compared to a Monte Carlo reference and avenues into future work are suggested.
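The central claim here (statistics computed from the PCE coefficients without sampling, then checked against a Monte Carlo reference) can be illustrated with a one-dimensional sketch. The exponential toy model and uniform input are assumptions for illustration, not the FAIR/DICE system or its Beta-distributed climate sensitivity:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

# toy model with one uniform input on [-1, 1] (illustrative stand-in)
f = np.exp

deg = 6
x, w = leggauss(12)                    # Gauss-Legendre nodes/weights
P = legvander(x, deg)                  # P[i, n] = P_n(x_i)

# spectral projection: c_n = E[f P_n] / E[P_n^2], with E[P_n^2] = 1/(2n+1)
n = np.arange(deg + 1)
c = (2*n + 1) / 2.0 * (P.T @ (w * f(x)))

mean = c[0]                            # statistics straight from the coefficients
var = np.sum(c[1:]**2 / (2*n[1:] + 1))

# Monte Carlo reference for comparison
rng = np.random.default_rng(0)
mc = f(rng.uniform(-1, 1, 100_000))
```

Here the PCE gives the mean and variance from seven coefficients, while the Monte Carlo estimate needs many model evaluations to reach comparable accuracy.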
3

Koehring, Andrew. "The application of polynomial response surface and polynomial chaos expansion metamodels within an augmented reality conceptual design environment." [Ames, Iowa : Iowa State University], 2008.

Find full text
4

Price, Darryl Brian. "Estimation of Uncertain Vehicle Center of Gravity using Polynomial Chaos Expansions." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/33625.

Full text
Abstract:
The main goal of this study is the use of polynomial chaos expansion (PCE) to analyze the uncertainty in calculating the lateral and longitudinal center of gravity for a vehicle from static load cell measurements. A secondary goal is to use experimental testing as a source of uncertainty and as a method to confirm the results from the PCE simulation. While PCE has often been used as an alternative to Monte Carlo, PCE models have rarely been based on experimental data. The 8-post test rig at the Virginia Institute for Performance Engineering and Research facility at Virginia International Raceway is the experimental test bed used to implement the PCE model. Experimental tests are conducted to define the true distribution for the load measurement systems' uncertainty. A method that does not require a new uncertainty distribution experiment for multiple tests with different goals is presented. Moved mass tests confirm the uncertainty analysis using portable scales that provide accurate results. The polynomial chaos model used to find the uncertainty in the center of gravity calculation is derived. Karhunen-Loeve expansions, similar to Fourier series, are used to define the uncertainties to allow for the polynomial chaos expansion. PCE models are typically computed via the collocation method or the Galerkin method. The Galerkin method is chosen as the PCE method in order to formulate a more accurate analytical result. The derivation systematically increases from one uncertain load cell to all four uncertain load cells noting the differences and increased complexity as the uncertainty dimensions increase. For each derivation the PCE model is shown and the solution to the simulation is given. Results are presented comparing the polynomial chaos simulation to the Monte Carlo simulation and to the accurate scales. It is shown that the PCE simulations closely match the Monte Carlo simulations.
Master of Science
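A minimal sketch of the Galerkin flavour of PCE mentioned in the abstract, applied to a toy uncertain-coefficient equation (a + b*x) u(x) = 1 with one uniform input, rather than to the thesis's vehicle center-of-gravity model. The Galerkin projection turns the stochastic equation into one small deterministic linear system for the expansion coefficients:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

# toy stochastic equation (a0 + a1*x) u(x) = 1, x ~ U(-1, 1):
# an illustrative stand-in for an uncertain measurement, not the thesis model
a0, a1 = 2.0, 0.3
deg = 6

xq, wq = leggauss(deg + 4)          # quadrature for the Galerkin inner products
P = legvander(xq, deg)              # P[i, n] = P_n(x_i)

# inner product <f, g> = 0.5 * integral of f*g over [-1, 1] (uniform density)
k = a0 + a1 * xq
A = 0.5 * (P * (wq * k)[:, None]).T @ P     # A[m, n] = <k P_n, P_m>
b = 0.5 * P.T @ wq                          # b[m] = <1, P_m>
c = np.linalg.solve(A, b)                   # Galerkin PCE coefficients of u

mean = c[0]                                  # E[u]
n = np.arange(deg + 1)
var = np.sum(c[1:]**2 / (2*n[1:] + 1))       # Var[u]
```

Unlike collocation, the system above couples all coefficients at once, which is what makes the Galerkin route "intrusive" but analytically sharp.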
5

Song, Chen [Verfasser], and Vincent [Akademischer Betreuer] Heuveline. "Uncertainty Quantification for a Blood Pump Device with Generalized Polynomial Chaos Expansion / Chen Song ; Betreuer: Vincent Heuveline." Heidelberg : Universitätsbibliothek Heidelberg, 2018. http://d-nb.info/1177252406/34.

Full text
6

Langewisch, Dustin R. "Application of the polynomial chaos expansion to multiphase CFD : a study of rising bubbles and slug flow." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92097.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 157-167).
Part I of this thesis considers subcooled nucleate boiling on the microscale, focusing on the analysis of heat transfer near the Three-Phase (solid, liquid, and vapor) contact Line (TPL) region. A detailed derivation of one representative TPL model is presented. From this work, it was ultimately concluded that heat transfer in the vicinity of the TPL is rather unimportant in the overall quantification of nucleate boiling heat transfer; despite the extremely high heat fluxes that are attainable, it is limited to a very small region so the net heat transfer from this region is comparatively small. It was further concluded that many of the so-called microlayer heat transfer models appearing in the literature are actually models for TPL heat transfer; these models do not model the experimentally observed microlayer. This portion of the project was terminated early, however, in order to focus on the application of advanced computational uncertainty quantification methods to computational multiphase fluid dynamics (Part II). Part II discusses advanced uncertainty quantification (UQ) methods for long-running numerical models, namely computational multiphase fluid dynamics (CMFD) simulations. We consider the problem of how to efficiently propagate uncertainties in the model inputs (e.g., fluid properties, such as density, viscosity, etc.) through a computationally demanding model. The challenge is chiefly a matter of economics: the long run-time of these simulations limits the number of samples that one can reasonably obtain (i.e., the number of times the simulation can be run). Chapter 2 introduces the generalized Polynomial Chaos (gPC) expansion, which has shown promise for reducing the computational cost of performing UQ for a large class of problems, including heat transfer and single phase, incompressible flow simulations; example applications are demonstrated in Chapter 2.
One of the main objectives of this research was to ascertain whether this promise extends to the realm of CMFD applications, and this is the topic of Chapters 3 and 4. Chapter 3 covers the numerical simulation of a single bubble rising in a quiescent liquid bath. The pertinent quantities from these simulations are the terminal velocity of the bubble and the terminal bubble shape. The simulations were performed using the open-source Gerris flow solver. A handful of test cases were performed to validate the simulation results against available experimental data and numerical results from other authors; the results from Gerris were found to compare favorably. Following the validation, we considered two uncertainty quantification problems. In the first problem, the viscosity of the surrounding liquid is modeled as a uniform random variable and we quantify the resultant uncertainty in the bubble's terminal velocity. The second example is similar, except the bubble's size (diameter) is modeled as a log-normal random variable. In this case, the Hermite expansion is seen to converge almost immediately; a first-order Hermite expansion computed using 3 model evaluations is found to capture the terminal velocity distribution almost exactly. Both examples demonstrate that non-intrusive spectral projection (NISP) can be successfully used to efficiently propagate uncertainties through CMFD models. Finally, we describe a simple technique to implement a moving reference frame in Gerris. Chapter 4 presents an extensive study of the numerical simulation of capillary slug flow. We review existing correlations for the thickness of the liquid film surrounding a Taylor bubble and the pressure drop across the bubble. Bretherton's lubrication analysis, which yields analytical predictions for these quantities when inertial effects are negligible and Ca_B → 0, is considered in detail. In addition, a review is provided of film thickness correlations that are applicable for high Ca_B or when inertial effects are non-negligible.
An extensive computational study was undertaken with Gerris to simulate capillary slug flow under a variety of flow conditions; in total, more than two hundred simulations were carried out. The simulations were found to compare favorably with simulations performed previously by other authors using finite elements. The data from our simulations have been used to develop a new correlation for the film thickness and bubble velocity that is generally applicable. While similar in structure to existing film thickness correlations, the present correlation does not require the bubble velocity to be known a priori. We conclude with an application of the gPC expansion to quantify the uncertainty in the pressure drop in a channel in slug flow when the bubble size is described by a probability distribution. It is found that, although the gPC expansion fails to adequately quantify the uncertainty in field quantities (pressure and velocity) near the liquid-vapor interface, it is nevertheless capable of representing the uncertainty in other quantities (e.g., channel pressure drop) that do not depend sensitively on the precise location of the interface.
by Dustin R. Langewisch.
Ph. D.
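A sketch of the NISP approach with a Hermite basis and a log-normal input, in the spirit of the bubble-diameter example above; the power-law "velocity" model and the value of `sigma` are hypothetical stand-ins, not Gerris results:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermevander

# log-normal input D = exp(sigma * xi), xi ~ N(0, 1); illustrative response v(D)
sigma = 0.1

def velocity(D):
    # hypothetical power-law stand-in for the terminal-velocity model
    return np.sqrt(D)

deg = 3
xi, w = hermegauss(12)                 # Gauss-Hermite rule for weight exp(-x^2/2)
w = w / np.sqrt(2 * np.pi)             # normalize weights to sum to 1

v = velocity(np.exp(sigma * xi))       # "expensive" model evaluated at 12 nodes

He = hermevander(xi, deg)              # He[i, n] = He_n(xi_i)
norms = np.array([math.factorial(n) for n in range(deg + 1)], dtype=float)
c = (He.T @ (w * v)) / norms           # projection: c_n = E[v He_n] / E[He_n^2]

mean = c[0]
var = np.sum(c[1:]**2 * norms[1:])
```

For this smooth response the coefficients decay so fast that, as in the thesis's example, the low-order terms already carry essentially all of the variance.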
7

Yadav, Vaibhav. "Novel Computational Methods for Solving High-Dimensional Random Eigenvalue Problems." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4927.

Full text
Abstract:
The primary objective of this study is to develop new computational methods for solving a general random eigenvalue problem (REP) commonly encountered in modeling and simulation of high-dimensional, complex dynamic systems. Four major research directions, all anchored in polynomial dimensional decomposition (PDD), have been defined to meet the objective. They involve: (1) a rigorous comparison of accuracy, efficiency, and convergence properties of the polynomial chaos expansion (PCE) and PDD methods; (2) development of two novel multiplicative PDD methods for addressing multiplicative structures in REPs; (3) development of a new hybrid PDD method to account for the combined effects of the multiplicative and additive structures in REPs; and (4) development of adaptive and sparse algorithms in conjunction with the PDD methods. The major findings are as follows. First, a rigorous comparison of the PCE and PDD methods indicates that the infinite series from the two expansions are equivalent but their truncations endow contrasting dimensional structures, creating significant difference between the two approximations. When the cooperative effects of input variables on an eigenvalue attenuate rapidly or vanish altogether, the PDD approximation commits smaller error than does the PCE approximation for identical expansion orders. Numerical analyses reveal higher convergence rates and significantly higher efficiency of the PDD approximation than the PCE approximation. Second, two novel multiplicative PDD methods, factorized PDD and logarithmic PDD, were developed to exploit the hidden multiplicative structure of an REP, if it exists. Since a multiplicative PDD recycles the same component functions of the additive PDD, no additional cost is incurred. Numerical results show that indeed both the multiplicative PDD methods are capable of effectively utilizing the multiplicative structure of a random response.
Third, a new hybrid PDD method was constructed for uncertainty quantification of high-dimensional complex systems. The method is based on a linear combination of an additive and a multiplicative PDD approximation. Numerical results indicate that the univariate hybrid PDD method, which is slightly more expensive than the univariate additive or multiplicative PDD approximations, yields more accurate stochastic solutions than the latter two methods. Last, two novel adaptive-sparse PDD methods were developed that entail global sensitivity analysis for defining the relevant pruning criteria. Compared with past developments, the adaptive-sparse PDD methods do not require their truncation parameter(s) to be assigned a priori or arbitrarily. Numerical results reveal that an adaptive-sparse PDD method achieves a desired level of accuracy with considerably fewer coefficients compared with existing PDD approximations.
8

Mühlpfordt, Tillmann [Verfasser], and V. [Akademischer Betreuer] Hagenmeyer. "Uncertainty Quantification via Polynomial Chaos Expansion – Methods and Applications for Optimization of Power Systems / Tillmann Mühlpfordt ; Betreuer: V. Hagenmeyer." Karlsruhe : KIT-Bibliothek, 2020. http://d-nb.info/1203211872/34.

Full text
9

Scott, Karen Mary Louise. "Practical Analysis Tools for Structures Subjected to Flow-Induced and Non-Stationary Random Loads." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/38686.

Full text
Abstract:
There is a need to investigate and improve upon existing methods to predict response of sensors due to flow-induced vibrations in a pipe flow. The aim was to develop a tool which would enable an engineer to quickly evaluate the suitability of a particular design for a certain pipe flow application, without sacrificing fidelity. The primary methods, found in guides published by the American Society of Mechanical Engineers (ASME), of simple response prediction of sensors were found to be lacking in several key areas, which prompted development of the tool described herein. A particular limitation of the existing guidelines deals with complex stochastic stationary and non-stationary modeling and required much further study, therefore providing direction for the second portion of this body of work. A tool for response prediction of fluid-induced vibrations of sensors was developed which allowed for analysis of low aspect ratio sensors. Results from the tool were compared to experimental lift and drag data, recorded for a range of flow velocities. The model was found to perform well over the majority of the velocity range showing superiority in prediction of response as compared to ASME guidelines. The tool was then applied to a design problem given by an industrial partner, showing several of their designs to be inadequate for the proposed flow regime. This immediate identification of unsuitable designs no doubt saved significant time in the product development process. Work to investigate stochastic modeling in structural dynamics was undertaken to understand the reasons for the limitations found in fluid-structure interaction models. A particular weakness, non-stationary forcing, was found to be the most lacking in terms of use in the design stage of structures. A method was developed using the Karhunen Loeve expansion as its base to close the gap between prohibitively simple (stationary only) models and those which require too much computation time. 
Models were developed from single-degree-of-freedom (SDOF) through continuous systems and shown to perform well at each stage. Further work is needed in this area to bring this work full circle such that the lessons learned can improve design-level turbulent response calculations.
Ph. D.
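The Karhunen-Loeve construction used as the base of the method above can be sketched on a discretized covariance matrix: eigen-decompose, truncate to the dominant modes, and synthesize realizations from a handful of independent Gaussians. The stationary exponential kernel below is illustrative only (the thesis targets non-stationary forcing, for which the same eigen-decomposition applies to the non-stationary covariance):

```python
import numpy as np

n = 200
t = np.linspace(0.0, 1.0, n)
ell = 0.2                                  # assumed correlation length
C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)   # exponential covariance

# discrete Karhunen-Loeve expansion: eigen-decomposition of the covariance
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]         # sort modes by decreasing variance

m = 20                                     # truncation order
energy = lam[:m].sum() / lam.sum()         # fraction of variance retained

# synthesize process realizations from m independent N(0,1) variables
rng = np.random.default_rng(1)
xi = rng.standard_normal((m, 5000))
samples = phi[:, :m] @ (np.sqrt(lam[:m])[:, None] * xi)   # shape (n, 5000)
```

The truncation order `m` is exactly the lever between the "prohibitively simple" and "too expensive" extremes the abstract describes.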
10

Segui Vasquez, Bartolomé. "Modélisation dynamique des systèmes disque aubes multi-étages : Effets des incertitudes." PhD thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00961270.

Full text
Abstract:
Recent turbomachinery designs have tended toward increasingly flexible inter-stage connections and low damping levels, giving rise to configurations in which the modes are likely to exhibit strong inter-stage coupling. In general, multi-stage bladed disk assemblies have no overall cyclic symmetry, so the analysis must be carried out on a model of the complete structure, leading to very expensive computations. To overcome this problem, a recent method called multi-stage cyclic symmetry can be used to reduce the computational cost for rotors composed of several stages, even when the stages have different numbers of sectors. This approach takes advantage of the cyclic symmetry inherent to each stage and uses a specific assumption that leads to decoupled sub-problems for each spatial Fourier order. The proposed methodology aims to study the effect of uncertainties on the dynamic behaviour of rotors using the multi-stage cyclic symmetry approach and polynomial chaos expansion. Uncertainties may arise from blade wear, temperature changes, or manufacturing tolerances. As a first approach, only uncertainties arising from uniform wear of all the blades are studied. These can be modelled by considering a global variation of the material properties of all the blades of a particular stage. The multi-stage cyclic symmetry approach can then be used because the assumption of identical sectors is respected. The positive definiteness of the random matrices involved is ensured by using a gamma law, well suited to the physics of the problem, which implies the choice of Laguerre polynomials as the polynomial chaos basis.
First, numerical examples representative of different types of turbomachinery are introduced in order to assess the robustness of the multi-stage cyclic symmetry method. Then, the random modal analysis and random response results obtained with polynomial chaos are validated by comparison with Monte Carlo simulations. In addition to the results classically encountered for frequencies and forced responses, the uncertainties considered reveal variations in the mode shapes, which evolve between different mode families in regions of high modal density. These variations lead to appreciable changes in the overall dynamics of the analysed structure and must be taken into account in robust design.
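The pairing of a gamma law with a Laguerre chaos basis can be sketched for the simplest case, a unit-rate exponential input (gamma with shape parameter 1), for which the standard Laguerre polynomials are orthonormal; the thesis's general gamma law would require generalized Laguerre polynomials instead, and the response function here is purely illustrative:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss, lagvander

# X ~ Exp(1): standard Laguerre polynomials L_n are orthonormal under exp(-x)
deg = 8
x, w = laggauss(32)               # Gauss-Laguerre rule, weight exp(-x) on [0, inf)
L = lagvander(x, deg)             # L[i, n] = L_n(x_i)

f = np.exp(-0.5 * x)              # illustrative response of the random input
c = L.T @ (w * f)                 # spectral projection: c_n = E[f(X) L_n(X)]

mean = c[0]                       # for this f the exact mean is 2/3
var = np.sum(c[1:]**2)            # for this f the exact variance is 1/18
```

Because the basis matches the input law, the positivity of the input is built into the expansion rather than bolted on afterwards, which is the point of the gamma/Laguerre choice.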
11

El Moçayd, Nabil. "La décomposition en polynôme du chaos pour l'amélioration de l'assimilation de données ensembliste en hydraulique fluviale." PhD thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/17862/1/El_Mocayd_Nabil.pdf.

Full text
Abstract:
This work concerns the construction of a reduced model for river hydraulics using a polynomial chaos expansion. The reduced model replaces the direct model in order to lower the computational cost of ensemble-based methods for uncertainty quantification and data assimilation. The context of the study is flood forecasting and water resource management. The manuscript consists of five parts, each divided into chapters. The first part presents a state of the art of uncertainty quantification and data assimilation in hydraulics, together with the objectives of the thesis. It introduces the flood forecasting framework, its challenges, and the tools available for predicting river dynamics. In particular, it presents the future SWOT mission, which aims to measure water levels in rivers with global coverage at high resolution, and discusses the contribution of these measurements and their complementarity with in-situ data. The second part presents the Saint-Venant equations, which describe river flows, and a numerical discretization of these equations as implemented in the Mascaret-1D software. The last chapter of this part proposes simplifications of the Saint-Venant equations. The third part presents methods for quantifying and reducing uncertainties, notably the probabilistic framework for uncertainty quantification and sensitivity analysis. It then proposes to reduce the dimension of a stochastic problem when dealing with random fields. Polynomial chaos expansion methods are then presented. This methodological part ends with a chapter devoted to ensemble data assimilation and the use of reduced models in that framework. The fourth part of the manuscript is dedicated to the results.
It begins by identifying the sources of uncertainty in hydraulics, which are then quantified and reduced. An article under review details the validation of a reduced model for the steady-state Saint-Venant equations when the uncertainty is mainly carried by the friction coefficients and the upstream discharge. It is shown that the statistical moments, the probability density function, and the spatial covariance matrix of the water level are efficiently and accurately estimated using the reduced model, whose construction requires only a few dozen integrations of the direct model. The reduced model is then used to lower the computational cost of the ensemble Kalman filter in a synthetic SWOT-like data assimilation exercise. Particular attention is paid to the spatial representation of the data as seen by SWOT: global coverage of the network and spatial averaging over the observed pixels. It is shown that, for a given computational budget, the data assimilation analysis based on the reduced model outperforms the classical filter. Finally, the construction of the reduced model in the unsteady regime is considered. The uncertainty is first assumed to lie in the friction coefficients, and the question is whether the polynomial coefficients need to be recomputed over time and over the data assimilation cycles; for this part only in-situ data were considered. The uncertainty is then assumed to be carried by the upstream discharge, which is a temporal vector. A Karhunen-Loève decomposition is performed to reduce the size of the uncertain space to the first three modes, making it possible to carry out a data assimilation exercise.
Finally, the conclusions and perspectives of this work are presented in the fifth part.
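The idea of using a cheap PCE surrogate inside an ensemble Kalman analysis step can be sketched as follows; the quadratic "forward model" is a hypothetical stand-in for Mascaret, and the stochastic (perturbed-observation) EnKF variant is assumed for illustration:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(3)

def forward(theta):
    # hypothetical water depth as a function of a friction parameter
    # scaled to [-1, 1]; a stand-in for the direct hydraulic model
    return 2.0 + 0.8 * theta + 0.3 * theta**2

# 1) build a cheap PCE surrogate from a few "expensive" model runs
deg = 3
design = np.linspace(-1, 1, 8)
coef, *_ = np.linalg.lstsq(legvander(design, deg), forward(design), rcond=None)
surrogate = lambda t: legvander(t, deg) @ coef

# 2) stochastic EnKF analysis step, evaluating the surrogate, not the model
theta_true = 0.4
obs = forward(theta_true) + rng.normal(0.0, 0.05)
R = 0.05**2                                    # observation-error variance

ens = rng.uniform(-1, 1, 200)                  # prior parameter ensemble
pred = surrogate(ens)                          # predicted observations (cheap)
K = np.cov(ens, pred)[0, 1] / (pred.var() + R) # Kalman gain from sample stats
ens_a = ens + K * (obs + rng.normal(0.0, 0.05, ens.size) - pred)
```

The expensive model is run only at the 8 design points; the 200 ensemble evaluations all go through the surrogate, which is the cost saving the thesis exploits.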
12

Ene, Simon. "Analys av osäkerheter vid hydraulisk modellering av torrfåror." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-448369.

Full text
Abstract:
Hydraulic modelling is an important tool when measures for dry river stretches are assessed. The modelling is, however, always affected by uncertainties, and if these are large the simulation results from the models may become unreliable. It can therefore be important to present simulation results together with their uncertainties. This study addresses various types of uncertainty that may affect the simulation results of hydraulic models. In addition, a sensitivity analysis is conducted in which a proportion of the uncertainty in the simulation results is attributed to each of the input variables included. The parameters included in the analysis are the terrain model resolution, the hydraulic model mesh resolution, the inflow to the model and Manning's roughness coefficient. The object studied in this thesis was a dry river stretch located downstream of Sandforsdammen in the river Skellefteälven, Sweden. The software TELEMAC-MASCARET was used to perform all hydraulic simulations for this thesis. To analyse the uncertainties related to the resolution of the terrain model and the mesh, a qualitative approach was used. Several simulations were run in which all parameters except those linked to resolution were fixed. The simulation results were illustrated through individual rasters, profiles, sections and rasters showing the differences between simulations. The results of the analysis showed that low resolution in terrain models and meshes can lead to local uncertainties where water velocities are higher and where there are large variations in the geometry. However, no significant effects could be discerned on a larger scale. Separately, quantitative uncertainty and sensitivity analyses were performed for the simulation results, water depth and water velocity, in the dry river stretch. The input parameters assumed to have the biggest impact were the inflow to the model and Manning's roughness coefficient.
The other model input parameters were fixed. Through scripts created in the programming language Python together with the library OpenTURNS, a large sample of possible combinations of inflow magnitude and Manning's roughness coefficient was created. All combinations were assumed to fully cover the uncertainty of the input parameters. After using the sample for simulation, the uncertainty of the simulation results could also be described. Uncertainty analyses were conducted both through classical calculation of statistical moments and through Polynomial Chaos Expansion. A sensitivity analysis was then conducted in which Polynomial Chaos Expansion was used to calculate Sobol's sensitivity indices for the inflow and Manning's M at each control point. The analysis showed that there were relatively large uncertainties for both the water depth and the water velocity. The inflow had the greatest impact on the uncertainties, while Manning's M was insignificant in comparison, apart from one area in the model where its impact increased.
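The regression-based PCE-plus-Sobol' workflow described in this abstract (there built on Python and OpenTURNS driving TELEMAC) can be sketched in plain Python for a toy two-input model. Everything below is an illustrative assumption, not the thesis' hydraulic model: a polynomial stand-in for the simulator with inputs rescaled to [-1, 1], an orthonormal Legendre basis fitted by least squares, and Sobol' indices read off the squared coefficients.

```python
import random

# Toy stand-in for the simulator: y = f(inflow, manning), inputs rescaled to [-1, 1].
def model(x1, x2):
    return 2.0 * x1 + 1.0 * x2 + 0.5 * x1 * x2

# Orthonormal Legendre polynomials w.r.t. Uniform(-1, 1): phi0 = 1, phi1(x) = sqrt(3) x.
def phi1(x):
    return 3 ** 0.5 * x

# Tensorized PCE basis: 1, phi1(x1), phi1(x2), phi1(x1) * phi1(x2).
basis = [
    lambda x1, x2: 1.0,
    lambda x1, x2: phi1(x1),
    lambda x1, x2: phi1(x2),
    lambda x1, x2: phi1(x1) * phi1(x2),
]

# Experimental design and least-squares regression via the normal equations.
random.seed(0)
xs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
ys = [model(*x) for x in xs]

n = len(basis)
A = [[sum(basis[i](*x) * basis[j](*x) for x in xs) for j in range(n)] for i in range(n)]
b = [sum(basis[i](*x) * y for x, y in zip(xs, ys)) for i in range(n)]

# Solve A coef = b by Gaussian elimination with partial pivoting.
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
coef = [0.0] * n
for r in range(n - 1, -1, -1):
    coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]

# With an orthonormal basis, Sobol' indices follow from the squared coefficients.
var = sum(c * c for c in coef[1:])
S1 = coef[1] ** 2 / var   # first-order index of input 1 (here: "inflow")
S2 = coef[2] ** 2 / var   # first-order index of input 2 (here: "Manning's number")
S12 = coef[3] ** 2 / var  # interaction index
```

Because the toy model lies exactly in the span of the basis, the regression recovers the coefficients exactly; for a real simulator the same recipe yields an approximation whose quality depends on the design size and truncation, which is precisely the trade-off the thesis delegates to OpenTURNS.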
APA, Harvard, Vancouver, ISO, and other styles
13

Fajraoui, Noura. "Analyse de sensibilité globale et polynômes de chaos pour l'estimation des paramètres : application aux transferts en milieu poreux." Phd thesis, Université de Strasbourg, 2014. http://tel.archives-ouvertes.fr/tel-01019528.

Full text
Abstract:
The management of contaminant transport in porous media is a growing concern and is of particular interest for pollution control in subsurface environments and for the management of groundwater resources, or more generally for the protection of the environment. Flow and pollutant transport phenomena are described by physical laws expressed as algebraic-differential equations that depend on a large number of input parameters. Most of these parameters are poorly known, often cannot be measured directly, and/or their measurement may be affected by uncertainty. This thesis deals with global sensitivity analysis and parameter estimation for flow and transport problems in porous media. To this end, polynomial chaos expansion is used to quantify the influence of the parameters on the outputs of the numerical models employed. This tool not only allows the computation of Sobol' sensitivity indices but also provides a surrogate model (or metamodel) that is much faster to evaluate. This latter feature is then exploited for the inversion of the models from observed data. For the inverse problem, we favour the Bayesian approach, which offers a rigorous framework for parameter estimation. In a second step, we developed an efficient strategy for building sparse polynomial chaos expansions, in which only the coefficients whose contribution to the model variance is significant are retained. This strategy gave very encouraging results for two reactive transport problems. The last part of this work is devoted to the inverse problem when the model inputs are spatially distributed Gaussian stochastic fields.
The particularity of such a problem is that it is ill-posed, since a stochastic field is defined by an infinite number of coefficients. The Karhunen-Loève decomposition makes it possible to reduce the dimension of the problem and also to regularize it. However, the inversion results obtained with this method are sensitive to the a priori choice of the covariance function of the field. A dimension-reduction algorithm based on a selection criterion (the Schwarz criterion) is proposed in order to make the problem less sensitive to this choice.
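The sparse-PCE idea described above, retaining only the coefficients whose contribution to the model variance is significant, can be illustrated on a plain coefficient table. The coefficient values, the multi-indices and the 1% threshold below are made-up illustrations, not the thesis' actual expansion.

```python
# Hypothetical PCE coefficients keyed by multi-index; (0, 0) is the mean term.
coeffs = {(0, 0): 1.2, (1, 0): 0.8, (0, 1): -0.5, (2, 0): 0.3,
          (1, 1): 0.05, (0, 2): 0.02, (2, 1): 0.004}

# With an orthonormal basis, each non-constant term contributes c^2 to the variance.
total_var = sum(c * c for idx, c in coeffs.items() if idx != (0, 0))
tol = 0.01  # keep a term only if it explains at least 1% of the total variance

sparse = {idx: c for idx, c in coeffs.items()
          if idx == (0, 0) or c * c / total_var >= tol}
retained_var = sum(c * c for idx, c in sparse.items() if idx != (0, 0))
```

Here the three dominant terms already capture about 99.7% of the variance, which is the mechanism that lets a sparse expansion stay cheap without losing accuracy.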
APA, Harvard, Vancouver, ISO, and other styles
14

Braun, Mathias. "Reduced Order Modelling and Uncertainty Propagation Applied to Water Distribution Networks." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0050/document.

Full text
Abstract:
Water distribution networks are large, spatially distributed infrastructures that ensure the distribution of drinking water in sufficient quantity and quality. Mathematical models of these systems are characterized by a large number of state variables and parameters, most of which are uncertain. Computation times can become substantial for large networks and for uncertainty propagation by Monte Carlo methods. Consequently, the two main objectives of this thesis are the study of projection-based reduced-order modelling techniques and the spectral propagation of parameter uncertainties. The thesis first gives an overview of the mathematical methods used. The steady-state equations of hydraulic networks are then presented, and a new method for computing sensitivities is derived based on the adjoint method. The specific objectives of the reduced-order model development are the application of projection-based methods, the development of more efficient adaptive sampling strategies, and the use of hyper-reduction methods for the fast evaluation of non-linear residual terms. For the propagation of uncertainties, spectral methods are introduced into the hydraulic model and an intrusive hydraulic model is formulated. With the aim of a more efficient analysis of parameter uncertainties, spectral propagation is then evaluated on the basis of the reduced model. The results show that projection-based reduced-order models offer a considerable advantage in terms of computational effort. While the use of adaptive sampling allowed a more efficient use of pre-computed system states, the use of hyper-reduction methods did not improve the computational burden.
The propagation of parameter uncertainties based on spectral methods is comparable to Monte Carlo simulations in accuracy, while considerably reducing the computational effort.
Water distribution systems are large, spatially distributed infrastructures that ensure the distribution of potable water of sufficient quantity and quality. Mathematical models of these systems are characterized by a large number of state variables and parameters. Two major challenges are the time constraints on the solution and the uncertain character of the model parameters. The main objectives of this thesis are thus the investigation of projection-based reduced order modelling techniques for the time-efficient solution of the hydraulic system, as well as the spectral propagation of parameter uncertainties for the improved quantification of uncertainties. The thesis gives an overview of the mathematical methods that are being used. This is followed by the definition and discussion of the hydraulic network model, for which a new method for the derivation of the sensitivities is presented based on the adjoint method. The specific objectives for the development of reduced order models are the application of projection-based methods, the development of more efficient adaptive sampling strategies and the use of hyper-reduction methods for the fast evaluation of non-linear residual terms. For the propagation of uncertainties, spectral methods are introduced into the hydraulic model and an intrusive hydraulic model is formulated. With the objective of a more efficient analysis of the parameter uncertainties, the spectral propagation is then evaluated on the basis of the reduced model. The results show that projection-based reduced order models give a considerable benefit with respect to the computational effort. While the use of adaptive sampling resulted in a more efficient use of pre-calculated system states, the use of hyper-reduction methods could not improve the computational burden and has to be explored further.
The propagation of the parameter uncertainties on the basis of the spectral methods is shown to be comparable in accuracy to Monte Carlo simulations, while significantly reducing the computational effort.
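A minimal illustration of why spectral propagation can match Monte Carlo accuracy at far lower cost: for the toy output Y = exp(ξ) with ξ standard normal (an assumption chosen because its orthonormal Hermite chaos coefficients are known in closed form, c_k = e^{1/2}/sqrt(k!)), the mean and variance come from a short coefficient sum instead of many model evaluations.

```python
import math
import random

# Y = exp(xi), xi ~ N(0, 1).  Orthonormal Hermite chaos coefficients are known
# analytically: c_k = exp(1/2) / sqrt(k!).  Mean = c_0, variance = sum_{k>=1} c_k^2.
P = 8  # truncation order of the spectral expansion
c = [math.exp(0.5) / math.sqrt(math.factorial(k)) for k in range(P + 1)]
mean_pce = c[0]
var_pce = sum(ck * ck for ck in c[1:])

# Monte Carlo reference using many more "model evaluations".
random.seed(1)
samples = [math.exp(random.gauss(0.0, 1.0)) for _ in range(200_000)]
mean_mc = sum(samples) / len(samples)
var_mc = sum((s - mean_mc) ** 2 for s in samples) / (len(samples) - 1)

exact_mean = math.exp(0.5)        # E[exp(xi)]
exact_var = math.e ** 2 - math.e  # Var[exp(xi)]
```

With only nine coefficients the spectral variance is accurate to about 1e-5, while the 200,000-sample Monte Carlo estimate still carries visible sampling noise; this is the cost/accuracy trade-off the abstract refers to, here on a scalar toy rather than the network model.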
APA, Harvard, Vancouver, ISO, and other styles
15

Mulani, Sameer B. "Uncertainty Quantification in Dynamic Problems With Large Uncertainties." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28617.

Full text
Abstract:
This dissertation investigates uncertainty quantification in dynamic problems. The Advanced Mean Value (AMV) method is used to calculate the probabilistic sound power and the sensitivity of elastically supported panels with small uncertainty (coefficient of variation). Sound power calculations are done using the Finite Element Method (FEM) and the Boundary Element Method (BEM). The sensitivities of the sound power are calculated through direct differentiation of the FEM/BEM/AMV equations. The results are compared with Monte Carlo simulation (MCS). An improved method is developed using AMV, a metamodel, and MCS. This new technique is applied to calculate the sound power of a composite panel using FEM and the Rayleigh integral. The proposed methodology shows considerable improvement both in terms of accuracy and computational efficiency. In systems with large uncertainties, the above approach does not work. Two Spectral Stochastic Finite Element Method (SSFEM) algorithms are developed to solve stochastic eigenvalue problems using polynomial chaos. Presently, the approaches are restricted to problems with real and distinct eigenvalues. In both approaches, the system uncertainties are modeled by Wiener-Askey orthogonal polynomial functions. Galerkin projection is applied in the probability space to minimize the weighted residual of the error of the governing equation. The first algorithm is based on the inverse iteration method. A modification is suggested to calculate higher eigenvalues and eigenvectors. This algorithm is applied to both discrete and continuous systems. In continuous systems, the uncertainties are modeled as Gaussian processes using the Karhunen-Loeve (KL) expansion. The second algorithm is based on the implicit polynomial iteration method. This algorithm is found to be more efficient when applied to discrete systems. However, the application of the algorithm to continuous systems results in ill-conditioned system matrices, which seriously limits its application.
Lastly, an algorithm to find the basis random variables of the KL expansion for non-Gaussian processes is developed. The basis random variables are obtained via a nonlinear transformation of the marginal cumulative distribution function using the standard deviation. Results are obtained for three known skewed distributions: Log-Normal, Beta, and Exponential. In all cases, the proposed algorithm matches the known solutions very well and can be applied to solve non-Gaussian processes using SSFEM.
Ph. D.
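The Karhunen-Loeve construction used above can be sketched for a discrete two-point "process" whose covariance matrix has a closed-form eigendecomposition; the covariance values are an illustrative assumption, not data from the dissertation.

```python
import math
import random

# Discrete KL expansion of a zero-mean Gaussian process observed at two points,
# with covariance C = [[1, rho], [rho, 1]].  Its eigenpairs are known in closed
# form: lambda = 1 +/- rho, with eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2).
rho = 0.6
lam = [1.0 + rho, 1.0 - rho]
vec = [(1 / math.sqrt(2), 1 / math.sqrt(2)), (1 / math.sqrt(2), -1 / math.sqrt(2))]

def kl_sample(rng):
    # Z = sum_i sqrt(lambda_i) * xi_i * v_i, with independent standard normals xi_i.
    xi = [rng.gauss(0.0, 1.0) for _ in lam]
    return tuple(sum(math.sqrt(lam[i]) * xi[i] * vec[i][k] for i in range(2))
                 for k in range(2))

rng = random.Random(42)
draws = [kl_sample(rng) for _ in range(100_000)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

z1 = [d[0] for d in draws]
z2 = [d[1] for d in draws]
# The empirical covariance of the KL samples should reproduce C.
```

For a continuous process the sums become truncated series and the eigenpairs come from a Fredholm eigenproblem, but the mechanism, uncorrelated standard normals weighted by the square roots of the eigenvalues, is exactly the one exercised here.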
APA, Harvard, Vancouver, ISO, and other styles
16

Svobodová, Miriam. "Dynamika soustav těles s neurčitostním modelem vzájemné vazby." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-418197.

Full text
Abstract:
This diploma thesis evaluates the impact of stiffness uncertainty on the tool deviation during a grooving process. Insufficient stiffness in individual parts of the machine gives rise to mechanical vibration during cutting, which may damage the surface of the workpiece, the tool, or the machine itself. The change of stiffness results from tool wear, the chosen cutting conditions, and many other factors. The first part gives a theoretical introduction to the field of uncertainty and selects suitable methods for the solution. The chosen methods are Monte Carlo simulation and polynomial chaos expansion, both implemented in MATLAB. Both methods are first tested on simple systems with uncertain stiffness inputs; these systems represent the stiffness characteristics of the individual support parts. After that, a model of turning during grooving with three degrees of freedom is defined. Uncertainty analyses, as well as sensitivity analyses with respect to the uncertain stiffness inputs, are then carried out with both methods. Finally, the two methods are compared in terms of computation time and accuracy. The gathered data make it clear that the change of stiffness has a significant impact on the vibration in all degrees of freedom of the analysed model. As an example, the maximum and minimum tool deviations over the range of workpiece stiffness were calculated with the Monte Carlo method. The stiffness of the ball screw was found to have the biggest impact on the resulting vibration of the tool. The solution was developed to support a more stable cutting process.
APA, Harvard, Vancouver, ISO, and other styles
17

Kouassi, Attibaud. "Propagation d'incertitudes en CEM. Application à l'analyse de fiabilité et de sensibilité de lignes de transmission et d'antennes." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC067/document.

Full text
Abstract:
Nowadays, most EMC analyses of electronic equipment and systems are based on quasi-deterministic approaches in which the internal and external parameters of the models are assumed to be perfectly known, and the uncertainties affecting them are accounted for in the responses through large safety margins. The drawback of such approaches is that they are not only overly conservative but also completely unsuited to certain situations, in particular when the objective of the study requires taking into account the random character of these parameters through appropriate stochastic models of the random-variable, random-process or random-field type. In recent years, this probabilistic approach has been the subject of a number of research efforts in EMC, both nationally and internationally. The work presented in this thesis is a contribution to this research and has a twofold objective: (1) to develop and implement a probabilistic methodology and its accompanying numerical tools for the reliability and sensitivity analysis of electronic equipment and systems, restricting the stochastic modelling to random variables; (2) to extend this study to stochastic modelling by random processes and random fields within the framework of a prospective analysis based on solving the telegrapher's partial differential equations with random coefficients. The probabilistic approach mentioned in point (1) consists in evaluating the failure probability of an electronic device or system with respect to a given failure criterion and in determining the relative importance of each of the random parameters involved. The various methods selected for this purpose are adaptations to EMC of methods developed in the field of stochastic mechanics for uncertainty propagation studies.
For the computation of failure probabilities, two broad categories of methods are proposed: those based on an approximation of the limit-state function associated with the failure criterion, and Monte Carlo methods based on the numerical simulation of the model's random variables and the statistical estimation of the target probabilities. For the sensitivity analysis, a local approach and a global approach are retained. These methods are first tested on academic applications in order to highlight their interest in the EMC field. They are then applied to transmission-line and antenna problems that are more representative of reality. In the prospective analysis, advanced resolution methods are proposed, based on spectral techniques requiring polynomial chaos and Karhunen-Loève expansions of the random processes and fields present in the models. These methods have undergone encouraging numerical tests, which are not presented in the thesis report for lack of time for their complete analysis.
Nowadays, most EMC analyses of electronic or electrical devices are based on deterministic approaches in which the internal and external model parameters are supposed to be known, and the uncertainties in the model parameters are taken into account on the outputs by defining very large safety margins. The disadvantage of such approaches is their conservative character and their limitation when dealing with the parameter uncertainties using appropriate stochastic modeling (via random variables, processes or fields) is required, in agreement with the goal of the study. In recent years, this probabilistic approach has been the subject of several research efforts in the EMC community. The work presented here is a contribution to this research and has a dual purpose: (1) develop a probabilistic methodology and implement the associated numerical tools for the reliability and sensitivity analysis of electronic devices and systems, assuming stochastic modeling via random variables; (2) extend this study to stochastic modeling using random processes and random fields through a prospective analysis based on the resolution of the telegrapher's equations (partial differential equations) with random coefficients. The first probabilistic approach consists in computing the failure probability of an electronic device or system according to a given criterion and in determining the relative importance of each considered random parameter. The methods chosen for this purpose are adaptations to the EMC framework of methods developed in the structural mechanics community for uncertainty propagation studies. The failure probability computation is performed using two types of methods: the ones based on an approximation of the limit-state function associated with the failure criterion, and the Monte Carlo methods based on the simulation of the model's random variables and the statistical estimation of the target failure probabilities.
For the sensitivity analysis, a local approach and a global approach are retained. All these methods are first applied to academic EMC problems in order to illustrate their interest in the EMC field. Next, they are applied to transmission line and antenna problems closer to reality. In the prospective analysis, more advanced resolution methods are proposed. They are based on spectral approaches requiring the polynomial chaos expansions and the Karhunen-Loève expansions of the random processes and random fields considered in the models. Although the first numerical tests of these methods have been promising, they are not presented here for lack of time for a complete analysis.
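The failure-probability side of objective (1) can be illustrated with a classic resistance-minus-load limit state, for which the exact answer is available as a reference; the distributions and margins below are illustrative assumptions, not values from the thesis.

```python
import random
from statistics import NormalDist

# Limit state g = R - S with independent resistance R ~ N(5, 1) and load S ~ N(2, 1);
# the failure event is {g < 0}.  Exact reference: Pf = Phi(-beta), beta = (5 - 2)/sqrt(2).
beta = (5.0 - 2.0) / (1.0 + 1.0) ** 0.5
pf_exact = NormalDist().cdf(-beta)

# Crude Monte Carlo estimation of the same failure probability.
rng = random.Random(7)
n = 200_000
fails = sum(1 for _ in range(n)
            if rng.gauss(5.0, 1.0) - rng.gauss(2.0, 1.0) < 0.0)
pf_mc = fails / n
```

The reliability index beta plays the role of the limit-state approximation discussed above: methods of the FORM family estimate Pf from beta alone, whereas Monte Carlo pays for its generality with a sample size that grows as the target probability shrinks.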
APA, Harvard, Vancouver, ISO, and other styles
18

Alhajj, Chehade Hicham. "Geosynthetic-Reinforced Retaining Walls-Deterministic And Probabilistic Approaches." Thesis, Université Grenoble Alpes, 2021. http://www.theses.fr/2021GRALI010.

Full text
Abstract:
The aim of this thesis is to develop, within the framework of soil mechanics, methods for analysing the internal stability of geosynthetic-reinforced retaining walls under seismic loading. The work first deals with deterministic analyses and is then extended to probabilistic ones. In the first part of this thesis, a deterministic model based on the kinematic theorem of limit analysis is proposed to evaluate the safety factor of a reinforced soil wall or the reinforcement strength required to stabilize the structure. A spatial discretization technique is used to generate a rotational failure surface, so that heterogeneous backfills can be considered and/or the seismic loading can be represented by a pseudo-dynamic approach. The cases of dry, unsaturated and saturated soils are studied. The presence of cracks in the soil is also taken into account. This deterministic model yields rigorous results and is validated by comparison with existing results from the literature. In the second part of the thesis, this deterministic model is used in a probabilistic framework. First, the random-variable approach is used. The uncertainties considered concern the soil shear strength parameters, the seismic loading and the reinforcement strength. Polynomial chaos expansion, which consists in replacing the expensive deterministic model with an analytical one, combined with the Monte Carlo simulation technique, is the reliability method used to carry out the probabilistic analysis. The random-variable approach neglects the spatial variability of the soil, since the soil properties and the other parameters modelled as random variables are considered constant in each deterministic simulation.
For this reason, in the last part of the manuscript, the spatial variability of the soil is considered using random field theory. The SIR/A-bSPCE method, a combination of the Sliced Inverse Regression (SIR) dimension-reduction technique and an adaptive sparse polynomial chaos expansion (A-bSPCE), is the reliability method used to carry out the probabilistic analysis. The total computation time of the probabilistic analysis performed with SIR-SPCE is considerably reduced compared with directly running classical probabilistic methods. Only the soil strength parameters are modelled with random fields, in order to focus on the effect of spatial variability on the reliability results.
The aim of this thesis is to assess the seismic internal stability of geosynthetic-reinforced soil retaining walls. The work first deals with deterministic analyses and then focuses on probabilistic ones. In the first part of this thesis, a deterministic model, based on the upper-bound theorem of limit analysis, is proposed for assessing the reinforced soil wall safety factor or the reinforcement strength required to stabilize the structure. A spatial discretization technique is used to generate the rotational failure surface and to give the possibility of considering heterogeneous backfills and/or representing the seismic loading by the pseudo-dynamic approach. The cases of dry, unsaturated and saturated soils are investigated. Additionally, the presence of cracks in the backfill soil is considered. This deterministic model gives rigorous results and is validated by comparison with existing results from the literature. Then, in the second part of the thesis, this deterministic model is used in a probabilistic framework. First, the uncertain input parameters are modeled using random variables. The considered uncertainties involve the soil shear strength parameters, the seismic loading and the reinforcement strength parameters. The sparse polynomial chaos expansion, which consists of replacing the computationally expensive deterministic model by a metamodel, combined with Monte Carlo simulations, is considered as the reliability method to carry out the probabilistic analysis. The random-variable approach neglects the soil spatial variability, since the soil properties and the other uncertain input parameters are considered constant in each deterministic simulation. Therefore, in the last part of the manuscript, the soil spatial variability is considered using random field theory.
The SIR/A-bSPCE method, a combination of the dimension-reduction technique Sliced Inverse Regression (SIR) and an active-learning sparse polynomial chaos expansion (A-bSPCE), is implemented to carry out the probabilistic analysis. The total computational time of the probabilistic analysis performed using SIR-SPCE is significantly reduced compared to directly running classical probabilistic methods. Only the soil strength parameters are modeled using random fields, in order to focus on the effect of the spatial variability on the reliability results.
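The SIR step of the SIR/A-bSPCE method can be sketched on a toy two-dimensional problem: slice the sorted response, average the standardized inputs within each slice, and take the leading eigenvector of the covariance of those slice means as the effective direction. The model y = x1 + x2 + noise, the sample size and the number of slices are illustrative assumptions; for this model the recovered direction should be close to (1, 1)/sqrt(2).

```python
import math
import random

# Sliced Inverse Regression (SIR) in 2D: recover the single effective direction
# of y = x1 + x2 + noise.  Toy stand-in for the SIR stage of SIR/A-bSPCE.
rng = random.Random(3)
n, H = 20_000, 10  # sample size and number of slices
data = []
for _ in range(n):
    x1, x2 = rng.gauss(0, 1), rng.gauss(0, 1)
    y = x1 + x2 + 0.1 * rng.gauss(0, 1)
    data.append((y, x1, x2))

# Sort by the response and split into H equal slices.
data.sort()
slices = [data[i * n // H:(i + 1) * n // H] for i in range(H)]

# Covariance of the slice means of x (inputs are standardized, so means are ~0).
means = [(sum(p[1] for p in s) / len(s), sum(p[2] for p in s) / len(s))
         for s in slices]
a = sum(m[0] * m[0] for m in means) / H
b = sum(m[0] * m[1] for m in means) / H
c = sum(m[1] * m[1] for m in means) / H

# Leading eigenvector of the symmetric 2x2 matrix [[a, b], [b, c]] (closed form).
theta = 0.5 * math.atan2(2 * b, a - c)
direction = (math.cos(theta), math.sin(theta))
```

In the full method, the surrogate (the adaptive sparse PCE) is then built in the reduced coordinates along such directions instead of the original high-dimensional random-field coefficients, which is where the reported savings in computational time come from.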
APA, Harvard, Vancouver, ISO, and other styles
19

OTTONELLO, ANDREA. "Application of Uncertainty Quantification techniques to CFD simulation of twin entry radial turbines." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1046507.

Full text
Abstract:
The main topic of the thesis is the application of uncertainty quantification (UQ) techniques to the numerical (CFD) simulation of twin entry radial turbines used in automotive turbocharging. The in-depth study of this type of turbomachinery is addressed in chapter 3, aimed at understanding the main parameters that characterize and influence the fluid dynamic performance of twin scroll turbines. Chapter 4 deals with a platform for UQ analysis developed in-house through the 'Dakota' open-source toolset. The platform was first tested on a case of industrial interest, namely a supersonic de Laval nozzle (chapter 5); the analysis highlighted the practical use of uncertainty quantification techniques in predicting the performance of a nozzle affected by off-design conditions, with fluid dynamic complexity due to strong non-linearity. The experience gained with the UQ approach facilitated the identification of suitable methods for applying uncertainty propagation to the CFD simulation of twin scroll radial turbines (chapter 6). In that case, several uncertainty quantification techniques were studied and put into practice in order to acquire in-depth experience of the current state of the art. The comparison of the results obtained from the different approaches and the discussion of the pros and cons of each technique led to interesting conclusions, which are proposed as guidelines for future applications of uncertainty quantification to the CFD simulation of radial turbines.
The integration of UQ models and methodologies, today used only by a few academic research centres, with well-established commercial CFD solvers made it possible to achieve the final goal of the doctoral thesis: to demonstrate to industry the high potential of UQ techniques in improving, through probability distributions, the prediction of the performance of a component subject to various sources of uncertainty. The purpose of the research activity is therefore to provide designers with performance data associated with margins of uncertainty that allow simulation and real application to be better correlated. Owing to confidentiality agreements, the geometric parameters of the twin entry turbine in question are given in dimensionless form, sensitive data on the axes of the graphs have been omitted, and the contour legends and any dimensional references have been removed from the figures.
The main topic of the thesis is the application of uncertainty quantification (UQ) techniques to the numerical simulation (CFD) of twin entry radial turbines used in automotive turbocharging. The detailed study of this type of turbomachinery is addressed in chapter 3, aimed at understanding the main parameters which characterize and influence the fluid dynamic performance of twin scroll turbines. Chapter 4 deals with the development of an in-house platform for UQ analysis through the 'Dakota' open-source toolset. The platform was first tested on a case of industrial interest, i.e. a supersonic de Laval nozzle (chapter 5); the analysis highlighted the practical use of uncertainty quantification techniques in predicting the performance of a nozzle affected by off-design conditions with fluid dynamic complexity due to strong non-linearity. The experience gained with the UQ approach facilitated the identification of suitable methods for applying uncertainty propagation to the CFD simulation of twin entry radial turbines (chapter 6). In this case, different uncertainty quantification techniques were investigated and put into practice in order to acquire in-depth experience of the current state of the art. The comparison of the results coming from the different approaches and the discussion of the pros and cons of each technique led to interesting conclusions, which are proposed as guidelines for future uncertainty quantification applications to the CFD simulation of radial turbines. The integration of UQ models and methodologies, today used only by some academic research centres, with well-established commercial CFD solvers made it possible to achieve the final goal of the doctoral thesis: to demonstrate to industry the high potential of UQ techniques in improving, through probability distributions, the prediction of the performance of a component subject to different sources of uncertainty.
The purpose of the research activity is therefore to provide designers with performance data associated with margins of uncertainty that allow simulation and real application to be better correlated. Due to confidentiality agreements, the geometrical parameters of the studied twin entry radial turbine are provided in dimensionless form, confidential data on the axes of graphs are omitted, and the legends of the contours as well as any dimensional references have been obscured.
APA, Harvard, Vancouver, ISO, and other styles
20

Rousseau, Marie. "Propagation d'incertitudes et analyse de sensibilité pour la modélisation de l'infiltration et de l'érosion." Phd thesis, Université Paris-Est, 2012. http://pastel.archives-ouvertes.fr/pastel-00788360.

Full text
Abstract:
We study the propagation and quantification of parametric uncertainties through hydrological models for the simulation of infiltration and erosion processes in the presence of rainfall and/or runoff. The uncertain parameters are described in a probabilistic framework as independent random variables with known probability density functions. This probabilistic modelling relies on a literature review used to identify the ranges of variation of the parameters. The statistical analysis is performed by Monte Carlo sampling and by polynomial chaos expansions. Our work aims to quantify the uncertainties on the main model outputs and to rank the influence of the input parameters on the variability of these outputs through a global sensitivity analysis. The first application concerns the effects of the variability and spatial distribution of the saturated hydraulic conductivity of the soil in the Green-Ampt infiltration model at various spatial and temporal scales. Our main conclusion concerns the importance of the soil saturation state. The second application deals with the Hairsine-Rose erosion model. One of the conclusions is that parametric interactions are of little significance in the rainfall detachment model but prove important in the runoff detachment model.
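The Monte Carlo propagation step for the Green-Ampt application can be sketched in a few lines: the saturated hydraulic conductivity is drawn from a probability distribution and pushed through the infiltration-capacity formula. All parameter values below are hypothetical placeholders for illustration, not the ones used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values (illustrative only).
psi_dtheta = 5.0   # wetting-front suction head x moisture deficit [cm]
F = 2.0            # cumulative infiltration reached so far [cm]

# Saturated hydraulic conductivity K_s treated as a lognormal random variable.
Ks = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=100_000)   # [cm/h]

# Green-Ampt infiltration capacity: f = K_s * (1 + psi * dtheta / F).
f = Ks * (1.0 + psi_dtheta / F)

# Monte Carlo estimates of the output statistics.
f_mean, f_std = f.mean(), f.std()
```

A polynomial chaos expansion would replace the brute-force sampling above with a polynomial surrogate in the random inputs, from which the same statistics follow analytically.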
APA, Harvard, Vancouver, ISO, and other styles
21

Kassir, Wafaa. "Approche probabiliste non gaussienne des charges statiques équivalentes des effets du vent en dynamique des structures à partir de mesures en soufflerie." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1116/document.

Full text
Abstract:
In order to estimate the equivalent static wind loads, which produce the extreme quasi-static and dynamical responses of structures submitted to the random unsteady pressure field induced by wind effects, a new probabilistic method is proposed. This method allows for computing the equivalent static wind loads for structures with complex aerodynamic flows such as stadium roofs, for which the pressure field is non-Gaussian, and for which the dynamical response of the structure cannot simply be described by using only the first elastic modes (but requires a good representation of the quasi-static responses). Usually, the wind tunnel measurements of the unsteady pressure field applied to a structure with complex geometry are not sufficient for constructing a statistically converged estimation of the extreme values of the dynamical responses. Such convergence is necessary for the estimation of the equivalent static loads in order to reproduce the extreme dynamical responses induced by the wind effects, taking into account the non-Gaussianity of the random unsteady pressure field.
In this work, (1) a generator of realizations of the non-Gaussian unsteady pressure field is constructed by using the realizations that are measured in the boundary layer wind tunnel; this generator, based on a polynomial chaos representation, allows for generating a large number of independent realizations in order to obtain the convergence of the extreme value statistics of the dynamical responses, (2) a reduced-order model with quasi-static acceleration terms is constructed, which allows for accelerating the convergence of the structural dynamical responses by using only a small number of elastic modes of the structure, (3) a novel probabilistic method is proposed for estimating the equivalent static wind loads induced by the wind effects on complex structures that are described by finite element models, preserving the non-Gaussian property and without introducing the concept of response envelopes. The proposed approach is experimentally validated with a relatively simple application and is then applied to a stadium roof structure for which experimental measurements of unsteady pressures have been performed in the boundary layer wind tunnel.
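The first ingredient, a polynomial-chaos generator of non-Gaussian realizations, can be illustrated in one dimension: a Gaussian germ is pushed through a truncated Hermite expansion, and arbitrarily many independent non-Gaussian samples follow at negligible cost. The chaos coefficients below are invented for illustration; in the thesis they would be identified from the wind-tunnel measurements.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(1)
xi = rng.standard_normal(200_000)        # Gaussian germ

# Hypothetical chaos coefficients; X = 1.0*He_1(xi) + 0.3*He_2(xi) is skewed.
coeffs = [0.0, 1.0, 0.3]
x = hermeval(xi, coeffs)                 # non-Gaussian realizations

# Orthogonality of the probabilists' Hermite polynomials gives
# E[X] = c_0 and Var[X] = sum_{k>=1} c_k^2 * k!  (= 1.18 here).
var_theory = 1.0**2 * 1 + 0.3**2 * 2
```

The He_2 term injects positive skewness, which is exactly the kind of departure from Gaussianity that a stadium-roof pressure field exhibits.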
APA, Harvard, Vancouver, ISO, and other styles
22

Lebon, Jérémy. "Towards multifidelity uncertainty quantification for multiobjective structural design." Phd thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-01002392.

Full text
Abstract:
This thesis aims at Multi-Objective Optimization under Uncertainty in structural design. We investigate Polynomial Chaos Expansion (PCE) surrogates, which require extensive training sets. We then face two issues: the high computational cost of an individual Finite Element simulation and its limited precision. From a numerical point of view, and in order to limit the computational expense of the PCE construction, we particularly focus on sparse PCE schemes. We also develop a custom Latin Hypercube Sampling scheme taking into account the finite precision of the simulation. From the modeling point of view, we propose a multifidelity approach involving a hierarchy of models ranging from full-scale simulations through reduced-order physics up to response surfaces. Finally, we investigate multiobjective optimization of structures under uncertainty. We extend the PCE model of the design objectives by taking into account the design variables. We illustrate our work with examples in sheet metal forming and the optimal design of truss structures.
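The custom Latin Hypercube scheme mentioned above is thesis-specific, but the plain LHS it builds on fits in a few lines: each dimension is split into equal-probability strata, one point is drawn per stratum, and the strata are shuffled independently per dimension.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """One random point in each of n_samples equal-probability strata per dimension."""
    # Stratified uniforms in [0, 1): stratum index + random offset within the stratum.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    # Shuffle each column independently to decorrelate the dimensions.
    for j in range(n_dims):
        rng.shuffle(u[:, j])
    return u

rng = np.random.default_rng(2)
samples = latin_hypercube(100, 3, rng)   # 100 points in [0, 1)^3
```

Compared with plain Monte Carlo, the stratification guarantees uniform one-dimensional coverage, which is why LHS is a common choice for building PCE training sets.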
APA, Harvard, Vancouver, ISO, and other styles
23

Bourgey, Florian. "Stochastic approximations for financial risk computations." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX052.

Full text
Abstract:
In this thesis, we investigate several stochastic approximation methods for both the computation of financial risk measures and the pricing of derivatives. As closed-form expressions are scarcely available for such quantities, the need for fast, efficient, and reliable analytic approximation formulas is of primal importance to financial institutions. We aim at giving a broad overview of such approximation methods and we focus on three distinct approaches. In the first part, we study some Multilevel Monte Carlo approximation methods and apply them to two practical problems: the estimation of quantities involving nested expectations (such as the initial margin), along with the discretization of integrals arising in rough forward variance models for the pricing of VIX derivatives. For both cases, we analyze the properties of the corresponding asymptotically optimal multilevel estimators and numerically demonstrate the superiority of multilevel methods compared to a standard Monte Carlo. In the second part, motivated by the numerous examples arising in credit risk modeling, we propose a general framework for meta-modeling large sums of weighted Bernoulli random variables which are conditionally independent given a common factor X. Our generic approach is based on a Polynomial Chaos Expansion of the common factor together with a Gaussian approximation. L2 error estimates are given when the factor X is associated with classical orthogonal polynomials. Finally, in the last part of this dissertation, we deal with small-time asymptotics and provide asymptotic expansions for both American implied volatility and American option prices in local volatility models. We also investigate a weak approximation for the VIX index in rough forward variance models, expressed in terms of log-normal proxies, and derive expansion results for VIX derivatives with explicit coefficients.
APA, Harvard, Vancouver, ISO, and other styles
24

Schiavazzi, Daniele. "Redundant Multiresolution Uncertainty Propagation." Doctoral thesis, Università degli studi di Padova, 2013. http://hdl.handle.net/11577/3422585.

Full text
Abstract:
Stochastic partial differential equations can be efficiently solved using collocation approaches combined with polynomial expansion in parameter space. Estimators based on these concepts show smaller variance than traditional or stratified Monte Carlo approaches under mild dimensionality. Research efforts in this context are focused on improving the efficiency of these methodologies for high dimensional problems (increasing number of input random variables) or for problems with discontinuous response in parameter space. In the present work, we use Compressive Sampling in order to minimize the number of deterministic computations needed to evaluate expansion coefficients for stochastic responses which are sparse in selected dictionaries of basis. Moreover, multiresolution approximation techniques are extended in the context of non-intrusive uncertainty propagation. Finally, an adaptive Importance Sampling strategy is used where samples are iteratively added to locations containing relevant features of increasingly smaller size. Applications are presented for analytical functions, stochastic differential equations, dynamical systems whose response is discontinuous or characterized by large gradients. Engineering problems involving robust optimization of windmill airfoils and passive damping of structures under uncertainty are also discussed. The last Chapter is devoted to methodologies aiming to restore element conservativeness for numerical and experimental velocity fields.
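The Compressive Sampling idea, recovering a coefficient vector that is sparse in a chosen dictionary from fewer model runs than basis functions, can be sketched with a greedy orthogonal matching pursuit on a one-dimensional Legendre dictionary. The dictionary size, sparsity pattern, and sample count below are illustrative choices, not those of the thesis.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(4)

# Ground truth: only 3 of 20 Legendre modes are active (the sparsity assumption).
true_c = np.zeros(20)
true_c[[1, 4, 9]] = [1.0, -0.5, 0.25]

# Fewer samples (15) than unknown coefficients (20): an underdetermined problem.
x = rng.uniform(-1, 1, 15)
A = legvander(x, 19)               # 15 x 20 measurement matrix
y = A @ true_c                     # "model evaluations"

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedily add the column most correlated
    with the current residual, then refit by least squares on the support."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    c = np.zeros(A.shape[1])
    c[support] = coef
    return c

recovered = omp(A, y, 3)
```

Convex L1 minimization (basis pursuit) is the other standard route to the same sparse recovery; OMP is shown here only because it is self-contained.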
APA, Harvard, Vancouver, ISO, and other styles
25

Riahi, Hassen. "Analyse de structures à dimension stochastique élevée : application aux toitures bois sous sollicitation sismique." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00881187.

Full text
Abstract:
The problem of high stochastic dimension is recurrent in probabilistic structural analyses. It corresponds to the exponential growth of the number of evaluations of the mechanical model as the number of uncertain parameters increases. To overcome this difficulty, we propose in this thesis a two-step approach. The first step determines the effective stochastic dimension by ranking the uncertain parameters using screening methods. Once the parameters with a dominant influence on the variability of the model response have been identified, they are modelled as random variables, while the remaining parameters are fixed at their respective mean values in the stochastic computation itself. This computation is the second step of the proposed approach, in which the dimensional decomposition method is used to characterise the randomness of the model response by estimating statistical moments and constructing the probability density function. This approach saves up to 90% of the computing time required by classical stochastic computation methods. It is then used to assess the integrity of the timber-framed roof of a single-family house located on a site with high seismic hazard. In this context, the analysis of the structural behaviour is based on a finite element model in which the timber joints are modelled by an anisotropic constitutive law with hysteresis, and the seismic action is represented by eight natural accelerograms provided by the BRGM. These accelerograms represent different soil types according to the Eurocode 8 classification. Failure of the roof is defined as the damage recorded in the joints located on the bracing and anti-buckling elements reaching a critical level set using test results.
Deterministic analyses of the finite element model showed that the roof withstands the seismic hazard of the town of Le Moule in Guadeloupe. The probabilistic analyses showed that, among the 134 random variables representing the randomness in the nonlinear behaviour of the joints, only 15 effectively contribute to the variability of the mechanical response, which made it possible to reduce the stochastic dimension in the computation of the statistical moments. Based on the estimates of the mean and standard deviation, we showed that the variability of the damage in the joints located on the bracing elements is larger than that in the joints located on the anti-buckling elements. Moreover, it is more significant for the signals that are most harmful to the structure.
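The screening step, ranking inputs before the stochastic computation so that only the influential ones are kept random, is commonly done with Morris-style elementary effects. A minimal sketch with a hypothetical 6-input model (not the 134-variable roof model of the thesis):

```python
import numpy as np

rng = np.random.default_rng(8)

def model(x):
    """Hypothetical response: only the first 2 of 6 inputs really matter."""
    return 5.0 * x[0] + 3.0 * x[1]**2 + 0.01 * x[2:].sum()

def elementary_effects(model, n_dims, n_traj, delta=0.1):
    """Mean absolute one-at-a-time elementary effect per input (mu-star)."""
    effects = np.zeros((n_traj, n_dims))
    for r in range(n_traj):
        x = rng.uniform(0, 1 - delta, n_dims)   # random base point
        f0 = model(x)
        for i in range(n_dims):
            xp = x.copy()
            xp[i] += delta                       # perturb one input at a time
            effects[r, i] = (model(xp) - f0) / delta
    return np.abs(effects).mean(axis=0)

mu_star = elementary_effects(model, 6, 50)
# Inputs 0 and 1 dominate; inputs 2..5 can be frozen at their means.
```

Inputs with small mu-star are then fixed at their mean values, exactly the dimension-reduction move described in the abstract.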
APA, Harvard, Vancouver, ISO, and other styles
26

Sevieri, Giacomo. "The seismic assessment of existing concrete gravity dams: FE model uncertainty quantification and reduction." Doctoral thesis, 2019. http://hdl.handle.net/2158/1171930.

Full text
Abstract:
The implementation of resilience-enhancing strategies on existing concrete gravity dams is a task of primary importance for society. This aim can be achieved by estimating the risk of concrete dams against multi-hazards and by improving structural control. Focusing attention only on the seismic hazard, numerical models assume great importance due to the lack of case studies. However, for the same reason, numerical models are characterised by a high level of uncertainty, which must be reduced by exploiting all available information. In this way reliable predictive models of the structural behaviour can be built, thus improving the seismic fragility estimation and the dam control. In this context, the observations recorded by monitoring systems are a powerful source of information. In this thesis two Bayesian frameworks for Structural Health Monitoring (SHM) of existing concrete gravity dams are proposed. On the one hand, the first proposed framework is defined for static SHM, so the dam displacements are considered as the Quantity of Interest (QI). On the other hand, a dynamic SHM framework is defined by assuming the modal characteristics of the system as the QI. In this second case an innovative numerical algorithm is proposed to solve the well-known mode-matching problem without using the concept of system mode shapes or objective functions. Finally, a procedure based on Optimal Bayesian Experimental Design is proposed in order to design the device layout by optimizing the probability of damage detection. In all three procedures the general Polynomial Chaos Expansion (gPCE) is widely used in order to strongly reduce the computational burden, thus making the proposed procedures applicable even without High Performance Computing (HPC). Two real large concrete gravity dams are analysed in order to show the effectiveness of the proposed procedures in the real world.
In the first part of the thesis an extended literature review on the fragility assessment of concrete gravity dams and the application of SHM is presented. Afterwards, the statistical tools used for the definition of the proposed procedures are introduced. Finally, before the presentation of SHM frameworks, the main sources of uncertainties in the numerical analysis of concrete gravity dams are discussed in order to quantify their effects on the model outputs.
APA, Harvard, Vancouver, ISO, and other styles
27

Winokur, Justin Gregory. "Adaptive Sparse Grid Approaches to Polynomial Chaos Expansions for Uncertainty Quantification." Diss., 2015. http://hdl.handle.net/10161/9845.

Full text
Abstract:

Polynomial chaos expansions provide an efficient and robust framework to analyze and quantify uncertainty in computational models. This dissertation explores the use of adaptive sparse grids to reduce the computational cost of determining a polynomial model surrogate while examining and implementing new adaptive techniques.

Determination of chaos coefficients using traditional tensor product quadrature suffers the so-called curse of dimensionality, where the number of model evaluations scales exponentially with dimension. Previous work used a sparse Smolyak quadrature to temper this dimensional scaling, and was applied successfully to an expensive Ocean General Circulation Model, HYCOM during the September 2004 passing of Hurricane Ivan through the Gulf of Mexico. Results from this investigation suggested that adaptivity could yield great gains in efficiency. However, efforts at adaptivity are hampered by quadrature accuracy requirements.

We explore the implementation of a novel adaptive strategy to design sparse ensembles of oceanic simulations suitable for constructing polynomial chaos surrogates. We use a recently developed adaptive pseudo-spectral projection (aPSP) algorithm that is based on a direct application of Smolyak's sparse grid formula, and that allows for the use of arbitrary admissible sparse grids. Such a construction ameliorates the severe restrictions posed by insufficient quadrature accuracy. The adaptive algorithm is tested using an existing simulation database of the HYCOM model during Hurricane Ivan. The a priori tests demonstrate that sparse and adaptive pseudo-spectral constructions lead to substantial savings over isotropic sparse sampling.

In order to provide a finer degree of resolution control along two distinct subsets of model parameters, we investigate two methods to build polynomial approximations. Both approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids. The control of the error along different subsets of parameters may be needed in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid pseudo-spectral projection is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of aPSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, adaptive PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve similar projection error.

In order to increase efficiency even further, a subsampling technique is developed to allow for local adaptivity within the aPSP algorithm. The local refinement is achieved by exploiting the hierarchical nature of nested quadrature grids to determine regions of estimated convergence. In order to achieve global representations with local refinement, synthesized model data from a lower order projection is used for the final projection. The final subsampled grid was also tested with two more robust, sparse projection techniques including compressed sensing and hybrid least-angle-regression. These methods are evaluated on two sample test functions and then as an a priori analysis of the HYCOM simulations and the shock-tube ignition model investigated earlier. Small but non-trivial efficiency gains were found in some cases and in others, a large reduction in model evaluations with only a small loss of model fidelity was realized. Further extensions and capabilities are recommended for future investigations.
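The building block that Smolyak's formula extends to sparse multi-dimensional grids is the one-dimensional pseudo-spectral projection: chaos coefficients computed by Gaussian quadrature. For the test function f(ξ) = exp(ξ), the Hermite coefficients are known in closed form, which makes the projection easy to verify.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Probabilists' Gauss-Hermite rule; the weights integrate against exp(-x^2/2).
nodes, weights = hermegauss(20)
weights = weights / np.sqrt(2.0 * np.pi)     # normalize to the N(0,1) density

def pc_coeff(f, k):
    """Pseudo-spectral projection c_k = E[f(xi) He_k(xi)] / k! by quadrature."""
    basis = np.zeros(k + 1)
    basis[k] = 1.0                            # coefficient vector selecting He_k
    return (weights * f(nodes) * hermeval(nodes, basis)).sum() / math.factorial(k)

coeffs = np.array([pc_coeff(np.exp, k) for k in range(6)])
# Closed form: exp(xi) = e^{1/2} * sum_k He_k(xi) / k!, hence c_k = e^{1/2} / k!.
exact = np.array([np.exp(0.5) / math.factorial(k) for k in range(6)])
```

In d dimensions, tensorizing this rule costs n^d model evaluations; the Smolyak construction replaces the full tensor grid with a telescoping sum of small tensor grids, which is the object the aPSP algorithm adapts.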


Dissertation
APA, Harvard, Vancouver, ISO, and other styles
28

Lin, Yu-Tuan, and 林玉端. "Implementations of Tailored Finite Point Method and Polynomial Chaos Expansion for Solving Problems Related to Fluid Dynamics, Image Processing and Finance." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/40488536171794178165.

Full text
Abstract:
PhD dissertation
National Chung Hsing University
Department of Applied Mathematics
2015 (academic year 104, ROC calendar)
In this dissertation, we study the tailored finite point method (TFPM) and polynomial chaos expansion (PCE) schemes for solving partial differential equations (PDEs). These PDEs are related to fluid dynamics, image processing and finance problems. In the first part, we focus on quasilinear time-dependent Burgers' equations with small viscosity coefficients. The basis functions selected for the TFPM automatically fit the properties of the local solution in time and space simultaneously. We apply the Hopf-Cole transformation to derive the first scheme, TFPM-I. For the second scheme, we approximate the solution using local exact solutions and consider iterated processes to obtain numerical solutions to the original form of the Burgers' equation. The TFPM-II is particularly suitable for solutions with steep gradients or discontinuities. More importantly, the TFPM yields numerical solutions to Burgers' equations with reasonable accuracy even on relatively coarse meshes. In the second part, we employ the TFPM in an anisotropic convection-diffusion (ACD) filter for image denoising. A quadtree structure is implemented in order to allow multi-level storage during the denoising and compression process. The ACD filter exhibits the potential to obtain a more accurate approximate solution to the PDEs. In the third part, we apply the TFPM to the Black-Scholes equation for European option pricing. We compare the performance of our algorithm with other popular numerical schemes; in the numerical experiments the TFPM is more efficient and accurate than other well-known methods. In the last part, we present the polynomial chaos expansion (PCE) for stochastic PDEs. We provide a review of the theory of the generalized polynomial chaos expansion (gPCE) and the arbitrary polynomial chaos expansion (aPCE), including case analyses of test problems.
We demonstrate the accuracy of the gPCE and aPCE for the Black-Scholes model with log-normal random volatilities. Furthermore, we employ the aPCE scheme for arbitrary distributions of uncertain volatilities with short-term price data. This is at the forefront of adopting the polynomial chaos expansion for the randomness of volatilities in financial mathematics.
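A toy version of the random-volatility setting can be set up by averaging the Black-Scholes price over a log-normal volatility by plain Monte Carlo; a gPCE would replace the sampling with a polynomial surrogate in the underlying Gaussian germ. Strike, rate, and volatility hyper-parameters below are invented for illustration.

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

rng = np.random.default_rng(5)
# Log-normal random volatility (hypothetical hyper-parameters).
sigma_samples = rng.lognormal(mean=log(0.2), sigma=0.2, size=50_000)
prices = np.array([bs_call(100.0, 100.0, 0.05, 1.0, s) for s in sigma_samples])
price_mean, price_std = prices.mean(), prices.std()
```

Because the log-normal law has Hermite polynomials as its natural orthogonal basis after a log transform, this is exactly the situation where a gPCE in a Gaussian germ converges quickly.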
APA, Harvard, Vancouver, ISO, and other styles
29

(5930765), Pratik Kiranrao Naik. "History matching of surfactant-polymer flooding." Thesis, 2019.

Find full text
Abstract:
This thesis presents a framework for history matching and model calibration of surfactant-polymer (SP) flooding. At first, a high-fidelity mechanistic SP flood model is constructed by performing extensive lab-scale experiments on Berea cores. Then, incorporating Sobol-based sensitivity analysis, polynomial chaos expansion based surrogate modelling (PCE-proxy) and genetic-algorithm-based inverse optimization, an optimized model parameter set is determined by minimizing the misfit between the PCE-proxy response and experimental observations for quantities of interest such as cumulative oil recovery and the pressure profile. The epistemic uncertainty in the PCE-proxy is quantified using a Gaussian process regression technique called Kriging. The framework is then extended to Bayesian calibration, where the posterior of the model parameters is inferred by directly sampling from it using Markov chain Monte Carlo (MCMC). Finally, a stochastic multi-objective optimization problem is posed under uncertainties in the model parameters and the oil price, which is solved using a variant of a Bayesian global optimization routine.
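The Sobol-plus-PCE pipeline can be miniaturized: fit a total-degree-2 Legendre PCE to a toy model by least squares, then read the Sobol indices directly off the squared coefficients. The toy model below stands in for the SP-flood simulator; everything here is illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import legvander2d
from itertools import product

rng = np.random.default_rng(6)

def model(x1, x2):
    """Hypothetical cheap model standing in for the SP-flood simulator."""
    return 2.0 * x1 + x2**2

# Training design: random points in [-1, 1]^2 (inputs assumed uniform).
X = rng.uniform(-1, 1, (200, 2))
y = model(X[:, 0], X[:, 1])

# Total-degree-2 Legendre PCE fitted by least squares (a 'PCE-proxy').
V = legvander2d(X[:, 0], X[:, 1], [2, 2])      # column (i, j) holds L_i(x1)*L_j(x2)
degs = list(product(range(3), repeat=2))        # matches legvander2d column order
keep = [k for k, (i, j) in enumerate(degs) if i + j <= 2]
coef, *_ = np.linalg.lstsq(V[:, keep], y, rcond=None)

# Each non-constant multi-index contributes coef^2 * prod ||L_i||^2, with
# ||L_i||^2 = 1/(2i+1) under the uniform measure, to the output variance.
var_terms = {degs[k]: c**2 / ((2*degs[k][0]+1) * (2*degs[k][1]+1))
             for k, c in zip(keep, coef) if degs[k] != (0, 0)}
total_var = sum(var_terms.values())
# First-order Sobol index of x1: variance carried by indices (i, 0), i > 0.
S1 = sum(v for (i, j), v in var_terms.items() if i > 0 and j == 0) / total_var
```

For this model the decomposition is exact: Var from x1 is 4/3 and from x2 is 4/45, so S1 = 0.9375. On the real simulator the same index would be estimated from the fitted proxy instead of closed form.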
APA, Harvard, Vancouver, ISO, and other styles
30

Pepi, Chiara. "Suitability of dynamic identification for damage detection in the light of uncertainties on a cable stayed footbridge." Doctoral thesis, 2019. http://hdl.handle.net/2158/1187384.

Full text
Abstract:
Structural identification is a very important task, especially in all those countries characterized by a significant historical and architectural patrimony and strongly vulnerable infrastructures, subject to inherent degradation with time and to natural hazards, e.g. seismic loads. The structural response of existing constructions is usually estimated using suitable numerical models which are driven by a set of geometrical and/or mechanical parameters that are mainly unknown and/or affected by different levels of uncertainty. Some of this information can be obtained by experimental tests, but it is practically impossible to have all the data required for reliable response estimates. For these reasons it is current practice to calibrate some of the significant unknown and/or uncertain geometrical and mechanical parameters using measurements of the actual response (static and/or dynamic) and solving an inverse structural problem. Model calibration is also affected by uncertainties due to the quality (e.g. signal-to-noise ratio, random properties) of the measured data and to the algorithms used to estimate the structural parameters. In this thesis a new robust framework for structural identification is proposed in order to obtain a reliable numerical model that can be used both for random response estimation and for structural health monitoring. First, a parametric numerical model of the existing structural system is developed and updated using a probabilistic Bayesian framework. Second, virtual samples of the structural response affected by random loads are evaluated. Third, these virtual samples are used as a virtual experimental response in order to analyze the uncertainties on the main modal parameters while varying the number and time length of the samples, the identification technique and the target response. Finally, the information given by the measurement uncertainties is used to assess the capability of the vibration-based damage identification method.
APA, Harvard, Vancouver, ISO, and other styles
31

Dutta, Parikshit. "New Algorithms for Uncertainty Quantification and Nonlinear Estimation of Stochastic Dynamical Systems." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9951.

Full text
Abstract:
Recently there has been growing interest in characterizing and reducing uncertainty in stochastic dynamical systems. This drive arises out of the need to manage uncertainty in complex, high-dimensional physical systems. Traditional techniques of uncertainty quantification (UQ) use local linearization of the dynamics and assume Gaussian probability evolution. But several difficulties arise when these UQ models are applied to real-world problems, which are generally nonlinear in nature. Hence, to improve performance, robust algorithms that can work efficiently in a nonlinear, non-Gaussian setting are desired. The main focus of this dissertation is to develop UQ algorithms for nonlinear systems where uncertainty evolves in a non-Gaussian manner. The algorithms developed are then applied to state estimation of real-world systems. The first part of the dissertation focuses on using polynomial chaos (PC) for uncertainty propagation, and then achieving the estimation task by the use of higher-order moment updates and Bayes' rule. The second part mainly deals with Frobenius-Perron (FP) operator theory, how it can be used to propagate uncertainty in dynamical systems, and how states can then be estimated by means of a Bayesian update. Finally, a method is proposed to represent the process noise in a stochastic dynamical system using a finite-term Karhunen-Loève (KL) expansion; the uncertainty in the resulting approximated system is propagated using the FP operator. The performance of the PC-based estimation algorithms was compared with the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), and the FP-operator-based techniques were compared with particle filters, when applied to a Duffing oscillator and to the hypersonic reentry of a vehicle into the atmosphere of Mars. It was found that the accuracy of the PC-based estimators is higher than that of the EKF or UKF, and that the FP-operator-based estimators were computationally superior to the particle filtering algorithms.
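The polynomial-chaos propagation step that this abstract builds on can be sketched in a few lines (the map `g` and truncation order are illustrative choices, not the dissertation's actual systems): a standard Gaussian input is pushed through a nonlinear function, the Hermite-chaos coefficients are computed by spectral projection with Gauss-Hermite quadrature, and the output mean and variance are read off the coefficients.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Minimal Hermite polynomial-chaos sketch: propagate x ~ N(0,1) through a
# nonlinear map g and recover output statistics from the chaos coefficients.
g = lambda x: x**2 + np.sin(x)           # illustrative nonlinear map

P = 8                                    # truncation order of the expansion
x_q, w_q = hermegauss(20)                # probabilists' Gauss-Hermite rule
w_q = w_q / np.sqrt(2 * np.pi)           # normalize to the N(0,1) density

# Spectral projection: c_k = E[g(x) He_k(x)] / k!
fact = np.cumprod([1.0] + list(range(1, P + 1)))     # 0!, 1!, ..., P!
c = np.array([np.sum(w_q * g(x_q) * hermeval(x_q, np.eye(P + 1)[k]))
              / fact[k] for k in range(P + 1)])

mean_pc = c[0]                           # E[g(x)] is the zeroth coefficient
var_pc = np.sum(c[1:]**2 * fact[1:])     # Var from orthogonality of He_k

# Monte Carlo reference for comparison
rng = np.random.default_rng(1)
y = g(rng.standard_normal(200_000))
print(f"mean: PC {mean_pc:.4f} vs MC {y.mean():.4f}")
print(f"var:  PC {var_pc:.4f} vs MC {y.var():.4f}")
```

For this map the exact statistics are mean 1 and variance 2 + (1 - e^{-2})/2, and the order-8 expansion matches them closely; the dissertation's estimators then combine such a propagated expansion with moment updates and Bayes' rule at each measurement.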
APA, Harvard, Vancouver, ISO, and other styles
