Dissertations on the topic "Incertitudes de propagation"
Consult the top 50 dissertations for your research on the topic "Incertitudes de propagation".
Bertin, Michaël. "Propagation des incertitudes dans un modèle réduit de propagation des infrasons." Thesis, Cachan, Ecole normale supérieure, 2014. http://www.theses.fr/2014DENS0020/document.
The perturbation of a system can give rise to wave propagation. A classical approach to understanding this phenomenon is to look for the natural modes of vibration of the medium. Mathematically, finding these modes requires seeking the eigenvalues and eigenfunctions of the propagation operator. From a numerical point of view, however, this operation can be costly because the matrices involved can be very large. Furthermore, in most applications, uncertainties are inevitably associated with the model. The question then arises whether we should allocate significant computational resources to simulation when the accuracy of the result is not guaranteed. In this thesis we propose an approach that allows both a better understanding of the influence of uncertainties on the propagation and a significant decrease in computational cost for infrasound propagation in the atmosphere. The main idea is that not all modes have the same importance: a few of them are often sufficient to account for the phenomenon without a significant loss of accuracy. These modes appear to be those which are most sensitive to atmospheric disturbances. Specifically, a sensitivity analysis is used to identify the most influential structures of the atmosphere, the associated groups of modes and the corresponding parts of the infrasound signal. These groups of modes can be specifically targeted in a spectrum calculation by projecting the operator onto Krylov subspaces, which allows a significant decrease in computational cost. This model-reduction method can also be applied in a statistical framework, and estimates of the expectation and variance of the results are obtained without a significant loss of accuracy, still at low cost.
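The Krylov-subspace spectrum calculation mentioned in this abstract can be illustrated generically. The sketch below is not the author's reduced model: it uses SciPy's ARPACK wrapper (Arnoldi iteration, a Krylov-subspace method) to extract a handful of modes of a hypothetical large sparse operator, here a 1-D discrete Laplacian whose spectrum is known analytically.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# Hypothetical propagation operator: a large sparse 1-D discrete
# Laplacian standing in for a discretised modal problem.
n = 2000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

# Arnoldi iteration targets only the k modes of interest instead of
# computing the full spectrum of the n x n operator; shift-invert
# around sigma=0 retrieves the smallest eigenvalues.
k = 6
vals, vecs = eigs(A, k=k, sigma=0.0, v0=np.ones(n))
modes = np.sort(vals.real)

# For this operator the eigenvalues are known analytically:
# 2 - 2*cos(j*pi/(n+1)), j = 1..n.
exact = 2.0 - 2.0 * np.cos(np.arange(1, k + 1) * np.pi / (n + 1))
```

Only a k-dimensional Krylov subspace is ever built, which is the source of the cost reduction the abstract describes.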
Kayser, Bill. "Estimation des incertitudes de modélisation du bruit des éoliennes." Thesis, Le Mans, 2020. http://www.theses.fr/2020LEMA1035.
There is a major societal challenge in studying the emission and propagation of noise emitted by wind turbines, and in particular in quantifying the uncertainty in the estimation of sound levels. Although these sound levels are relatively low compared with those generated by other acoustic sources in the environment (e.g. land transport), noise pollution from wind turbines is often highlighted as a potential nuisance. This thesis work therefore aims to quantify the spatial and frequency dependence of the uncertainties encountered in environmental acoustics, in particular for wind turbine noise. To do so, an acoustic emission model that accounts for the acoustic specificities of wind turbines (spectrum and directivity) is coupled with an acoustic propagation model that accounts for the effects of the atmosphere (thermal and aerodynamic vertical profiles) and of the ground (absorption and roughness) on acoustic propagation. A sensitivity analysis is then carried out to determine the environmental parameters that have the greatest influence on the dispersion of sound levels. After a metamodel has been developed, an uncertainty analysis is carried out to estimate the total variability of the sound levels. Finally, the method is applied to a few situations characteristic of wind turbine noise.
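The sensitivity-analysis step described above can be sketched generically. Below is a minimal pick-freeze estimator of first-order Sobol indices on a made-up linear model; the model, its coefficients and the sample size are illustrative assumptions, not the emission-propagation chain of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a linear function standing in for the coupled
# emission-propagation chain; the weights of the three inputs are made up.
a = np.array([3.0, 2.0, 1.0])
def model(x):
    return x @ a

n = 200_000
X = rng.standard_normal((n, 3))
Xp = rng.standard_normal((n, 3))
Y = model(X)

# Pick-freeze estimator of first-order Sobol indices: freeze input i,
# resample the others, and correlate the two model outputs.
var_y = Y.var()
S = np.empty(3)
for i in range(3):
    Xi = Xp.copy()
    Xi[:, i] = X[:, i]
    Yi = model(Xi)
    S[i] = (np.mean(Y * Yi) - Y.mean() * Yi.mean()) / var_y

# For this linear model the indices are known: a_i^2 / sum(a^2).
S_exact = a**2 / np.sum(a**2)
```

The index S[i] measures the fraction of output variance explained by input i alone, which is the quantity used to rank "the environmental parameters that have the greatest influence".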
Nouy, Anthony. "Contributions à la quantification et à la propagation des incertitudes en mécanique numérique." Habilitation à diriger des recherches, Université de Nantes, 2008. http://tel.archives-ouvertes.fr/tel-00422364.
Resting on strong mathematical foundations, Galerkin-type spectral methods appear to be a promising route for obtaining reliable numerical predictions of the response of models governed by stochastic partial differential equations (SPDEs). Several drawbacks nevertheless hinder the use of these techniques and their transfer to large-scale applications: computation time, the memory storage required, and their "intrusive" character, which demands a good knowledge of the equations governing the model and the development of solvers specific to a given class of problems. A first strand of my research consisted in proposing an alternative solution strategy attempting to overcome these drawbacks. The proposed approach, named the generalized spectral decomposition method, is akin to an a priori model-reduction technique: it seeks an optimal spectral decomposition of the solution on a reduced basis of functions, without knowing the solution a priori.
A second strand of my work concerned the development of a method for solving SPDEs when the randomness affects the geometry. Within stochastic spectral approaches, handling randomness in the operator and the right-hand side is by now well mastered. By contrast, the treatment of random geometry remains little explored, even though it can be of major interest in many applications. My work consisted in proposing an extension of the extended finite element method (X-FEM) to the stochastic setting. The main advantage of this approach is that it can handle complex random geometries while avoiding the difficulties associated with meshing and with constructing conforming approximation spaces.
These first two strands concern only the numerical prediction step, i.e. the propagation of uncertainties. My research also contributes to the upstream step of quantifying uncertainties from measurements or observations. These contributions fall within the framework of recent functional-representation techniques for uncertainty, and notably concerned the development of efficient algorithms for computing such representations. In particular, this work led to a method for identifying random geometry from images, providing a description of geometric randomness suited to numerical simulation. Another contribution concerns the identification of multimodal distributions by a suitable functional-representation technique.
Geraci, Gianluca. "Schémas et stratégies pour la propagation et l'analyse des incertitudes dans la simulation d'écoulements." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00940294.
Lignon, Sylvain. "Approche robuste du risque sismique." Ecully, Ecole centrale de Lyon, 2006. http://bibli.ec-lyon.fr/exl-doc/slignon.pdf.
At present, the question of seismic modelling is still very open. The work reported in this document proposes a modelling of the whole chain, from the description of the initiation of the earth tremor to the damage of the reinforced concrete structure, via the transmission of the wave through the ground, the goal being to implement a robust approach to structural damage. A decoupling of the various phenomena is introduced, in order to work with simplified models at each level. To start, an original model of a seismic fault, the 3S model, is developed. Wave propagation is then studied by regarding the ground as a stratified medium. Lastly, uncertainties are taken into account in the evaluation of the damage to structures. Specific methods of convex analysis are then implemented.
Guerra, Jonathan. "Optimisation multi-objectif sous incertitudes de phénomènes de thermique transitoire." Thesis, Toulouse, ISAE, 2016. http://www.theses.fr/2016ESAE0024/document.
This work aims at solving multi-objective optimization problems in the presence of uncertainties and costly numerical simulations. A validation is carried out on a transient thermal test case. First, we develop a multi-objective optimization algorithm based on kriging that requires few calls to the objective functions. This approach is suited to distributed computation and favors the restitution of a regular approximation of the complete Pareto front. The optimization problem under uncertainties is then studied by considering worst-case and probabilistic robustness measures. The superquantile integrates every event for which the output value lies between the quantile and the worst case, but it requires a large number of calls to the uncertain objective function to be evaluated accurately, and few methods make it possible to approximate the superquantile of the output distribution of costly functions. To this end, we have developed an estimator based on importance sampling and kriging that approximates superquantiles with little error from a limited number of samples. Moreover, coupling it with the multi-objective algorithm allows some of those evaluations to be reused. In the last part, we build spatio-temporal surrogate models capable of predicting non-linear, dynamic and long-term phenomena from few learning trajectories. The construction is based on recurrent neural networks, and a formulation that facilitates learning is proposed.
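The superquantile described above (the mean of the output over the tail between the quantile and the worst case) admits a simple brute-force Monte Carlo estimator. The sketch below is a generic illustration only, not the kriging/importance-sampling estimator developed in the thesis, and the "costly objective" is replaced by a cheap stand-in distribution.

```python
import numpy as np

def superquantile(samples, alpha):
    """Monte Carlo estimate of the superquantile (a.k.a. CVaR):
    the mean of the output over the (1 - alpha) worst outcomes."""
    q = np.quantile(samples, alpha)   # the alpha-quantile
    tail = samples[samples >= q]      # events between the quantile and the worst case
    return tail.mean()

rng = np.random.default_rng(1)
# Stand-in for the uncertain objective output: a standard normal sample.
y = rng.standard_normal(1_000_000)

s = superquantile(y, 0.95)
# For a standard normal, the 0.95-superquantile is phi(z_0.95)/0.05 ≈ 2.0627,
# where phi is the normal pdf and z_0.95 ≈ 1.6449.
```

The brute-force estimator needs many samples precisely because only the tail fraction contributes, which is the cost problem the thesis's importance-sampling approach addresses.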
Jalid, Abdelilah. "Contribution à l'évaluation des incertitudes liées aux résultats de mesure tridimensionnelle." Lille 1, 2007. https://pepite-depot.univ-lille.fr/RESTREINT/Th_Num/2007/50376-2007-29.pdf.
Bachmann, Jérôme. "Contribution à la propagation des incertitudes dans les gammes de mesure des machines à mesurer par coordonnées." Aix-Marseille 2, 2003. http://www.theses.fr/2003AIX22090.
The statistical approach to surface fitting today allows us, at a given risk level, to identify the parameters of common surfaces. The second-order moment is a quality indicator of the estimated parameters and is homogeneous to a variance. This approach opens interesting prospects with regard to the propagation of uncertainties. We set up a method intended to apply to all the specifications of the ISO 1101 standard without modifiers. The GUM law of propagation of uncertainty was rewritten for vector quantities. The generalized propagation method is applied to simple cases, which have the characteristic of highlighting the complexity of the method. It requires a matrix assembly whose aim is to retain only the functional parts of the covariance matrix and of the Jacobian matrices, according to the geometrical operation carried out.
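The vector form of the GUM law of propagation referred to above is, to first order, Σ_y = J Σ_x Jᵀ, with J the Jacobian of the measurement model. A minimal sketch with made-up numbers (not the thesis's matrix assembly for ISO 1101 specifications):

```python
import numpy as np

def propagate_covariance(jacobian, cov_x):
    """First-order (GUM) propagation of an input covariance matrix:
    Sigma_y = J Sigma_x J^T for y = f(x) linearised at the estimate."""
    J = np.asarray(jacobian)
    return J @ cov_x @ J.T

# Illustrative model: y1 = x1 + x2, y2 = x1 - x2 with independent inputs.
J = np.array([[1.0, 1.0],
              [1.0, -1.0]])
cov_x = np.diag([0.04, 0.01])   # variances of x1, x2 (made-up values)

cov_y = propagate_covariance(J, cov_x)
# Expected: Var(y1) = Var(y2) = 0.05, Cov(y1, y2) = 0.04 - 0.01 = 0.03.
```

The off-diagonal term shows why the full covariance matrix, not just the variances, must be carried through a chain of geometrical operations.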
Brizard, Matthieu. "Développement et étude d'un viscosimètre absolu à chute de bille." Phd thesis, Université Joseph Fourier (Grenoble), 2005. http://tel.archives-ouvertes.fr/tel-00009440.
Leissing, Thomas. "Propagation d'ondes non linéaires en milieu complexe - Application à la propagation en environnement urbain." Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00455590.
Chen, Shengli. "Maîtrise des biais et incertitudes des sections efficaces et de la modélisation de la cinématique associées aux réactions nucléaires conduisant aux dommages dans les matériaux sous irradiation." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALI048.
Because irradiation damage is a major challenge for nuclear materials, it is of utmost importance to calculate it accurately with reliable uncertainty estimates. The main objective of this thesis is to develop and improve methodologies for computing neutron irradiation-induced displacement damage and its uncertainties. After a brief review of nuclear reaction models and primary radiation damage models, we propose a complete methodology for calculating damage cross sections from different nuclear reactions and the subsequent calculation of Displacement per Atom (DPA) rates. The recoil energies from neutron-induced reactions are summarized with an estimation of the relativistic effect and of the target thermal vibration. In particular, a new method for computing the recoil energy from charged-particle emission reactions is proposed, considering both quantum tunneling and the Coulomb barrier. Methods are developed to improve and verify the numerical calculations. Damage cross section calculations for the neutron radiative capture reaction and for N-body reactions are also thoroughly analyzed and discussed. In addition to neutron irradiation-induced displacement damage, the electron-, positron- and photon-induced DPA cross sections, as well as beta decay and Fission Product (FP)-induced damage, are investigated, and orders of magnitude of their relative contributions are given. For neutron irradiation-induced DPA rate calculations, attention should be paid when using infinite-dilution cross sections: in the ASTRID inner core, for example, the self-shielding correction on ECCO 33-group damage cross sections leads to a 10% reduction of the DPA rate, whereas the multigroup correction is still not automatically treated for DPA rate calculations in neutronics codes, nor for computing the Primary Knock-on Atom (PKA) spectrum.
Based on the presently proposed method for computing FP-induced DPA by atomistic simulations, the peak value of the FP-induced DPA rate can be 4 to 5 times larger than the neutron-induced one in the cladding of the ASTRID inner core, even though the penetration of FPs in the Fe-14Cr cladding is less than 10 µm. The question of whether FP-induced damage should be considered when determining fuel assembly lifetime in fast reactors therefore needs to be discussed. In the reactor vessel of a simplified pressurized water reactor, the covariance matrices of the 235U prompt fission neutron spectrum from ENDF/B-VII.1 and JENDL-4.0 lead to 11% and 7% relative uncertainty on the DPA rate, respectively. Neglecting the correlations of the neutron flux and the PKA spectrum results in an underestimation by a factor of 21. The total uncertainties on the damage energy rate are 12% and 9%, respectively, whereas an underestimation by a factor of 3 is found if the correlations of the damage cross section and the neutron flux are not considered.
Iooss, Bertrand. "Contributions au traitement des incertitudes en modélisation numérique : propagation d'ondes en milieu aléatoire et analyse statistique d'expériences simulées." Habilitation à diriger des recherches, Université Paul Sabatier - Toulouse III, 2009. http://tel.archives-ouvertes.fr/tel-00360995.
Birman, Jessie. "Quantification et propagation d'incertitude dans les phases amont de projets de conception d'avions : de l'optimisation déterministe à l'optimisation sous contraintes probabilistes." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2269/.
Conceptual aircraft sizing is the first step in the development of a passenger transport aircraft. Classically, in this phase, many aircraft configurations are compared after being globally sized by solving a deterministic, multidisciplinary and constrained optimisation problem, the purpose being to determine the main characteristics of the airplane according to a set of Top Level Requirements. At this preliminary stage, designers have to deal with limited knowledge and high uncertainty. Managing that uncertainty is a major issue: assessing its impact on the design early on saves time and cost. This PhD thesis introduces a new methodology for solving aircraft design optimisation affected by uncertainty. First, the main source of uncertainty involved at this stage is identified as predictive model uncertainty, which is part of epistemic uncertainty. This uncertainty is quantified within a probabilistic framework: based on the Beta distribution, we create a new generic distribution function able to assume a wide range of shapes, called the Beta-Mystique distribution. Second, we carry out uncertainty propagation studies with Monte Carlo and moment propagation methods in order to analyse the robustness of an aircraft configuration with respect to a set of uncertainties. Finally, a chance-constrained optimisation is solved to produce a robust aircraft configuration. Two strategies are considered: the use of surrogate models to approximate the probabilities, and the resolution of the optimisation problem by the moment propagation method.
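The two propagation approaches compared in this abstract, Monte Carlo sampling and first-order moment propagation, can be contrasted on a toy model. The function and input distributions below are illustrative assumptions only, not a sizing model from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy design quantity of interest: a nonlinear function of two
# independent uncertain inputs (made up for illustration).
def f(x1, x2):
    return x1**2 + 3.0 * x2

mu = np.array([1.0, 2.0])      # input means
sigma = np.array([0.1, 0.2])   # input standard deviations

# 1) Monte Carlo propagation: sample inputs, push through the model.
n = 500_000
x1 = rng.normal(mu[0], sigma[0], n)
x2 = rng.normal(mu[1], sigma[1], n)
y = f(x1, x2)
mc_mean, mc_var = y.mean(), y.var()

# 2) First-order moment propagation: linearise the model at the mean.
grad = np.array([2.0 * mu[0], 3.0])     # [df/dx1, df/dx2] at mu
mp_mean = f(*mu)                        # 7.0 (ignores curvature)
mp_var = np.sum((grad * sigma) ** 2)    # 0.04 + 0.36 = 0.40
```

Monte Carlo captures the curvature term (exact mean 7.01, variance 0.4002) at the cost of many model calls, while moment propagation needs only one evaluation and the gradient, which is the trade-off exploited in the chance-constrained optimisation.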
Hu, Xiaosu. "Approche probabiliste de la propagation des incertitudes dans le modèle mécano-numérique du système couplé "fémur-prothèse non cimentée"." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2010. http://tel.archives-ouvertes.fr/tel-00629650.
Hu, XiaoSu. "Approche probabiliste de la propagation des incertitudes dans le modèle mécano-numérique du système couplé "fémur-prothèse non cimentée"." Thesis, Clermont-Ferrand 2, 2010. http://www.theses.fr/2010CLF22065/document.
Hip arthroplasty with a cementless hip prosthesis is a solution commonly used for patients suffering from disorders of the musculoskeletal system. However, such a solution has a major disadvantage, pointed out by all users: the lack of primary stability of the prosthesis. This weakness can cause serious complications or failure of the surgery; achieving good primary fixation is therefore a crucial point of this type of surgery, to ensure short- and long-term clinical satisfaction. In order to better understand this central issue, a preoperative approach is adopted. A finite element model describing the mechanical behavior of the coupled system "femur - cementless prosthesis: DePuy Corail®" has been created and validated by in vitro experiments. Then, to take into account the high variability of the model parameters, inherent to the nature of the problem, stochastic modeling of the random input parameters is introduced and a mechanical-probabilistic strategy is proposed, on the one hand to quantify, in probabilistic terms, the effect on the response of the uncertainties affecting the input parameters, and on the other hand to evaluate the primary stability of the bone-prosthesis system in a reliability context. The practical implementation of this approach relies on numerical tools based on the standard Monte Carlo method and on a stochastic collocation procedure. The originality of the work lies primarily in the proposition of a probabilistic methodology capable of taking the uncertainties into account in the problem of the primary stability of cementless hip prostheses, and also in the potential of this methodology to be transferred easily to an industrial context.
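The stochastic collocation procedure mentioned above can be illustrated in one random dimension: evaluate a model only at Gauss-Hermite quadrature nodes to recover the output mean and variance under a Gaussian input. The toy response below is an assumption for illustration, not the femur-prosthesis finite element model.

```python
import numpy as np

# Illustrative smooth response of one standard normal input X.
def model(x):
    return np.exp(0.3 * x)

# Gauss-Hermite rule for weight exp(-t^2); the change of variable
# x = sqrt(2) * t turns it into an expectation under the standard normal.
nodes, weights = np.polynomial.hermite.hermgauss(10)
x = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

mean = np.sum(w * model(x))
second = np.sum(w * model(x) ** 2)
var = second - mean**2

# Analytic reference (lognormal): E[e^{0.3X}] = e^{0.045},
# E[e^{0.6X}] = e^{0.18}, so Var = e^{0.18} - e^{0.09}.
```

Ten model evaluations suffice here, versus the thousands a standard Monte Carlo estimate would need for comparable accuracy, which is the appeal of collocation when each model run is an expensive finite element solve.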
Benoit, Jean-Christophe. "Développement d'un code de propagation des incertitudes des données nucléaires sur la puissance résiduelle dans les réacteurs à neutrons rapides." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-01064275.
Benoit, Jean-Christophe. "Développement d’un code de propagation des incertitudes des données nucléaires sur la puissance résiduelle dans les réacteurs à neutrons rapides." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112254/document.
This PhD study lies in the field of nuclear energy, the back end of the nuclear fuel cycle and uncertainty calculations. The CEA must design the prototype ASTRID, a sodium-cooled fast reactor (SFR) and one of the concepts selected by the Generation IV forum, for which the value and the uncertainty of the decay heat have a significant impact. In this study, a code for propagating nuclear data uncertainties on the decay heat in SFRs is developed. The process took place in three stages. The first step limited the number of parameters involved in the calculation of the decay heat: an experiment on decay heat in the PHENIX reactor (PUIREX 2008) was studied to validate the DARWIN package experimentally for SFRs and to quantify the source terms of the decay heat. The second step was to develop a code for the propagation of uncertainties: CyRUS (Cycle Reactor Uncertainty and Sensitivity). A deterministic propagation method was chosen because the calculations are fast and reliable; the assumptions of linearity and normality have been validated theoretically, and the code has been successfully compared with a stochastic code on the example of the thermal burst fission curve of 235U. The last part was an application of the code to several experiments: the decay heat of a reactor, the isotopic composition of a fuel pin and the burst fission curve of 235U. The code demonstrated the possibility of feedback on the nuclear data driving the uncertainty of this problem. Two main results were highlighted. Firstly, the simplifying assumptions of deterministic codes are compatible with a precise calculation of the uncertainty of the decay heat. Secondly, the developed method is intrusive and allows feedback on nuclear data from experiments on the back end of the nuclear fuel cycle.
In particular, this study showed how important it is to measure independent fission yields precisely, along with their covariance matrices, in order to improve the accuracy of the calculation of the decay heat.
Clamens, Olivier. "Analyse et propagation des incertitudes associées à la dépressurisation de l’Hélium 3 sur les transitoires de puissance du réacteur CABRI." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAI061/document.
CABRI is a pool-type pulsed reactor designed for studying pre-irradiated nuclear fuel behavior under RIA (Reactivity Initiated Accident) conditions. The helium-3 depressurization of the transient rod system allows the insertion of up to 4 $ of reactivity, mainly countered by the Doppler effect, as the power rises in a few milliseconds to up to 200,000 times the initial 100 kW power. This thesis presents the improvements made to the prediction of power transients and the associated uncertainty studies. The point kinetics calculation, coupled with 1D thermal-hydraulics and heat transfer, has been improved by adding surrogate models based on experimental analysis and Best-Estimate calculations of the helium-3 depressurization, the reactivity effects and the kinetics parameters. The improvements to the power transient modeling have a positive impact on the prediction of CABRI tests. The propagation of experimental and modeling uncertainties was carried out with the SPARTE code and the URANIE uncertainty platform. Finally, the optimization of power transient characteristics is addressed in order to improve the design of CABRI experiments.
Li, Zhao. "Développement multipolaire en harmoniques sphériques et propagation des incertitudes appliqués à la modélisation de source de rayonnement en compatibilité électromagnétique." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC049/document.
With the advances in technology, the power density of electrical equipment is increasing, which leads to an increase in electromagnetic interference between power electronic systems. This interference may cause electronic equipment to malfunction. In order to avoid such problems, a large number of experimental verifications are necessary after the development of a prototype, which adds significant cost to its industrial cycle. This thesis aims at taking near-field electromagnetic compatibility problems into account in the design phase of a product. The method is based on the spherical harmonic expansion of the parasitic radiation generated by the devices or their components. The harmonic coefficients of a device make it possible not only to model its near field by a point source, but also to determine its inductive coupling with the others. In this context, the multipole expansion method is studied theoretically, numerically (with the Flux3D software) and experimentally. The new automated measuring system is presented, and different approaches related to the solution of this problem are then studied, such as the optimal choice of the expansion origin, the compensation of measurement errors due to secondary sources, the study of uncertainty propagation in the inverse problem, and how to take a priori information into account.
Volat, Ludovic. "Développement d’une méthode stochastique de propagation des incertitudes neutroniques associées aux grands coeurs de centrales nucléaires : application aux réacteurs de génération III." Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0330/document.
Generation III light water reactors undoubtedly follow design guidelines comparable to those of current PWRs, while taking advantage of enhanced features in terms of safety, energy efficiency, radiation protection and environment; one therefore speaks of an evolutionary approach. Amongst those improvements, the significant size and the use of a heavy reflector translate into better neutronic efficiency, leading to intrinsic enrichment benefits and thus savings in natural uranium; they also contribute to preserving the core vessel. Because of their large dimensions, the neutronic bulge of this kind of reactor is emphasized, making it a parameter of interest in reactor safety studies. Nevertheless, the uncertainty related to the radial power map is hardly reachable with the numerical methods implemented in neutronics codes. Given the above, this PhD work aims to develop an innovative stochastic method for propagating neutronics uncertainties. Using recent probabilistic results, it makes good use of growing computing resources to explore all the physical states of the reactor that are statistically foreseeable. After being validated, our method was applied to a reactor proposed in the framework of a large-core OECD/NEA international benchmark, with carefully chosen covariance values. For this system, the uncertainty on the keff reaches 638 pcm (1σ), the uncertainty on the total bulge equals 8.8 % (1σ), and the maximal uncertainty related to the insertion of a group of control rods reaches 11 % (1σ).
Hally, Alan. "Prévisibilité des épisodes méditerranéens de précipitations intenses : propagation des incertitudes liées aux paramétrisations de la microphysique des nuages et de la turbulence." Phd thesis, Toulouse 3, 2013. http://thesesups.ups-tlse.fr/3241/.
The north-western Mediterranean region is susceptible to heavy precipitation events (HPEs) between September and November, with south-eastern France being one of the areas most affected. Numerical forecasting of such events has improved drastically in recent years. However, because of the complex, multiscale interactions taking place in deep convective cloud systems, model errors remain, especially in the representation of the physical processes. In particular, the cloud physics and turbulence parameterisations rely upon unavoidable simplifications and assumptions which clearly limit the potential accuracy of deterministic forecasts in capturing extreme weather events. To represent the uncertainties associated with these parameterisations, a probabilistic methodology using ensemble prediction systems (EPSs) was proposed. Perturbations were introduced upon the time tendencies of the warm and cold microphysical and turbulence processes, as well as upon the adjustable parameters of the microphysical parameterisations. This approach was tested and assessed on two idealised convective events, with the aim of uncovering the processes which induced the greatest dispersion in the surface rainfall. Following these tests, ensembles were constructed for seven real-world events which occurred over south-eastern France in the autumns of 2010, 2011 and 2012. Simultaneously perturbing the time tendencies of the warm and cold microphysical and turbulence processes was shown to produce the greatest degree of dispersion in the rainfall pattern, the level of that dispersion depending upon the nature of the precipitating event. The dispersion is generally greater when the incident flow is weak to moderate and precipitation is observed on the plains. For certain events, the level of dispersion introduced by perturbations upon the physical processes is similar to that generated by perturbing the initial and boundary conditions.
It is concluded that, even though the impact of perturbing the physical parameterisations is moderate, it is sufficiently important to warrant the inclusion of such perturbations in an operational EPS.
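The perturbed-tendency idea described in this abstract can be caricatured with a scalar toy process: each ensemble member multiplies the deterministic time tendency by a random factor, and the resulting spread measures the sensitivity of the forecast to that perturbation. Everything below (the tendency, the perturbation range, the member count) is a made-up illustration, not the Meso-NH configuration of the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Integrate a toy decaying "process" whose time tendency is scaled by a
# multiplicative perturbation factor, in the spirit of perturbed-tendency
# ensemble prediction systems.
def integrate(perturb, n_steps=100, dt=0.1):
    x = 1.0
    for _ in range(n_steps):
        tendency = -0.5 * x            # deterministic process tendency
        x += dt * tendency * perturb   # perturbed tendency
    return x

# 50 members, each with its own random tendency factor in [0.5, 1.5].
members = np.array([integrate(rng.uniform(0.5, 1.5)) for _ in range(50)])
control = integrate(1.0)               # unperturbed control forecast

spread = members.std()
```

The ensemble brackets the control forecast, and the spread plays the role of the rainfall dispersion discussed above.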
Hally, Alan. "Prévisibilité des épisodes méditerranéens de précipitations intenses : Propagation des incertitudes liées aux paramétrisations de la microphysique des nuages et de la turbulence." Phd thesis, Université Paul Sabatier - Toulouse III, 2013. http://tel.archives-ouvertes.fr/tel-00939287.
Dumont, Nicolas. "Méthodes numériques et modèle réduit de chimie tabulée pour la propagation d'incertitudes de cinétique chimique." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC037.
Numerical simulation plays a key role in combustion today, both in research, by permitting a better understanding of the phenomena taking place inside reactive flows, and in the development of industrial applications, by reducing the design cost of systems. Large Eddy Simulation is at present the best-suited tool for the simulation of reactive flows. In practice, Large Eddy Simulation of reactive flows is only possible thanks to the modeling of different phenomena: turbulence is modeled at the small scales, so that only the large structures are resolved, which lowers the computational cost; chemistry is modeled using reduction methods, which drastically reduce the computational cost. The maturity of Large Eddy Simulation of reactive flows makes it today a reliable, predictive and promising tool. It now makes sense to focus on the impact of the parameters involved in the different models on the simulation results. This study of the impact of the modeling parameters can be seen from the perspective of uncertainty propagation, and can give useful information both on the practical side, for the robust design of systems, and on the theoretical side, to improve the models used and to guide the experimental measurements needed to make these models more reliable. The context of this thesis is the development of efficient methods for propagating the uncertainties in the chemical kinetic parameters of reaction mechanisms within Large Eddy Simulation; these methods have to be non-intrusive in order to take advantage of existing computation codes, which are tools requiring heavy means for their development.
Such a propagation of uncertainties by a brute-force method suffers from the "curse of dimensionality" because of the large number of chemical kinetic parameters, making it practically impossible with current computing means, which justifies the development of efficient methods. The objective of the thesis is the development of a reduced model that can be used for uncertainty propagation in Large Eddy Simulations. The handling and implementation of various tools from the uncertainty propagation framework was an essential preliminary part of this thesis, bringing this knowledge and these skills into the EM2C laboratory. The method developed in this thesis for the propagation of chemical kinetic parameter uncertainties is limited to chemistry models in which the advancement of the combustion process is summarized by the evolution of a progress variable given by a transport equation, other quantities being accessed through a table. Through the study of the evolution of a constant-pressure adiabatic reactor containing a homogeneous mixture of air and dihydrogen, it is shown that a large part of the uncertainty of such a system can be explained by the uncertainty of the progress variable. This makes it possible to define a chemical table that can be used to propagate chemical kinetic parameter uncertainties in Large Eddy Simulations. The uncertainties are then introduced only through the modeling of the source term in the transport equation of the progress variable, which can be parameterized with a few uncertain parameters, thus avoiding the "curse of dimensionality".
Navas, Nunez Rafael. "Modélisation hydrologique distribuée des crues en région Cévennes-Vivarais : impact des incertitudes liées à l'estimation des précipitations et à la paramétrisation du modèle." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAU025/document.
It is known that a precipitation observation system with high space-time resolution is crucial to obtain good results in rainfall-runoff modeling. Radar is a tool that offers quantitative precipitation estimates with very good resolution; when it is merged with a rain gauge network, the advantages of both systems are combined. However, radar estimates have different uncertainties than those obtained with rain gauges. In the modeling process, the uncertainty of the precipitation interacts with the uncertainty of the hydrological model. The objective of this work is to study methods used to quantify the uncertainty in merged radar-rain gauge precipitation estimation and the uncertainty in hydrological modeling, in order to develop a methodology for analyzing their individual contributions to the uncertainty of rainfall-runoff estimation. The work is divided into two parts. The first evaluates how the uncertainty of radar precipitation estimation can be quantified. To address this question, the geostatistical approach of Kriging with External Drift (KED) and stochastic generation of precipitation was used, which allows the spatio-temporal structure of the errors to be modeled. The method was applied in the Cévennes-Vivarais region (France), where there is a very rich observation system. The second part examines how the uncertainty of the hydrological simulation, coming from the radar precipitation estimates and the hydrological modeling process, can be quantified. For this purpose, a mesoscale hydrological computation tool was developed: a time-continuous distributed hydrological code based on the Curve Number and the Unit Hydrograph. It was applied at 20 spatio-temporal resolutions, ranging from 10 to 300 km2 and from 1 to 6 hours, in the Ardèche (~1971 km2) and Gardon (~1810 km2) basins.
After a sensitivity analysis, the model was simplified to 4 parameters and the uncertainty of the chain of processes was analyzed: 1) precipitation estimation; 2) hydrological modeling; and 3) rainfall-runoff estimation, using the coefficient of variation of the simulated flow. It has been shown that KED is a method that provides the standard deviation of the precipitation estimate, which can be transformed into a stochastic estimation of the local error. In the chain of processes: 1) the uncertainty in precipitation estimation increases with decreasing spatio-temporal scale, and its effect is attenuated by the hydrological modeling, probably due to the storage and transport properties of the basin; 2) the uncertainty of the hydrological modeling depends on the simplification of the hydrological processes and not on the surface area of the basin; 3) the uncertainty in rainfall-runoff estimation results from the amplified combination of the precipitation and hydrological modeling uncertainties.
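The Curve Number production function underlying such a hydrological tool reduces, for a storm total, to the classical SCS relation. The following is a generic sketch of that textbook formula (with the standard initial-abstraction ratio of 0.2), not the thesis implementation.

```python
def scs_runoff(precip_mm, cn):
    """Direct-runoff depth (mm) from the SCS Curve Number relation.

    precip_mm: storm precipitation total in mm; cn: curve number (0 < cn <= 100).
    """
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
    ia = 0.2 * s                  # initial abstraction (standard ratio)
    if precip_mm <= ia:
        return 0.0                # all rainfall is abstracted, no runoff
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)
```

For example, a 100 mm storm on a basin with CN = 80 yields about 50 mm of direct runoff; propagating an ensemble of perturbed precipitation fields through such a function is what produces the coefficient of variation of the simulated flow used above.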
Molto, Quentin. "Estimation de la biomasse en forêt tropicale humide : propagation des incertitudes dans la modélisation de la distribution spatiale de la biomasse en Guyane Française." Thesis, Antilles-Guyane, 2012. http://www.theses.fr/2012AGUY0567/document.
Tropical forests hold an important part of the Earth's aboveground vegetation carbon stock. Measuring and understanding the distribution of this stock is crucial for the management of tropical forests facing today's environmental challenges (REDD+, carbon market). Aboveground biomass estimation on forest census plots requires several models, depending on the precision of the inventory: diameter models, height models, wood density models. Spatial extrapolation between census plots relies on aerial data (satellite measurements, images) or ground-based data (geology, altitude). The uncertainty of the estimate of a region's biomass results from the uncertainty carried by all these models. The aim of the thesis is to develop models and methods to estimate the biomass of a region while propagating these uncertainties. This is applied to the neotropical forest of French Guiana (South America, Guiana Shield).
Daouk, Sami. "Propagation d’incertitudes à travers des modèles dynamiques d’assemblages de structures mécaniques." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN046/document.
When studying the behaviour of mechanical systems, mathematical models and structural parameters are usually considered deterministic. Experience feedback shows, however, that these elements are uncertain in most cases, due to natural variability or lack of knowledge. Therefore, quantifying the quality and reliability of the numerical model of an industrial assembly remains a major question in low-frequency dynamics. The purpose of this thesis is to improve the vibratory design of bolted assemblies by setting up a dynamic connector model that takes into account different types and sources of uncertainty on the stiffness parameters, in a way that is simple, efficient and exploitable in an industrial context. This work has been carried out in the framework of the SICODYN project, led by EDF R&D, which aims to characterise and quantify, numerically and experimentally, the uncertainties in the dynamic behaviour of bolted industrial assemblies. Comparative studies of several numerical methods of uncertainty propagation demonstrate the advantage of using the Lack-Of-Knowledge theory. An experimental characterisation of the uncertainties in bolted structures is performed on a dynamic test rig and on an industrial assembly. The propagation of many small and large uncertainties through different dynamic models of mechanical assemblies leads to an assessment of the efficiency of the Lack-Of-Knowledge theory and of its applicability in an industrial environment.
Ahmad, Jalaa. "Analyse modale des structures avec incertitudes par la méthode des éléments finis stochastiques spectrale." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2009. http://tel.archives-ouvertes.fr/tel-00725464.
Ahmad, Jalaa. "Analyse modale des structures avec incertitudes par la méthode des éléments finis stochastiques spectrale." Phd thesis, Clermont-Ferrand 2, 2009. http://www.theses.fr/2009CLF21936.
Rousseau, Marie. "Propagation d'incertitudes et analyse de sensibilité pour la modélisation de l'infiltration et de l'érosion." Phd thesis, Université Paris-Est, 2012. http://pastel.archives-ouvertes.fr/pastel-00788360.
Perrin, Guillaume. "Random fields and associated statistical inverse problems for uncertainty quantification : application to railway track geometries for high-speed trains dynamical responses and risk assessment." Phd thesis, Université Paris-Est, 2013. http://pastel.archives-ouvertes.fr/pastel-01001045.
Safatly, Elias Emile. "Méthode multiéchelle et réduction de modèle pour la propagation d'incertitudes localisées dans les modèles stochastiques." Nantes, 2012. http://archive.bu.univ-nantes.fr/pollux/show.action?id=cc8428f5-edf4-4ce1-96c9-42d69e43459a.
In many physical problems, an uncertain model can be represented as a set of stochastic partial differential equations. We are here interested in problems with many sources of uncertainty having a localized character in space. In the context of functional approaches for uncertainty propagation, these problems present two major difficulties. The first is that their solutions are multi-scale, which requires model reduction methods and appropriate computational strategies. The second is associated with the representation of functions of many parameters, needed to take into account many sources of uncertainty. To overcome these difficulties, we first propose a multi-scale domain decomposition method that exploits the localized character of the uncertainties. An iterative algorithm is proposed, which entails the alternating resolution of global and local problems, the latter being defined on patches containing the localized variabilities. Tensor approximation methods are then used to deal with high-dimensional functional representations. The multi-scale separation improves the conditioning of the local and global problems and also the convergence of the tensor approximation methods, which is related to the spectral content of the functions to be decomposed. Finally, for the handling of localized geometrical variability, specific methods based on fictitious domain approaches are introduced.
Do, Hai Quan. "Modèles réduits et propagation d'incertitude pour les problèmes de contact frottant et d'instabilité vibratoire." Thesis, Valenciennes, 2015. http://www.theses.fr/2015VALE0036/document.
To improve the quality of products and move towards reliable and robust designs, numerical simulations nowadays play a key role in many engineering domains. In spite of ever more complex and realistic numerical models, the correlation between a deterministic simulation and experiments is not obvious, especially if the observed phenomena have a fugitive nature. To take possible evolutions of behaviour into account, multiple-sampling techniques such as designs of experiments, sensitivity analyses or non-deterministic approaches are currently performed. Nevertheless, these advanced simulations necessarily generate prohibitive computational times, which are not compatible with ever shorter design cycles. The aim of this work is to explore new numerical ways to solve mechanical problems involving contact nonlinearity, friction and variability of the model parameters. To achieve this objective, the integration of Fuzzy Logic Controllers was first studied in the case of static frictional contact problems. The proposed idea is to decompose the nonlinear problem into a set of reduced linear problems, which can then be reanalyzed thanks to homotopy developments and projection techniques as a function of the introduced perturbations. Second, the proposed strategy has been extended to friction-induced vibration problems such as squeal.
Huyghe, Jordan. "Exploitation du corpus de données expérimentales pour la puissance résiduelle des combustibles de réacteurs à eau pressurisée afin d'accroître la maîtrise des biais et incertitudes de calcul avec l'OCS DARWIN2.3." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0359.
Decay heat is a dimensioning parameter at every step of the back end of the nuclear fuel cycle, from reactor shutdown until the final storage of non-recoverable radioactive waste. Accurate control of the calculation of the decay heat and its uncertainty is essential for all the Pressurized Water Reactors (PWRs) in the French reactor fleet: PWR-UOx and PWR-MOx fuels, over a wide range of cooling times. The calculation of the decay heat and its uncertainty is performed at CEA with the French reference package DARWIN2.3. To answer ASN guide number 28, relative to the safety demonstration of the first barrier's integrity, DARWIN2.3 follows a verification, validation and uncertainty quantification process, in order to prove control of the decay heat calculation and the associated uncertainty. To define the latter, either the propagation of nuclear data covariances to the decay heat or the transposition of DARWIN2.3's experimental validation results is possible. The approach considered for the thesis consisted in studying the representativity, for PWR fuels, of experiments coming from DARWIN2.3's experimental validation or from the literature. These studies enabled the definition of a 'validity domain' of DARWIN2.3 for the decay heat calculation of PWR-UOx and MOx fuels. The thesis then consisted in identifying the gaps in this 'validity domain' for PWR-UOx and PWR-MOx fuels and the decay heat experiments to be carried out to fill these gaps.
Jannet, Basile. "Influence de la non-stationnarité du milieu de propagation sur le processus de Retournement Temporel (RT)." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22436/document.
The aim of this thesis is to measure and quantify the impact of uncertainties on the Time Reversal (TR) process. These random variations, coming from diverse sources, can have a huge influence if they happen between the TR steps. From this perspective, the Stochastic Collocation (SC) method is used. Very good results in terms of effectiveness and accuracy had been obtained in previous studies in ElectroMagnetic Compatibility (EMC), and the conclusions remain excellent here on TR problems. However, when the problem dimension rises (a high number of Random Variables (RVs)), the SC method reaches its limits and its efficiency decreases. A study of Sensitivity Analysis (SA) techniques was therefore carried out. Indeed, these methods highlight the respective influences of the random variables of a model. Among the various quantitative and qualitative SA techniques, the Morris method and the Sobol total sensitivity indices were adopted. Since only a partition of the inputs (singling out the predominant RVs) is expected of them, they bring results at a lower cost. A novel method is therefore built, combining SA techniques and the SC method. In a first step, the model is reduced with SA techniques. Then, the reduced model, in which only the prevailing inputs remain, allows the SC method to show once again its efficiency with high accuracy. This global process has been validated against Monte Carlo results on several analytical and numerical TR cases subject to random variations.
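As an illustration of the screening step, here is a minimal one-at-a-time variant of the Morris idea on the unit hypercube (a simplified radial design with a toy model, not the trajectory-based implementation of the thesis); large values of the mean absolute elementary effect flag the predominant random variables.

```python
import numpy as np

def morris_screening(f, dim, n_traj=50, delta=0.1, seed=0):
    """One-at-a-time screening of f on [0, 1]^dim.

    Returns mu_star, the mean absolute elementary effect per input;
    inputs with small mu_star can be dropped from the reduced model."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_traj, dim))
    for t in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=dim)   # base point, room for +delta
        fx = f(x)
        for i in range(dim):
            x_pert = x.copy()
            x_pert[i] += delta                    # perturb one input at a time
            effects[t, i] = (f(x_pert) - fx) / delta
    return np.abs(effects).mean(axis=0)

# Toy model: the first input dominates, the third is inert
mu_star = morris_screening(lambda x: 10 * x[0] + x[1] ** 2 + 0 * x[2], dim=3)
```

On this toy model the screening correctly ranks the first input as predominant and the third as negligible, which is exactly the split the SC method then exploits on the reduced model.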
Safatly, Elias. "Méthode multiéchelle et réduction de modèle pour la propagation d'incertitudes localisées dans les modèles stochastiques." Phd thesis, Université de Nantes, 2012. http://tel.archives-ouvertes.fr/tel-00798526.
Ben, Daoued Amine. "Modélisation de la conjonction pluie-niveau marin et prise en compte des incertitudes et de l’impact du changement climatique : application au site du Havre." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2528.
The modeling of combinations of flood hazard phenomena is a current issue for the scientific community, which is primarily interested in urban and nuclear sites. Indeed, it is very likely that the deterministic approach, exploring several scenarios, has certain limits, because these deterministic scenarios ensure an often excessive conservatism. Probabilistic approaches provide additional precision by relying on statistics and probabilities to complement deterministic approaches. They aim to identify and combine many possible hazard scenarios in order to cover many possible sources of risk. The Probabilistic Flood Hazard Assessment (PFHA) proposed in this thesis makes it possible to characterize one or more quantities of interest (water level, volume, duration of immersion, etc.) at different points of interest of a site, based on the distributions of the different phenomena contributing to the flood hazard as well as on the characteristics of the site. The main steps of the PFHA are: i) screening of the possible phenomena (rainfall, sea level, waves, etc.); ii) identification and probabilization of the parameters representative of the selected flood phenomena; iii) propagation of these phenomena from their sources to the point of interest on the site; iv) construction of hazard curves by aggregating the contributions of the flood phenomena. Uncertainties are an important topic of the thesis insofar as they are taken into account in all the steps of the probabilistic approach. The work of this thesis is based on the study of the conjunction of rain and sea level and provides a new method for taking into account the temporal phase shift between the phenomena (coincidence). An aggregation model has been developed to combine the contributions of the different flood phenomena.
The question of uncertainties has been studied, and a method based on the theory of belief functions has been used because it has various advantages (faithful modeling in cases of total ignorance and lack of information, the possibility of combining information of different origins and natures, etc.). The proposed methodology is applied to the site of Le Havre, in France.
Prigent, Sylvain. "Approche novatrice pour la conception et l’exploitation d’avions écologiques." Thesis, Toulouse, ISAE, 2015. http://www.theses.fr/2015ESAE0014/document.
The objective of this PhD work is to pose, investigate and solve the highly multidisciplinary and multiobjective problem of environmentally efficient aircraft design and operation. For this purpose, the three main drivers for optimizing the environmental performance of an aircraft are the airframe, the engine and the mission profiles. The figures of merit considered for optimization are fuel burn, local emissions, global emissions and climate impact (noise excluded). The study focuses on finding efficient compromise strategies and on identifying the most powerful design architectures and design-driver combinations for the improvement of environmental performance. Modeling uncertainty is taken into account thanks to rigorously selected methods. A hybrid aircraft configuration is proposed to reach the climate impact reduction objective.
Sabouri, Pouya. "Application of perturbation theory methods to nuclear data uncertainty propagation using the collision probability method." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENI071/document.
This dissertation presents a comprehensive study of sensitivity/uncertainty analysis for reactor performance parameters (e.g. the effective multiplication factor k-effective) with respect to the base nuclear data from which they are computed. The analysis starts at the fundamental step, the Evaluated Nuclear Data File, and the uncertainties inherently associated with the data it contains, available in the form of variance/covariance matrices. We show that when a methodical and consistent computation of sensitivity is performed, conventional deterministic formalisms can be sufficient to propagate nuclear data uncertainties with the level of accuracy obtained by the most advanced tools, such as state-of-the-art Monte Carlo codes. By applying the developed methodology to three exercises proposed by the OECD (UACSA benchmarks), we provide insight into the underlying physical phenomena associated with the formalisms used.
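The deterministic propagation referred to here is, at its core, the first-order "sandwich rule": the variance of a response is obtained by folding its sensitivity vector with the nuclear-data covariance matrix. A minimal numerical sketch with invented sensitivities and covariances (not values from the dissertation):

```python
import numpy as np

# Hypothetical relative sensitivities of k-effective to three nuclear-data
# parameters (dk/k per dp/p) -- illustrative values only.
s = np.array([0.3, -0.1, 0.05])

# Relative variance/covariance matrix of those parameters (illustrative).
c = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    1.0e-4]])

var_k = s @ c @ s            # "sandwich rule": var(k)/k^2 = S^T C S
rel_std_k = np.sqrt(var_k)   # relative standard deviation of k-effective
```

The off-diagonal covariance terms matter: here the positive correlation between the first two parameters partially compensates their opposite-signed sensitivities.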
Rameh, Hala. "Instrumentation optimale pour le suivi des performances énergétiques d’un procédé industriel." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM032/document.
Energy efficiency is becoming an essential research area for the scientific community, given its importance in the fight against current and future energy crises. The analysis of the energy performance of industrial processes requires determining the quantities involved in the mass and energy balances. Hence the question: how should the placement of measurement points in an industrial site be chosen so as to find the values of all the energy indicators, without generating an excess of unnecessary information due to redundancies (thus reducing measurement costs), while respecting an accepted level of accuracy of the results? The first part presents the formulation of the instrumentation problem, which aims to guarantee a minimal observability of the system in favor of the key variables. This problem is combinatorial. A method for validating the different sensor combinations has been introduced, based on the structural interpretation of the matrix representing the process. The issue of long computing times when addressing medium and large processes was tackled: sequential methods were developed to find a set of different admissible sensor networks satisfying the observability requirements, in less than 1% of the initially required computation time. The second part deals with the choice of the optimal instrumentation scheme. The difficulty of propagating uncertainty in a problem of variable size was addressed: to automate the evaluation of the uncertainty for all the sensor networks found, the proposed method models the process with binary parameters. Finally, the complete methodology is applied to an industrial case and the results are presented.
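The structural interpretation mentioned above can be illustrated by a classical rank test from data reconciliation: for linear balances A x = 0, the unmeasured variables are all deducible from the measurements iff the corresponding columns of A have full column rank. A toy sketch (the two-balance process below is invented for illustration, not taken from the thesis):

```python
import numpy as np

def unmeasured_observable(constraints, measured_idx):
    """Observability test for a linear balance system A x = 0: the unmeasured
    variables are all deducible from the measured ones iff the columns of A
    restricted to the unmeasured variables have full column rank."""
    n = constraints.shape[1]
    unmeasured = [j for j in range(n) if j not in set(measured_idx)]
    sub = constraints[:, unmeasured]
    return bool(np.linalg.matrix_rank(sub) == len(unmeasured))

# Toy process: two mass balances over four streams
#   x0 = x1 + x2   and   x2 = x3
a = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  0.0,  1.0, -1.0]])

ok = unmeasured_observable(a, measured_idx=[0, 1])   # x2 and x3 deducible
bad = unmeasured_observable(a, measured_idx=[3])     # x0, x1 not separable
```

Enumerating sensor combinations and keeping only those that pass such a test is what makes the problem combinatorial, hence the sequential strategies developed in the thesis.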
Roncen, Thomas. "Réponses vibratoires non-linéaires dans un contexte industriel : essais et simulations sous sollicitations sinusoïdale et aléatoire en présence d'incertitudes." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEC036/document.
This PhD work focuses on the experimental and numerical study of nonlinear structures subjected to both harmonic and random vibrations, in the presence of modeling and experimental uncertainties. Experimental studies undertaken at CEA/CESTA show a strong dependence of jointed structures on the excitation level, as well as a variability of the response for a given excitation level. These experimental results cannot be reproduced with the classical deterministic linear vibration simulation method. The objective of this work is to propose and set up numerical methods to study these nonlinear responses, while quantifying and propagating the relevant uncertainties in the simulations. This objective involves the study of structural assemblies of increasing complexity, subjected to the same vibratory phenomena as CEA/CESTA industrial structures; advanced nonlinear numerical methods developed in academia are applied in the CEA/CESTA industrial context. The first test structure is a clamped-clamped steel beam with a geometrical nonlinearity. The beam is modeled by a Duffing oscillator, a widely studied model in the field of nonlinear dynamics. This allows a validation of the numerical developments proposed in this work, first on the issue of random vibrations and second on the issue of the propagation of uncertainties. The simulations are based on two reference techniques (the shooting method and the harmonic balance method). First, the simulation results are validated by comparison with the experimental results for random vibrations. Second, the harmonic balance method is used together with a non-intrusive polynomial chaos expansion in order to take the modeling uncertainties into account. The second test structure is a mass linked to a solid casing via a vibration-absorbing elastomeric component of biconical shape, surrounded by an aluminum cage. The nonlinear behavior of the elastomer is at the heart of this work.
Various vibration tests were performed on this structure in order to identify the simplest nonlinear model able to answer our questions. The identified model is validated through comparisons between the simulation results and the experimental results for both sine-swept and random vibrations. The central assembly of this work is an industrial assembly with friction joints and vibration-absorbing elastomeric joints, named Harmonie-Gamma. The vibration tests performed exhibit resonance modes as well as a strong dependency of the response on the excitation level. A numerical finite element model is developed and reduced with a substructuring technique. The resulting nonlinear reduced model is simulated using a harmonic balance method with a continuation method. The simulated responses are compared with the experiments and allow an analysis of coupled nonlinearities in the CEA/CESTA industrial context.
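For reference, the single-harmonic balance of a Duffing oscillator like the one used for the beam reduces to a cubic in the squared amplitude, which can be solved directly. The sketch below is a generic textbook version with made-up parameter values, not the CEA/CESTA model.

```python
import numpy as np

def duffing_hbm_amplitudes(omega, omega0=1.0, zeta=0.05, alpha=0.5, force=0.1):
    """Single-harmonic balance for
        x'' + 2*zeta*omega0*x' + omega0^2*x + alpha*x^3 = force*cos(omega*t).
    With the ansatz x ~ A*cos(omega*t - phi), balancing the first harmonic
    gives a cubic in u = A^2; its positive real roots are the steady-state
    amplitudes (up to three coexisting branches near resonance)."""
    d = omega0**2 - omega**2
    coeffs = [(0.75 * alpha) ** 2,                      # u^3
              1.5 * alpha * d,                          # u^2
              d**2 + (2 * zeta * omega0 * omega) ** 2,  # u
              -force**2]                                # constant term
    u = np.roots(coeffs)
    u = u[np.isreal(u)].real
    return np.sort(np.sqrt(u[u > 0]))

amps = duffing_hbm_amplitudes(omega=1.0)   # amplitudes at the linear resonance
```

Sweeping `omega` and keeping all returned branches reproduces the bent resonance peak and the level-dependent response observed experimentally; in the thesis this is handled by the harmonic balance method with continuation.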
Merheb, Rania. "Fiabilité des outils de prévision du comportement des systèmes thermiques complexes." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00969036.
Bontemps, Stéphanie. "Validation expérimentale de modèles : application aux bâtiments basse consommation." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0337/document.
The construction of low-energy, passive and positive-energy buildings is becoming widespread and existing buildings are being renovated. For this reason, it is essential to use simulation in order to estimate, among other things, the energy and environmental performance reached by these new buildings. As expectations regarding the guarantee of energy performance grow, it is crucial to ensure the reliability of the simulation tools being used. Indeed, simulation codes should reflect the behavior of these new kinds of buildings in the most consistent and accurate manner. Moreover, the uncertainty related to design parameters, as well as to solicitations and building uses, has to be taken into account in order to guarantee the building's energy performance during its lifetime. This thesis investigates the empirical validation of models applied to a test-cell building. The validation process is divided into several steps, during which the quality of the model is evaluated as far as consistency and accuracy are concerned. Several case studies were carried out, from which we were able to identify the parameters most influential on the model output, inspect the influence of the time step on the empirical validation process, analyze the influence of initialization and confirm the methodology's ability to test the model.
Chakchouk, Mohamed. "Conception d'un détecteur de système mécatronique mobile intelligent pour observer des molécules en phase gazeuse en IR". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR06.
This work anticipates that, in an ever-expanding digital technology world, technological breakthroughs in the analysis of data collected by spectroscopic devices will allow the almost instantaneous identification of known species observed in situ in a specific environment, leaving in-depth analysis to the species not yet identified. A method derived from RBDO (Reliability-Based Design Optimization) technology is used to implement an artificial intelligence procedure to identify observed species from a mobile IR sensor. To successfully analyze the data obtained, it is necessary to assign molecular species to the observed IR data using appropriate theoretical models. This work focuses on observation from mobile devices equipped with appropriate sensors, antennas and electronics to capture and send raw or analyzed data from an IR spectroscopic environment of interest. It is therefore useful, if not essential, to focus on symmetry-based theoretical tools for the spectroscopic analysis of molecules, which make it possible to identify the IR windows to be chosen for observation in the design of the device. Then, by fitting the theoretical spectroscopic parameters to the observed frequencies, the spectrum of a molecular species can be reconstructed. A deconvolution of the observed spectra is necessary before the analysis in terms of the intensity, width and line center characterizing a line shape. Therefore, an adequate strategy is needed in the design to include data analysis during the observation phase, which can benefit from an artificial intelligence algorithm to account for differences in the IR spectral signature. In this regard, the analytical power of the instrument data can be improved by using the Reliability-Based Design Optimization (RBDO) methodology.
Based on the multi-physics behavior of uncertainty propagation in the hierarchical system tree, RBDO uses probabilistic modeling to analyze the deviation from the desired output, fed back as parameters to optimize the design. The goal of this thesis is to treat the IR observation window parameters so as to extend reliability considerations beyond the mechatronic design to species identification through the analysis of the collected data.
Aleksovska, Ivana. "Améliorer les prévisions à court et moyen termes des modèles agronomiques en prenant mieux en compte l'incertitude des prévisions météorologiques." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30270.
This PhD thesis demonstrates the potential of ensemble weather forecasts in decision-support tools developed to assist farmers in anticipating the application of phytosanitary treatments. We consider the EVA model, which simulates the dynamics of the grape berry moth, and Septo-LIS, which forecasts the development of wheat septoria. We illustrate the potential of using ensemble weather forecasts in agronomic models compared to frequency data. We then propose strategies to design seamless ensemble weather forecasts that combine information from three ensembles with different spatio-temporal scales. Finally, these seamless forecasts are evaluated from both a meteorological and an agronomic point of view. The design of seamless ensemble predictions is treated as a concatenation problem: ensemble predictions are first calibrated using a parametric approach, then the concatenation of forecasts is handled with a distance measure and an assignment algorithm. We show that the so-called Hungarian method is able to provide ensembles of independent and temporally consistent forecasts. It is shown that the EVA model is significantly improved by the calibration of temperature forecasts, while the benefit of seamless forecasts is not significant.
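The assignment step can be sketched with the Hungarian algorithm as implemented in SciPy; the following is a schematic of the idea on a synthetic toy ensemble, not the thesis configuration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_members(ensemble_a, ensemble_b):
    """Pair the members of two overlapping ensembles so that concatenated
    scenarios are temporally consistent.

    ensemble_a, ensemble_b: arrays of shape (n_members, n_lead_times) valid
    over a common overlap window. Returns, for each member of ensemble_a,
    the index of its partner in ensemble_b, minimising the total RMS
    distance (Hungarian algorithm)."""
    dist = np.sqrt(((ensemble_a[:, None, :] - ensemble_b[None, :, :]) ** 2).mean(-1))
    row, col = linear_sum_assignment(dist)
    return col[np.argsort(row)]   # partner index per member of ensemble_a

# Toy example: ensemble_b is a shuffled, slightly perturbed copy of ensemble_a
rng = np.random.default_rng(1)
a = rng.normal(size=(5, 4))
perm = rng.permutation(5)
b = a[perm] + 0.01 * rng.normal(size=(5, 4))
partners = match_members(a, b)
```

Because the perturbation is small, the optimal assignment recovers the shuffling, so each member of `a` is continued by its own perturbed copy in `b`, which is exactly the temporal consistency sought for seamless forecasts.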
Amarger, Stéphane. "Propagation des contraintes avec des probabilités conditionnelles imprécises." Toulouse 3, 1993. http://www.theses.fr/1993TOU30130.
Hoang, Tuan Nha. "Incertitude des données biomécaniques : modélisation et propagation dans les modèles de diagnostic des pathologies du système musculosquelettique." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2171/document.
The aim of the project is to investigate the modeling of the reliability/uncertainty/imprecision of biomedical and biomechanical data (medical images, kinematics/kinetics/EMG data, etc.) and its propagation in predictive diagnosis models of musculoskeletal system disorders. These diagnosis models are based on multimodal and multidimensional patient data (3D medical imaging, mechanical data, clinical data, etc.). Literature-based data were collected to establish an uncertainty space, representing fused data on the morphological, mechanical and movement-analysis properties of the musculoskeletal system from multiple sources (i.e. research papers from ScienceDirect and PubMed). A new clustering method (US-ECM) is then proposed for integrating the fused data in the form of a multidimensional uncertainty space (US). The reliability of biomechanical data was evaluated by an approach fusing expert opinions. The reliability criteria of a data source (i.e. a published scientific paper) focus on the technique used, the acquisition and measurement protocol, and the amount of data. A system of questionnaires was developed to collect expert opinions; the theory of belief functions was then applied to merge these opinions and establish a confidence level for each data source.
Mostarshedi, Shermila. "Réflexion des champs électromagnétiques en milieu urbain et incertitude associée : analyse au moyen de fonctions de Green." Phd thesis, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00366853.
Leroy, Olivia. "Estimation d'incertitudes pour la propagation acoustique en milieu extérieur." Le Mans, 2010. http://cyberdoc.univ-lemans.fr/theses/2010/2010LEMA1027.pdf.
Outdoor sound propagation phenomena are subject to substantial uncertainties and non-negligible variability in space and time. The combination of ground effects, micrometeorological phenomena and the variability of the source gives rise to complex phenomena. It is therefore difficult to measure and/or assess sound pressure levels with the accuracy required from a regulatory and normative point of view. A statistical method has been defined in order to assess these uncertainties for any configuration: the Calibration Under Uncertainty (CUU) methodology. This method integrates both modeled data and experimental data; available databases representative of more or less complex sites make it possible to apply the statistical process and to improve it step by step. System specification (dataset constitution), definition of the observables, system calibration (cost function formulation and parameter estimation), system validation and prediction are the main steps of the statistical process. This finalized process can be applied to any case study within the limits of the available prediction models and experimental data with regard to the complexity of the phenomena; these limitations are also highlighted.
Darishchev, Alexander. "Analyse de connectivité et techniques de partitionnement de données appliquées à la caractérisation et la modélisation d'écoulement au sein des réservoirs très hétérogènes." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S162.
Computer-based workflows have gained a paramount role in the development and exploitation of natural hydrocarbon resources and other subsurface operations. One of the crucial problems of reservoir modelling and production forecasting is pre-selecting appropriate models for quantifying uncertainty and robustly matching flow-simulation results to real field measurements and observations. This thesis addresses these and other related issues. We have explored a strategy to facilitate and speed up the adjustment of such numerical models to available field production data. Originally, the focus of this research was on conceptualising, developing, and implementing fast proxy models related to the analysis of connectivity, as a physically meaningful property of the reservoir, with advanced cluster analysis techniques. The developed methodology also includes several original probability-oriented approaches to the problems of sampling uncertainty and of determining the sample size and the expected value of sample information. To target and prioritise relevant reservoir models, we aggregated geostatistical realisations into distinct classes with a generalised distance measure. Then, to improve the classification, we extended the silhouette-based graphical technique of cluster analysis, called hereafter the "entire sequence of multiple silhouettes". This approach provides clear and comprehensive information about the intra- and inter-cluster dissimilarities, which is especially helpful in the case of weak, or even artificial, structures. Finally, the spatial separation and differences in form of the clusters were visualised graphically and quantified with a scale-invariant probabilistic distance measure. The obtained relationships justify and validate the applicability of the proposed approaches to enhancing the characterisation and modelling of flow.
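The silhouette diagnostics underlying this classification step can be sketched as follows. This is the standard per-point silhouette coefficient, not the thesis's extended "entire sequence of multiple silhouettes"; the 1-D data points standing in for geostatistical realisations are illustrative:

```python
# Sketch of per-point silhouette values used in silhouette-based cluster
# diagnostics: compare the mean intra-cluster distance a(i) with the smallest
# mean distance to another cluster b(i); s(i) = (b - a) / max(a, b).

def silhouette_values(points, labels):
    """Return the silhouette coefficient s(i) for every point."""
    n = len(points)
    scores = []
    for i in range(n):
        same = [abs(points[i] - points[j]) for j in range(n)
                if j != i and labels[j] == labels[i]]
        a = sum(same) / len(same)  # mean intra-cluster distance
        b = min(                   # smallest mean distance to another cluster
            sum(abs(points[i] - points[j]) for j in range(n) if labels[j] == lab)
            / labels.count(lab)
            for lab in set(labels) if lab != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return scores

# Two well-separated 1-D groups: all silhouettes should be close to 1,
# indicating a strong (non-artificial) cluster structure.
points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
labels = [0, 0, 0, 1, 1, 1]
scores = silhouette_values(points, labels)
```

Values near 1 indicate points firmly inside their cluster; values near 0 or below flag the weak or artificial structures the extended technique is designed to expose.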
Reliable correlations were found between the shortest injector-producer pathways and water breakthrough times for different configurations of well placement and for various heterogeneity levels and fluid mobility ratios. The proposed graph-based connectivity proxies provided sufficiently accurate results and competitive performance at the meta-level. Using them as precursors and ad hoc predictors is beneficial at the pre-processing stage of the workflow. Prior to history matching, a suitable and manageable number of appropriate reservoir models can be identified by comparing the available production data with the selected centrotype models regarded as class representatives; full fluid-flow simulation is a prerequisite only for these. The findings of this research can readily be generalised and considered in a wider scope; extensions, further improvements, and applications may also be expected in other fields of science and technology.
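A graph-based connectivity proxy of this kind can be sketched with a breadth-first search for the shortest injector-to-producer pathway through well-connected cells. The toy permeability field, threshold, and well locations are illustrative, not from the thesis:

```python
# Sketch of a graph-based connectivity proxy: treat grid cells above a
# permeability threshold as graph nodes and compute the shortest
# injector-to-producer pathway with breadth-first search (BFS).

from collections import deque

def shortest_path_length(perm, threshold, injector, producer):
    """BFS path length between two cells through high-permeability cells."""
    rows, cols = len(perm), len(perm[0])
    queue = deque([(injector, 0)])
    seen = {injector}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == producer:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen
                    and perm[nr][nc] >= threshold):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # no connected pathway at this threshold

# 4x4 toy permeability field (mD) with a low-permeability barrier;
# injector at the top-left corner, producer at the bottom-right.
perm = [
    [500, 400,  10, 300],
    [450,  20,  15, 350],
    [480, 420, 410, 380],
    [ 30,  25, 440, 460],
]
path = shortest_path_length(perm, threshold=100, injector=(0, 0), producer=(3, 3))
```

Such a path length is cheap to evaluate across many geostatistical realisations, which is what makes it usable as a precursor before any full fluid-flow simulation.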
Ehrlacher, Virginie. "Quelques modèles mathématiques en chimie quantique et propagation d'incertitudes." Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1073/document.
The contributions of this thesis work are twofold. The first part deals with the study of local defects in crystalline materials. Chapter 1 gives a brief overview of the main models used in quantum chemistry for electronic structure calculations. In Chapter 2, an exact variational model for the description of local defects in a periodic crystal in the framework of the Thomas-Fermi-von Weizsäcker theory is presented. It is justified by means of thermodynamic limit arguments. In particular, it is proved that the defects modeled within this theory are necessarily neutrally charged. Chapters 3 and 4 are concerned with the so-called spectral pollution phenomenon. Indeed, when an operator is discretized, spurious eigenvalues which do not belong to the spectrum of the initial operator may appear. In Chapter 3, we prove that standard Galerkin methods with finite element discretization for the approximation of perturbed periodic Schrödinger operators are prone to spectral pollution. Besides, the eigenvectors associated with spurious eigenvalues can be characterized as surface states. It is possible to circumvent this problem by using augmented finite element spaces, constructed with the Wannier functions of the periodic unperturbed Schrödinger operator. We also prove that the supercell method, which consists in imposing periodic boundary conditions on a large simulation domain containing the defect, does not produce spectral pollution. In Chapter 4, we give a priori error estimates for the supercell method. It is proved in particular that the method converges exponentially with respect to the size of the supercell. The second part of this thesis is devoted to the study of greedy algorithms for the resolution of high-dimensional uncertainty quantification problems. Chapter 5 presents the most classical numerical methods used in the field of uncertainty quantification and an introduction to greedy algorithms.
In Chapter 6, we prove that these algorithms can be applied to the minimization of strongly convex nonlinear energy functionals and that their convergence rate is exponential in the finite-dimensional case. We illustrate these results on obstacle problems with uncertainty via penalized formulations.
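Greedy algorithms of this family build the solution as a sum of rank-one (separated) terms, each computed by an alternating fixed-point solve. The sketch below applies the idea to the simplest possible setting, a matrix with a squared-residual energy; the target, the number of terms, and the iteration counts are illustrative, not the functionals treated in the thesis:

```python
# Sketch of a greedy (PGD-style) algorithm: approximate a target by a sum of
# rank-one terms r s^T, each term obtained by alternating updates of the two
# factors on the current residual. The energy is the squared Frobenius
# residual, so each greedy step is a best rank-one correction.

def greedy_rank_one_sum(target, n_terms=2, n_inner=50):
    rows, cols = len(target), len(target[0])
    residual = [row[:] for row in target]
    terms = []
    for _ in range(n_terms):
        r = [1.0] * rows
        s = [1.0] * cols
        for _ in range(n_inner):  # alternating (fixed-point) factor updates
            s_norm2 = sum(v * v for v in s)
            r = [sum(residual[i][j] * s[j] for j in range(cols)) / s_norm2
                 for i in range(rows)]
            r_norm2 = sum(v * v for v in r)
            s = [sum(residual[i][j] * r[i] for i in range(rows)) / r_norm2
                 for j in range(cols)]
        terms.append((r, s))
        for i in range(rows):          # subtract the new term from the residual
            for j in range(cols):
                residual[i][j] -= r[i] * s[j]
    return terms, residual

# Rank-2 target: two greedy terms should capture it almost exactly,
# mirroring the exponential decay of the greedy approximation error.
target = [[1 + 2 * i * j for j in range(4)] for i in range(3)]
terms, residual = greedy_rank_one_sum(target)
err = max(abs(residual[i][j]) for i in range(3) for j in range(4))
```

In the high-dimensional setting of the thesis the factors are functions of the physical and stochastic variables rather than vectors, but the greedy structure (best separated correction, then subtract) is the same.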