Dissertations on the topic "Analyse de sensibilité (indice de Sobol)"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 25 dissertations for your research on the topic "Analyse de sensibilité (indice de Sobol)".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and a bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, whenever the relevant metadata are available.
Browse dissertations from a wide variety of disciplines and compile an accurate bibliography.
Chastaing, Gaëlle. „Indices de Sobol généralisés pour variables dépendantes“. Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM046.
A mathematical model aims at characterizing a complex system or process that is too expensive to experiment with. In such a model, which is often strongly non-linear, the input parameters can be affected by a large uncertainty, including measurement errors or a lack of information. Global sensitivity analysis is a stochastic approach whose objective is to identify and rank the input variables that drive the uncertainty of the model output. Through this analysis, it is then possible to reduce the model dimension and the variation in the model output. To reach this objective, the Sobol indices are commonly used. Based on the functional ANOVA decomposition of the output, also called the Hoeffding decomposition, they rest on the assumption that the inputs are independent. Our contribution is an extension of the Sobol indices to models with non-independent inputs. On the one hand, we propose a generalized functional decomposition whose components are subject to specific orthogonality constraints. This decomposition leads to the definition of generalized sensitivity indices able to quantify the dependent inputs' contribution to the model variability. On the other hand, we propose two numerical methods to estimate these indices. The first is well suited to models with independent pairs of dependent input variables; it proceeds by solving a linear system involving suitable projection operators. The second method can be applied to more general models. It relies on the recursive construction of functional systems satisfying the orthogonality properties of the summands of the generalized decomposition. In parallel, we illustrate both methods on numerical examples to test the efficiency of the techniques.
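For readers unfamiliar with the classical (independent-input) Sobol indices that this thesis generalizes, the first-order index S_i = Var(E[Y|X_i]) / Var(Y) can be sketched with a standard Monte Carlo pick-freeze estimator. This is a generic textbook illustration on an assumed toy model, not the generalized estimators for dependent inputs developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # additive toy model with known variance decomposition: Var(Y) = 1 + 4 = 5,
    # so the analytic first-order indices are S1 = 0.2 and S2 = 0.8
    return X[:, 0] + 2.0 * X[:, 1]

N, d = 100_000, 2
A = rng.standard_normal((N, d))   # two independent sample matrices
B = rng.standard_normal((N, d))
yA = model(A)

S = np.empty(d)
for i in range(d):
    C = B.copy()
    C[:, i] = A[:, i]             # "freeze" the i-th input from A
    yC = model(C)
    # pick-freeze estimate of Var(E[Y|X_i]) / Var(Y)
    S[i] = (np.mean(yA * yC) - np.mean(yA) * np.mean(yC)) / np.var(yA)

print(S)  # close to the analytic values [0.2, 0.8]
```

The estimator only requires model evaluations, no knowledge of the model's internal structure, which is why variance-based methods are popular for expensive black-box codes.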
Gayrard, Emeline. „Analyse bayésienne de la gerbe d'éclats provoquée par l'explosion d'une bombe à fragmentation naturelle“. Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC039/document.
During this thesis, a method for the statistical analysis of the spray of bomb fragments, in particular of their masses, has been developed. Three samples of incomplete experimental data and a mechanical model that simulates the explosion of a ring were available. First, a statistical model based on the mechanical model was designed to generate data similar to those of an experiment. Then, the distribution of the masses was studied. The classical methods of analysis not being accurate enough, a new method was developed. It consists in representing the mass by a random variable built from a polynomial chaos basis. This method gives good results, but it does not take into account the links between fragments. Therefore, we decided to model the masses by a stochastic process rather than a random variable. The range of the fragments, which depends on the masses, was also modelled by a process. Finally, a sensitivity analysis was carried out on this range with Sobol indices. Since these indices apply to random variables, it was necessary to adapt them to stochastic processes in a way that takes the links between fragments into account. In the last part, it is shown how the results of this analysis could be improved. Specifically, the indices presented in the last part are adapted to dependent variables and could therefore be suitable for processes with non-independent increments.
Chastaing, Gaëlle. „Indices de Sobol généralisés pour variables dépendantes“. Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00930229.
Kamari, Halaleh. „Qualité prédictive des méta-modèles construits sur des espaces de Hilbert à noyau auto-reproduisant et analyse de sensibilité des modèles complexes“. Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASE010.
In this work, the problem of estimating a meta-model of a complex model, denoted m, is considered. The model m depends on d input variables X1, ..., Xd that are independent and have a known law. The meta-model, denoted f*, approximates the Hoeffding decomposition of m and allows one to estimate its Sobol indices. It belongs to a reproducing kernel Hilbert space (RKHS), denoted H, which is constructed as a direct sum of Hilbert spaces (Durrande et al. (2013)). The estimator of the meta-model, denoted f^, is calculated by minimizing a least-squares criterion penalized by the sum of the Hilbert norm and the empirical L2-norm (Huet and Taupin (2017)). This procedure, called RKHS ridge group sparse, makes it possible both to select and to estimate the terms in the Hoeffding decomposition, and therefore to select the non-zero Sobol indices and estimate them. It even makes it possible to estimate high-order Sobol indices, a point known to be difficult in practice. This work consists of a theoretical part and a practical part. In the theoretical part, I established upper bounds on the empirical L2 risk and the L2 risk of the estimator f^, that is, upper bounds with respect to the L2-norm and the empirical L2-norm for the distance between the model m and its estimator f^ in the RKHS H. In the practical part, I developed an R package, called RKHSMetaMod, that implements the RKHS ridge group sparse procedure and a special case of it called the RKHS group lasso procedure. This package can be applied to a known model that is calculable at all points or to an unknown regression model. In order to optimize the execution time and the storage memory, all of the functions of the RKHSMetaMod package, except for one function written in R, are written using the C++ libraries GSL and Eigen. These functions are then interfaced with the R environment in order to offer a user-friendly package.
The performance of the package functions, in terms of the predictive quality of the estimator and the estimation of the Sobol indices, is validated by a simulation study.
Tissot, Jean-yves. „Sur la décomposition ANOVA et l'estimation des indices de Sobol'. Application à un modèle d'écosystème marin“. Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM064/document.
In the fields of modelling and numerical simulation, simulators generally depend on several input parameters whose impact on the model outputs is not always well known. The main goal of sensitivity analysis is to better understand how the model outputs are sensitive to parameter variations. One of the most competitive methods for handling this problem when complex and potentially highly non-linear models are considered is based on the ANOVA decomposition and the Sobol' indices. More specifically, the latter allow one to quantify the impact of each parameter on the model response. In this thesis, we are interested in the estimation of the Sobol' indices. In the first part, we rigorously revisit existing methods in the light of discrete harmonic analysis on cyclic groups and randomized orthogonal arrays. This allows us to study the theoretical properties of these methods and to introduce generalizations. In the second part, we study the Monte Carlo method for the Sobol' indices and introduce a new approach to reduce the number of simulations it requires. In parallel with this theoretical work, we apply these methods to a marine ecosystem model.
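To illustrate the kind of simulation cost such Monte Carlo schemes incur (the cost this thesis seeks to reduce), here is a minimal sketch of one classical estimator, Jansen's formula for the total-effect index ST_i = E[Var(Y|X_~i)] / Var(Y), on an assumed toy model; it is not the estimators developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # additive toy model: no interactions, so total and first-order
    # indices coincide at the analytic values [0.2, 0.8]
    return X[:, 0] + 2.0 * X[:, 1]

N, d = 100_000, 2
A = rng.standard_normal((N, d))
B = rng.standard_normal((N, d))
yA = model(A)

ST = np.empty(d)
for i in range(d):
    C = A.copy()
    C[:, i] = B[:, i]            # resample only the i-th input
    yC = model(C)
    # Jansen's estimator of E[Var(Y | X_~i)] / Var(Y)
    ST[i] = np.mean((yA - yC) ** 2) / (2.0 * np.var(yA))

print(ST)  # close to [0.2, 0.8] for this additive model
```

Note the cost structure: N(d + 1) model evaluations for all d total indices, which quickly becomes prohibitive for expensive simulators and motivates more frugal designs.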
Tissot, Jean-Yves. „Sur la décomposition ANOVA et l'estimation des indices de Sobol'. Application à un modèle d'écosystème marin“. Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00762800.
Jannet, Basile. „Influence de la non-stationnarité du milieu de propagation sur le processus de Retournement Temporel (RT)“. Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22436/document.
The aim of this thesis is to measure and quantify the impact of uncertainties on the Time Reversal (TR) process. These random variations, coming from diverse sources, can have a huge influence if they happen between the TR steps. From this perspective, the Stochastic Collocation (SC) method is used. Very good results in terms of effectiveness and accuracy had been reported in previous studies in ElectroMagnetic Compatibility (EMC), and the conclusions remain excellent here on TR problems. However, when the problem dimension rises (high number of Random Variables (RVs)), the SC method reaches its limits and its efficiency decreases. Therefore, a study of Sensitivity Analysis (SA) techniques has been carried out. Indeed, these methods emphasize the respective influences of the random variables of a model. Among the various quantitative and qualitative SA techniques, the Morris method and the Sobol total sensitivity indices have been adopted. Since only a partition of the inputs (singling out the predominant RVs) is required, they deliver results at a lower cost. On this basis a novel method is built, combining SA techniques and the SC method. In a first step, the model is reduced with SA techniques. Then the shortened model, in which only the prevailing inputs remain, allows the SC method to show its efficiency once again with high accuracy. This global process has been validated against Monte Carlo results on several analytical and numerical TR cases subject to random variations.
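The Morris screening step mentioned above can be sketched in a few lines: one-at-a-time trajectories yield "elementary effects" per input, and the mean absolute effect mu* ranks input influence. This is a generic illustration on an assumed toy model, not the EMC/TR application of the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # toy model: x0 dominates, x1 matters a little, x2 is inert
    return 5.0 * x[0] + 0.5 * x[1] ** 2

d, r, delta = 3, 50, 0.1
effects = [[] for _ in range(d)]
for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=d)   # random trajectory start
    y0 = model(x)
    for i in rng.permutation(d):                # perturb inputs one at a time
        x[i] += delta
        y1 = model(x)
        effects[i].append((y1 - y0) / delta)    # elementary effect of input i
        y0 = y1

# mu* (mean absolute elementary effect) is the usual screening measure
mu_star = np.array([np.mean(np.abs(e)) for e in effects])
print(mu_star)  # x0 dominates, x2 is inert
```

Screening like this costs only r(d + 1) model runs, which is why it is attractive for pre-selecting the predominant inputs before a more expensive variance-based or collocation analysis.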
Andrianandraina. „Approche d'éco-conception basée sur la combinaison de l'analyse de cycle de vie et de l'analyse de sensibilité : Cas d'application sur le cycle de vie du matériau d'isolation thermique biosourcé, le béton de chanvre“. Ecole centrale de Nantes, 2014. http://www.theses.fr/2014ECDN0005.
The purpose of this PhD thesis is to establish an ecodesign method, based on Life Cycle Assessment, that allows the identification of action levers specific to each economic actor in the life cycle of a product, for improved environmental performance. Life Cycle Assessment was coupled with two methods of sensitivity analysis in five steps: (i) definition of objectives and system; (ii) modelling and calculation of inventory and impact indicators, with different approaches for foreground and background sub-systems; (iii) characterization of parameters using a typology specific to the control possibilities of the considered economic actor; (iv) application of two sensitivity analysis methods (Morris and Sobol); and (v) interpretation of the results in order to identify potentially efficient improvements. The approach was applied to a hemp concrete insulation product, covering agricultural production, industrial transformation of hemp fibres, and the use of hemp concrete as a thermal insulator for buildings. The approach yields potential technological scenarios improving environmental performance for each economic actor in the product's life cycle. Performing the method presently requires additional information, but this cost will probably be paid back in the future by driving more robust choices for a given product.
Gilquin, Laurent. „Échantillonnages Monte Carlo et quasi-Monte Carlo pour l'estimation des indices de Sobol' : application à un modèle transport-urbanisme“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM042/document.
Land Use and Transportation Integrated (LUTI) models have become a standard for representing the interactions between land use and the transportation of goods and people in a territory. These models are mainly used to evaluate alternative planning scenarios, simulating their impact on land cover and travel demand. LUTI models, like other mathematical models used in various fields, are most of the time based on complex computer codes. These codes often involve poorly known inputs whose uncertainty can have significant effects on the model outputs. Global sensitivity analysis methods are useful tools for studying the influence of the model inputs on its outputs. Among the large number of available approaches, the variance-based method introduced by Sobol' allows one to calculate sensitivity indices called Sobol' indices. These indices quantify the influence of each model input on the outputs and can detect existing interactions between inputs. In this framework, we favour a particular method based on replicated designs of experiments, called the replication method. This method appears to be the most suitable for our application and is advantageous as it requires a relatively small number of model evaluations to estimate first-order or second-order Sobol' indices. This thesis focuses on extensions of the replication method to address constraints arising in our application to the LUTI model Tranus, such as the presence of dependency among the model inputs, as well as multivariate outputs. In addition, we propose a recursive approach to sequentially estimate Sobol' indices. The recursive approach is based on the iterative construction of stratified designs, Latin hypercubes and orthogonal arrays, and on the definition of a new stopping criterion. With this approach, more accurate Sobol' estimates are obtained while recycling previous sets of model evaluations.
We also propose to combine this approach with quasi-Monte Carlo sampling. An application of our contributions to the LUTI model Tranus is presented.
Causse, Mathieu. „Contributions à l'extension de la méthode des Sparse Grids pour les calculs de fiabilité en modélisation de processus“. Toulouse 3, 2010. http://www.theses.fr/2010TOU30336.
The aim of this thesis is to show the efficiency of the Sparse Grid approximation method applied to high-dimensional real-life problems. For such problems, detecting the main parameters is fundamental. First we introduce the Sparse Grid approximation method and emphasize its adaptive form. We then demonstrate the method on standard test functions, highlighting the specific behaviour of Sparse Grids in detecting the main parameters. Thanks to the excellent performance properties of the method, we apply it to real-life problems and obtain accurate results at a reduced computational cost. The first application is dedicated to a pollutant diffusion problem; the second aims to evaluate the performance of a power network.
Niang, Ibrahima. „Quantification et méthodes statistiques pour le risque de modèle“. Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1015/document.
In finance, model risk is the risk of loss resulting from using models. It is a complex risk which covers many different situations, especially estimation risk and the risk of model misspecification. This thesis focuses on the model risk inherent in yield and credit curve construction methods, and on the consistency of Sobol indices with respect to stochastic orderings of model parameters. It is divided into three chapters. Chapter 1 focuses on the model risk embedded in yield and credit curve construction methods. We analyse in particular the uncertainty associated with the construction of yield curves or credit curves. In this context, we derive arbitrage-free bounds for the discount factor and the survival probability at the most liquid maturities. In Chapter 2, we quantify the impact of parameter risk through global sensitivity analysis and the theory of stochastic orders. We analyse in particular how Sobol indices are transformed after an increase in parameter uncertainty with respect to the dispersive or excess wealth orders. Chapter 3 focuses on the contrast quantile index. We link the latter with the risk measure CTE, and then analyse in which circumstances an increase in parameter uncertainty, in the sense of the dispersive or excess wealth orders, implies an increase in the contrast quantile index. We finally propose an estimation procedure for this index and prove, under some conditions, that our estimator is consistent and asymptotically normal.
Abily, Morgan. „Modélisation hydraulique à surface libre haute-résolution : utilisation de données topographiques haute-résolution pour la caractérisation du risque inondation en milieux urbains et industriels“. Thesis, Nice, 2015. http://www.theses.fr/2015NICE4121/document.
High-Resolution (infra-metric) topographic data, including LiDAR and photo-interpreted datasets, are becoming commonly available at a large range of spatial extents, such as the municipality or industrial-site scale. These datasets are promising for High-Resolution (HR) Digital Elevation Model (DEM) generation, allowing the inclusion of fine above-ground structures that influence overland flow hydrodynamics in urban environments. DEMs are one key input in hydroinformatics for free-surface hydraulic modelling using standard numerical codes based on the 2D Shallow Water Equations (SWEs). Nonetheless, several categories of technical and numerical challenges arise from the use of this type of data with standard 2D SWE numerical codes. The objective of this thesis is to assess the possibilities, advantages and limits of using High-Resolution (HR) topographic data within standard categories of 2D hydraulic numerical modelling tools for flood hazard assessment purposes. The concepts of HR topographic data and 2D SWE-based numerical modelling are recalled. HR modelling is performed for (i) intense runoff and (ii) a river flood event, using LiDAR and photo-interpreted datasets. Tests incorporating HR surface elevation data in standard modelling tools range from the industrial-site scale to the scale of a megacity district (Nice, France). Several standard 2D SWE-based codes are tested (Mike 21, Mike 21 FM, TELEMAC-2D, FullSWOF_2D). Tools and methods for assessing uncertainty in 2D SWE-based models are developed to perform a spatial Global Sensitivity Analysis related to the use of HR topographic data. The results show the importance of the modeller's choices regarding the ways to integrate HR topographic information into models.
Solís, Maikol. „Conditional covariance estimation for dimension reduction and sensitivity analysis“. Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2354/.
This thesis focuses on the estimation of conditional covariance matrices and their applications, in particular to dimension reduction and sensitivity analysis. In Chapter 2, we are in a context of high-dimensional non-linear regression. The main objective is to use the sliced inverse regression methodology. Using a functional operator depending on the joint density, we apply a Taylor decomposition around a preliminary estimator. We prove two things: our estimator is asymptotically normal with a variance depending only on the linear part, and this variance is efficient from the Cramér-Rao point of view. In Chapter 3, we study the estimation of conditional covariance matrices, first coordinate-wise, where these parameters depend on the unknown joint density, which we replace by a kernel estimator. We prove that the mean squared error of the nonparametric estimator has a parametric rate of convergence if the joint distribution belongs to some class of smooth functions; otherwise, we get a slower rate depending on the regularity of the model. For the estimator of the whole matrix, we apply a "banding"-type regularization. Finally, in Chapter 4, we apply our results to estimate the Sobol or sensitivity indices. These indices measure the influence of the inputs with respect to the output in complex models. The advantage of our implementation is that we can estimate the Sobol indices without using computationally expensive Monte Carlo methods. Some illustrations presented in the chapter show the capabilities of our estimator.
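The idea of estimating first-order Sobol indices from a single i.i.d. sample, by nonparametric estimation of the conditional mean rather than a dedicated Monte Carlo design, can be sketched crudely with a binned regression estimate of Var(E[Y|X_i]); this toy sketch (model, bin count, and sample size are all assumptions) stands in for the kernel-based estimators of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# a single i.i.d. sample of the model, with no pick-freeze design
N = 200_000
X = rng.uniform(-1.0, 1.0, size=(N, 2))
Y = X[:, 0] + 2.0 * X[:, 1]      # toy model: analytic indices 0.2 and 0.8

def first_order_index(x, y, n_bins=50):
    """Estimate Var(E[Y|X_i]) / Var(Y) by binning x (crude nonparametric regression)."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    bin_means = np.bincount(idx, weights=y, minlength=n_bins) / counts
    # variance of the conditional mean across bins, weighted by bin occupancy
    cond_mean_var = np.average((bin_means - np.mean(y)) ** 2, weights=counts)
    return cond_mean_var / np.var(y)

S1 = first_order_index(X[:, 0], Y)
S2 = first_order_index(X[:, 1], Y)
print(S1, S2)   # close to 0.2 and 0.8 for this model
```

The appeal is the same as in the thesis: all indices are recovered from one sample of size N, instead of a separate N-run design per input.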
Mtibaa, Mohamed. „Optimisation de couplage Procédé/Propriétés/Fiabilité des Structures en Matériaux Composites Fonctionnels“. Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMLH03.
This research focuses on the challenges of, and interactions between, the manufacturing processes (Resin Transfer Molding (RTM) and Compression Resin Transfer Molding (CRTM)), the mechanical properties, and the reliability of composite material structures, more specifically functional composites. A number of numerical models have been developed to simulate the impregnation of the suspension (resin + particles) through the fibrous medium in the RTM and CRTM processes. These models are validated by comparing their results with experimental, semi-analytical, and analytical results from the literature. A parametric study is carried out to demonstrate the impact of various process parameters on the particle distribution in the final composite, and a comparison between the injection and compression modes is made. The results of this part show that the distribution of particles in the final part depends on the initial concentration, the distance travelled, and the initial fibre volume fraction; however, it is independent of the injection and compression parameter values. It is also observed that the CRTM process with imposed-pressure injection and imposed-force compression is the most favourable scenario for producing composite parts. To control the final particle distribution in a composite material manufactured by the RTM process, two key steps have been identified. The first step is a sensitivity analysis examining three parameters: the temporal evolution of the initial injected particle concentration, the injection pressure field, and the initial fibre porosity. The conclusions indicate a minimal impact of the initial porosity and the injection pressure field, while the evolution of the initial concentration of the injected particles has a dominant effect. In a second step, an optimization algorithm is implemented in the numerical model of the RTM process.
It is used to determine the optimal configuration of the evolution of the initial injected particle concentration, in order to bring the particle distribution in the final composite close to the desired profiles. The results obtained with the genetic algorithm provide very satisfactory control of this distribution. To complete this section, a model estimating the mechanical properties of the manufactured part is developed. A positive correlation is found between the particle fraction and certain mechanical properties, namely the elastic moduli E11 and E22 and the shear moduli G12 and G23, whereas the Poisson's ratio (Nu12) is inversely proportional to the particle fraction; the shear modulus G12 is the most significantly influenced by this fraction. Next, the control of the mechanical properties of composite parts manufactured by the CRTM process is targeted and compared to the results of the RTM process. The conclusions reveal that the RTM process offers better control of these properties, whereas the CRTM process considerably improves the mechanical properties of the parts thanks to its compression phase, which increases the fibre volume fraction and consequently enhances these properties. Finally, a static analysis is conducted based on a numerical model using the finite element method (Ansys APDL). This model is combined with those of the CRTM process and of the mechanical property calculation. An optimization algorithm is integrated into our global model to adapt the mechanical properties of the composite part to the configuration (cantilever or simply supported) and the load distribution. Moreover, it minimizes the weight of the composite part and ensures that predetermined mechanical constraints, such as the maximum deformation limit, are respected. The results obtained correspond perfectly to these objectives.
Saint-Geours, Nathalie. „Analyse de sensibilité de modèles spatialisés : application à l'analyse coût-bénéfice de projets de prévention du risque d'inondation“. Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20203/document.
Variance-based global sensitivity analysis is used to study how the variability of the output of a numerical model can be apportioned to the different sources of uncertainty in its inputs. It is an essential component of model building, as it helps to identify the model inputs that account for most of the model output variance. However, this approach is seldom applied in the Earth and Environmental Sciences, partly because most of the numerical models developed in this field include spatially distributed inputs or outputs. Our research work aims to show how global sensitivity analysis can be adapted to such spatial models, and more precisely how to cope with two issues: (i) the presence of spatial auto-correlation in the model inputs, and (ii) scaling issues. We base our research on a detailed study of the numerical code NOE, a spatial model for the cost-benefit analysis of flood risk management plans. We first investigate how variance-based sensitivity indices can be computed for spatially distributed model inputs. We focus on the "map labelling" approach, which can handle any complex spatial structure of uncertainty in the model inputs and assess its effect on the model output. Next, we explore how scaling issues interact with the sensitivity analysis of a spatial model. We define "block sensitivity indices" and "site sensitivity indices" to account for the role of the spatial support of the model output, and we establish the properties of these indices under specific conditions. In particular, we show that the relative contribution of an uncertain, spatially distributed model input to the variance of the model output increases with its correlation length and decreases with the size of the spatial support considered for model output aggregation.
By applying our results to the NOE modelling chain, we also draw a number of lessons for better dealing with uncertainties in flood damage modelling and in the cost-benefit analysis of flood risk management plans.
Beaulieu, Lucie. „Analyse de sensibilité d'un indice de risque de perte de phosphore en zone cultivée“. Thesis, Université Laval, 2005. http://www.theses.ulaval.ca/2005/22790/22790.pdf.
Der volle Inhalt der QuelleAlhossen, Iman. „Méthode d'analyse de sensibilité et propagation inverse d'incertitude appliquées sur les modèles mathématiques dans les applications d'ingénierie“. Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30314/document.
Approaches for studying uncertainty are of great necessity in all disciplines. While the forward propagation of uncertainty has been investigated extensively, backward propagation is still understudied. In this thesis, a new method for the backward propagation of uncertainty is presented. The aim of this method is to determine the input uncertainty starting from given data on the uncertain output. In parallel, sensitivity analysis methods are essential for revealing the influence of the inputs on the output in any modelling process; this helps to reveal the most significant inputs to be carried forward in an uncertainty study. In this work, the Sobol sensitivity analysis method, one of the most efficient global sensitivity analysis methods, is considered and its application framework is developed. This method relies on the computation of sensitivity indices, called Sobol indices, which quantify the effect of the inputs on the output. Usually, the inputs in the Sobol method are modelled as continuous random variables in order to compute the corresponding indices. In this work, the Sobol method is shown to give reliable results even when applied in the discrete case. In addition, the application of the Sobol method is advanced by studying the variation of these indices with respect to some factors of the model or some experimental conditions. The conclusions derived from the study of this variation help in determining different characteristics of, and information about, the inputs. Moreover, these inferences allow one to identify the best experimental conditions under which the inputs can be estimated.
Vauchel, Nicolas. „Estimation des indices de Sobol à l'aide d'un métamodèle multi-éléments : application à la dynamique du vol“. Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILN008.
The thesis addresses a concrete aircraft safety issue. The post-stall flight domain is a complex flight domain where flows around an airfoil may be highly unstable and massively stalled. In this domain, which can be reached on purpose or accidentally, the usual controls are less effective or completely ineffective, which can endanger the pilot and the passengers. The thesis deals with the determination of flight predictions in the post-stall flight domain, their dependence on the selected model structure, and the uncertainties of the experimental data the model relies on. The dynamics of the motion of the aircraft are governed by a dynamic system of non-linear ordinary differential equations. In these equations, the effects of the fluid on the aircraft are represented by the global aerodynamic coefficients, the dimensionless forces and moments applied by the fluid on the aircraft. These coefficients depend on a high number of variables in a non-linear fashion, among them the geometry of the aircraft, its velocity and rotation rates relative to the Earth, and characteristics of the surrounding flow. A representation model with a selected structure is determined for every aerodynamic coefficient, in order to represent these complex dependences. This model relies on experimental data obtained on a scale model, free-flight data on a real aircraft being too expensive and too risky to obtain in the post-stall domain. Another way of obtaining data would be to use computational simulations; nevertheless, the complex and unsteady flows around the 3D geometry of the aircraft make such simulations too expensive with current resources, even if some recent studies have begun to explore this direction of research. The models selected in the thesis are built on experimental data only. In the dynamic system, the global aerodynamic coefficients are evaluated by interpolation in these databases according to the selected model structure.
Selecting a simplified model structure makes the model deficient; moreover, as these models rely on experimental data, they are uncertain. The gaps and uncertainties of the model have an impact on the flight predictions, and the initial objective of the thesis is to study this impact. During the thesis, new scientific objectives appeared, going beyond the scope of Flight Dynamics. First, a new multi-element surrogate model for Uncertainty Quantification, based on modern Machine Learning methods, is developed; multi-element surrogate models were introduced to address the loss of accuracy of Polynomial Chaos models in the presence of discontinuities. Then, a formula linking the Sobol sensitivity indices to the coefficients of a multi-element surrogate model is derived. These results are used in the case of Flight Dynamics to address the issue raised in the initial objective of the thesis. The numerous bifurcations of the dynamic system can manifest as discontinuities and/or irregularities in the evolution of the state variables with respect to the uncertain parameters. The Sensitivity Analysis and Uncertainty Quantification methods developed in the thesis are therefore good candidates for analysing the system.
Jacques, Julien. „Contributions à l'analyse de sensibilité et à l'analyse discriminante généralisée“. Phd thesis, Université Joseph Fourier (Grenoble), 2005. http://tel.archives-ouvertes.fr/tel-00011169.
Der volle Inhalt der Quelle
Global sensitivity analysis of a mathematical model studies how its output variables react to perturbations of its inputs. Variance-based methods quantify the shares of the variance of the model response that are due to each input variable and to each subset of input variables. The first problem addressed is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a mutation of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem related to sensitivity analysis, studied during this thesis, is that of models with correlated inputs. Since the classical sensitivity indices have no meaning (from an interpretation point of view) in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work.
Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using the information contained in a training sample, when these two samples do not come from the same population. This work extends the existing methods, developed in a Gaussian framework, to the case of binary data. A public-health application illustrates the usefulness of the generalized discrimination models thus defined.
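The variance-based indices discussed in the entries above admit a compact Monte Carlo estimator. The following is a minimal pick-freeze sketch in plain Python for independent inputs; the toy model, sample sizes, and function names are illustrative and not taken from any of the theses listed here:

```python
import random

def model(x1, x2):
    # Toy additive model: Var(Y) = 1/12 + 4/12, so S1 = 0.2 and S2 = 0.8
    return x1 + 2.0 * x2

def sobol_first_order(n=50_000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol
    indices of `model` for independent inputs X1, X2 ~ U(0, 1)."""
    rng = random.Random(seed)
    a = [(rng.random(), rng.random()) for _ in range(n)]
    b = [(rng.random(), rng.random()) for _ in range(n)]
    ya = [model(*p) for p in a]
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    indices = []
    for i in range(2):
        # Keep coordinate i from sample A, resample the other from B
        yab = [model(*[a[k][j] if j == i else b[k][j] for j in range(2)])
               for k in range(n)]
        # S_i = (E[Y_A * Y_AB] - mean^2) / Var(Y)
        cov = sum(ya[k] * (yab[k] - mean) for k in range(n)) / n
        indices.append(cov / var)
    return indices
```

For this additive model the estimates converge to the analytical values 0.2 and 0.8; with dependent inputs, this decomposition loses its interpretation, which is precisely the gap the generalized indices above address.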
Garcia, Hernandez Elizabeth Antonia. „Analyse de sensibilité globale appliquée à l'évaluation des risques thermiques Kinetic modeling using temperature as an on-line measurement: application to the hydrolysis of acetic anhydride, a revisited kinetic model“. Thesis, Normandie, 2020. http://www.theses.fr/2020NORMIR10.
Der volle Inhalt der Quelle
Thermal runaway is one of the main critical events in chemical-industry accidents. To evaluate the risk of such events, a thermal risk assessment must be performed. Nevertheless, a thermal risk assessment alone does not reveal which model inputs most influence the thermal risk. Global sensitivity analysis is therefore proposed as a new perspective for evaluating the influence of the model inputs, and their interactions, on thermal risk parameters. The following parameters were studied: the maximum reaction temperature, the temperature rise, and the time to reach the maximum reaction temperature. The method was applied to two reaction systems: a homogeneous single-reaction system, the hydrolysis of acetic anhydride, and a two-phase multi-reaction system, the epoxidation of cottonseed oil.
Sohier, Henri. „Modélisation, analyse et optimisation d’un largage de fusée spatiale depuis un porteur de type avion“. Thesis, Toulouse, ISAE, 2014. http://www.theses.fr/2014ESAE0044/document.
Der volle Inhalt der Quelle
In an air launch to orbit, a space rocket is launched from a carrier aircraft. Air launch to orbit appears particularly interesting for small satellites. This Ph.D. thesis is part of the Pegasus program of the French space agency CNES and follows the development of a small-scale demonstrator called EOLE. It focuses on the very sensitive separation phase. The similitude constraints that have to be respected to study the full-scale system with EOLE are first identified. A mass problem limits the possibility of directly extrapolating data obtained with EOLE to a larger scale in a deterministic approach. It was therefore decided to study the separation in a probabilistic approach by developing a new multi-body model. A great variety of uncertainties are taken into account, from the aerodynamic interactions to the atmospheric turbulence, the separation mechanism, and the launch trajectories. A new performance criterion is developed to quantify the safety of the separation phase; it is based on elementary geometries and could be used in other contexts. A sensitivity analysis is applied to estimate the influence of the uncertainties on the performance criterion. Given the large number of uncertainty factors and the non-negligible simulation time, the model is first simplified. The Morris method is applied to identify the factors with low influence, which can then be fixed to a given value. This is a frequent step, but it is shown that there is a high risk of fixing the wrong factors, which would bias any further study. This risk is significantly reduced by improving the sampling of the factors, the calculation of their influence, and the statistical treatment of the results. This new method is used to estimate the influence of the uncertainties at separation, and safety is improved by optimizing the launch trajectories.
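The Morris screening step mentioned in this abstract ranks factors by the average magnitude of their elementary effects. A minimal radial one-at-a-time sketch of that idea follows; it simplifies the actual Morris trajectory design, and the toy model and function names are illustrative only:

```python
import random

def mu_star(model, dim, n_points=50, delta=0.1, seed=0):
    """Radial one-at-a-time screening in the spirit of the Morris
    method: sample base points, perturb one factor at a time, and
    average the absolute elementary effects per factor (mu*)."""
    rng = random.Random(seed)
    effects = [[] for _ in range(dim)]
    for _ in range(n_points):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        y0 = model(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            # Elementary effect of factor i at base point x
            effects[i].append(abs(model(xp) - y0) / delta)
    # Large mu* flags an influential factor; near-zero mu* marks a
    # candidate to be fixed at a nominal value before further study
    return [sum(e) / n_points for e in effects]

# Toy model in which the third factor has no influence at all
scores = mu_star(lambda x: 4.0 * x[0] + x[1] ** 2 + 0.0 * x[2], dim=3)
```

The abstract's warning applies here: with too few points or a poor sampling of the factor space, a genuinely influential factor can receive a deceptively small mu* and be wrongly fixed.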
Wu, QiongLi. „Sensitivity Analysis for Functional Structural Plant Modelling“. Phd thesis, Ecole Centrale Paris, 2012. http://tel.archives-ouvertes.fr/tel-00719935.
Der volle Inhalt der QuelleParveaud, Claude-Eric. „Propriétés radiatives des couronnes de Noyers (Juglans nigra x J. regia) et croissance des pousses annuelles - Influence de la géométrie du feuillage, de la position des pousses et de leur climat radiatif“. Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2006. http://tel.archives-ouvertes.fr/tel-00087909.
Der volle Inhalt der QuelleGohore, Bi Goue D. „Évaluation et contrôle de l'irrégularité de la prise médicamenteuse : proposition et développement de stratégies rationnelles fondées sur une démarche de modélisations pharmacocinétiques et pharmacodynamiques“. Thèse, 2010. http://hdl.handle.net/1866/4535.
Der volle Inhalt der Quelle
The heterogeneity of PK and/or PD profiles in patients undergoing the same treatment regimen should be avoided during treatment or clinical trials. Two traditional approaches are used to this end. One builds on the interactive synergy between the health caregiver and the patient to encourage the patient to take an active part in his own compliance. The other is to develop drugs or dosing regimens that forgive poor compliance. The main objective of this thesis was to develop new methodologies for assessing and monitoring the impact of irregular drug intake on the therapeutic outcome. Specifically, the first phase of this research was to develop algorithms for evaluating the efficacy of a treatment by extending classical breakpoint-estimation methods to the situation of variable drug disposition. The method introduces the "efficiency" of a PK profile by using the efficacy function as a weight in the area-under-the-curve (AUC) formula. It gives a more powerful PK/PD link and reveals, through several examples, interesting issues concerning the uniqueness of therapeutic outcome indices and antibiotic-resistance problems. The second part of the thesis was to determine optimal sampling times by accounting for the inter-variability in drug disposition in collectively treated pigs. For this, we developed an advanced mathematical model able to generate different PK profiles for various feeding strategies. Three algorithms were implemented to identify the optimal sampling times under the criterion of minimizing the PK inter-variability. The median-based method yielded sampling periods suitable in terms of convenience for farm staff and animal welfare. The last part of our research was to establish a rational way to rank drugs in terms of their "forgiveness", based on their PK/PD properties.
For this, a global sensitivity analysis (GSA) was performed to identify the parameters most sensitive to dose omissions. We then proposed a comparative drug-forgiveness index to rank drugs in terms of their tolerability to non-compliance, with an application to four calcium channel blockers. The resulting classification is in concordance with what has been reported in experimental studies. The strategies developed in this Ph.D. project, essentially based on the analysis of the complex relationships between drug-intake history and pharmacokinetic and pharmacodynamic properties, can assess and regulate the impact of non-compliance with an acceptable uncertainty. In general, the algorithms implementing these approaches will be efficient tools for patient monitoring during dosing regimens. Moreover, they will contribute to controlling the harmful impact of non-compliance through the development of new drugs able to tolerate sporadic dose omissions.
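The efficacy-weighted AUC idea described in this abstract can be sketched numerically: weight the concentration curve by an efficacy function before integrating. The Hill-type weight, its parameters, and the toy concentration profile below are hypothetical placeholders, not the thesis's actual formulation:

```python
import math

def emax(c, e_max=1.0, ec50=2.0):
    # Hill-type efficacy function (hypothetical parameters)
    return e_max * c / (ec50 + c)

def weighted_auc(times, conc, weight=emax):
    """Trapezoidal AUC of the efficacy-weighted concentration profile:
    the PK profile is scored by its 'efficiency' rather than by raw
    drug exposure."""
    vals = [weight(c) for c in conc]
    return sum((times[i + 1] - times[i]) * (vals[i] + vals[i + 1]) / 2.0
               for i in range(len(times) - 1))

# Toy one-compartment elimination profile C(t) = C0 * exp(-k*t), 0-12 h
times = [0.5 * i for i in range(25)]
conc = [5.0 * math.exp(-0.3 * t) for t in times]
score = weighted_auc(times, conc)
```

Because the weight saturates at high concentrations, two profiles with equal raw AUC can receive different efficiency scores, which is what makes such an index usable for comparing the "forgiveness" of dosing regimens under dose omissions.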