Dissertations / Theses on the topic 'Méthode de krigeage'
Consult the top 45 dissertations / theses for your research on the topic 'Méthode de krigeage.'
Hannat, Ridha. "Optimisation d'un système d'antigivrage à air chaud pour aile d'avion basée sur la méthode du krigeage dual." Mémoire, École de technologie supérieure, 2014. http://espace.etsmtl.ca/1302/1/HANNAT_Ridha.pdf.
Bouhlel, Mohamed Amine. "Optimisation auto-adaptative en environnement d’analyse multidisciplinaire via les modèles de krigeage combinés à la méthode PLS." Thesis, Toulouse, ISAE, 2016. http://www.theses.fr/2016ESAE0002/document.
Aerospace turbomachinery consists of multiple blades whose main function is to transfer energy between the air and the rotor. The bladed disks of the compressor are particularly important because they must satisfy both aerodynamic performance and mechanical resistance requirements. Mechanical and aerodynamic optimization of blades consists in searching for the parameterized aerodynamic shape that offers the best compromise between a set of constraints. This PhD introduces a surrogate-based optimization method well adapted to high-dimensional problems, similar to those encountered at Snecma. Our main contributions fall into two parts: the development of Kriging models, and the enhancement of an existing optimization method to handle high-dimensional problems under a large number of constraints. Concerning Kriging models, we propose a new formulation of the covariance kernel that reduces the number of hyper-parameters and thereby accelerates the construction of the metamodel. A known limitation of Kriging models is the estimation of their hyper-parameters, which becomes increasingly difficult as the number of dimensions grows. In particular, the initial design of experiments (for surrogate model construction) requires a large number of points, so the inversion of the covariance matrix becomes time-consuming. Our approach reduces the number of parameters to estimate by using the Partial Least Squares (PLS) regression method, which provides information about the linear relationship between input and output variables. This information is integrated into the Kriging kernel while maintaining the symmetry and positivity properties of the kernel. Thanks to this approach, the construction of these new models, called KPLS, is very fast because few new parameters need to be estimated.
When the covariance kernel is of exponential type, the KPLS method can also be used to initialize the parameters of classical Kriging models and thus accelerate the convergence of their estimation. The resulting method, called KPLS+K, improves the accuracy of the model for multimodal functions. The second main contribution of this PhD is a global optimization method for high-dimensional problems under a large number of constraint functions, built on the KPLS or KPLS+K models. We extended the self-adaptive optimization method known as Efficient Global Optimization (EGO) to high-dimensional constrained problems, and tested several enrichment criteria. The method recovers known global optima on academic problems with up to 50 input variables. It is also evaluated on two industrial cases: "MOPTA" from the automotive industry (124 input variables and 68 constraint functions) and a turbine blade from Snecma (50 input variables and 31 constraint functions). The results show the effectiveness of the method on industrial problems. We also highlight some important limitations.
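The KPLS construction described in this abstract can be sketched in a few lines: PLS weights compress the input dimensions into a handful of kernel hyper-parameters. Below is an illustrative numpy sketch of the idea, not the thesis's code; the toy 10-D function, the fixed `theta` value, and the single-component choice are assumptions.

```python
import numpy as np

def pls_weights(X, y, n_comp=1):
    # First PLS weight vector(s) via NIPALS for a scalar output:
    # each direction is the (deflated) covariance X_res^T y, normalized.
    Xr = X - X.mean(axis=0)
    yr = y - y.mean()
    W = []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w                                   # component scores
        Xr = Xr - np.outer(t, Xr.T @ t) / (t @ t)    # deflate X
        W.append(w)
    return np.array(W)                               # shape (n_comp, dim)

def kpls_kernel(A, B, W, theta):
    # KPLS-style Gaussian kernel: one hyper-parameter theta[l] per PLS
    # component instead of one per input dimension.
    d = 0.0
    for l, w in enumerate(W):
        diff = A[:, None, :] * w - B[None, :, :] * w  # PLS-weighted coordinates
        d = d + theta[l] * (diff ** 2).sum(-1)
    return np.exp(-d)

# toy 10-D function that varies mostly along one linear direction
rng = np.random.default_rng(0)
c = np.linspace(0.5, 0.05, 10)
X = rng.uniform(-1, 1, (40, 10))
y = np.sin(X @ c)

W = pls_weights(X, y, n_comp=1)
K = kpls_kernel(X, X, W, theta=[5.0]) + 1e-10 * np.eye(len(X))
alpha = np.linalg.solve(K, y)                        # simple-kriging weights

Xnew = rng.uniform(-1, 1, (5, 10))
pred = kpls_kernel(Xnew, X, W, theta=[5.0]) @ alpha
```

Full implementations (e.g. the SMT toolbox, to which the author contributed) estimate `theta` by maximum likelihood rather than fixing it, which is precisely where the reduction from 10 parameters to 1 pays off.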
Soilahoudine, Moindzé. "Optimisation de structures aéronautiques : une nouvelle méthode à fidélité adaptative." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30322.
Surrogate-based optimization with adaptive enrichment (Efficient Global Optimization type approaches) may, in spite of its strengths, be prohibitive in terms of computational cost when applied to large-scale problems with several local minima: it requires the resolution of a full numerical model for each simulation, which can lead to intractable studies or to simulation times incompatible with the time allotted for the design of a product. This PhD thesis falls within the scope of optimizing expensive simulation codes by using substitution models of the simulator. These substitution models can be of two types: a metamodel or a reduced-order model. We propose a new methodology for the global optimization of mechanical systems that couples adaptive surrogate-based optimization methods with reduced-order modeling methods. Surrogate-based optimization methods aim to reduce the number of objective function evaluations, while reduced-order model methods aim to reduce the dimensionality of a model and thus its computational cost. The objective of the proposed methodology is therefore to reduce the number of objective function evaluations while, at the same time, significantly reducing the computational expense of each resolution of the full mechanical model. The basic idea of the proposed approach resides in the adaptive construction of the metamodel of the objective function: this construction fuses full and reduced-order models and thus adapts the model fidelity to the accuracy requirements of the optimization at the current iteration. The efficiency of our algorithms is illustrated on two types of applications: (i) the identification of orthotropic elastic constants from full-field displacement measurements in a tensile test on a plate with a hole, and (ii) the stiffness maximization of laminated plates.
The results show that our methodology provides a significant speed-up in computational cost compared to the traditional EGO algorithm.
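The EGO-type enrichment that this thesis builds on is driven by criteria such as Expected Improvement, computed from the kriging mean and standard deviation. A minimal sketch under the usual Gaussian-prediction assumption; the candidate values below are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    # EI(x) = E[max(y_best - Y(x), 0)] for minimization, where
    # Y(x) ~ N(mu, sigma^2) is the kriging prediction at x.
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# rank candidate points: low predicted mean or high uncertainty wins
mu = np.array([0.2, 0.5, 1.5])       # kriging means at three candidates
sigma = np.array([0.1, 0.8, 0.1])    # kriging standard deviations
ei = expected_improvement(mu, sigma, y_best=0.6)
next_point = int(np.argmax(ei))      # candidate to evaluate on the full model
```

The selected point is then evaluated with the expensive model and the metamodel is re-trained, which is exactly the loop whose cost the adaptive-fidelity approach above reduces.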
Djerboua, Abdelatif. "Prédétermination des pluies et crues extrêmes dans les Alpes franco-italiennes : prévision quantitative des pluies journalières par la méthode des analogues." Grenoble INPG, 2001. http://www.theses.fr/2001INPG0030.
Emery, Xavier. "Simulation conditionnelle de modèles isofactoriels." Phd thesis, École Nationale Supérieure des Mines de Paris, 2004. http://pastel.archives-ouvertes.fr/pastel-00001185.
Benassi, Romain. "Nouvel algorithme d'optimisation bayésien utilisant une approche Monte-Carlo séquentielle." Phd thesis, Supélec, 2013. http://tel.archives-ouvertes.fr/tel-00864700.
Dalla'Rosa, Alexandre. "Modélisation et optimisation de la localisation d'émetteurs dans des systèmes de communication sans fil." Paris 11, 2007. http://www.theses.fr/2007PA112299.
In recent years, portability has been the key to the extraordinary expansion of personal communication systems (PCS) in urban environments. People located anywhere, at any time, can access private and collective services and exchange information safely and quickly, without any fixed wireline connection (e.g. Bluetooth, WiFi, etc.). One of the most important considerations in the successful implementation of PCS is indoor radio communication. The indoor radio propagation channel has been investigated for more than 10 years, and several tools have been deployed based on two approaches to the wave propagation problem: empirical and deterministic. In short, the two techniques differ mainly in their accuracy and in the time required to build and solve a wave propagation problem. Up to now, most indoor PCS architectures have relied on empirical approaches because of their simplicity and low computational cost. However, the evolution of wireless systems and the associated disturbances in urban environments demand more efficient tools to guarantee an efficient expansion of PCS. In this work, a technique is presented to investigate the indoor radio propagation channel and search for the optimal transmitter location. The TLM method and the Kriging technique are combined to approximate an objective function, in order to solve at low cost an optimization problem corresponding to an electromagnetic indoor environment prediction. The technique is applied to search for the optimal transmitter location in a realistic problem.
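The surrogate-assisted search described here — evaluate the field at a few expensive points, krige, then optimize the cheap prediction — can be sketched as follows. The Gaussian "coverage" field (standing in for TLM evaluations), the kernel width, and the design size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical smooth "coverage" field over a unit-square room,
# standing in for expensive electromagnetic (TLM-like) evaluations
field = lambda P: np.exp(-3.0 * ((P - 0.7) ** 2).sum(-1))

Xt = rng.uniform(0, 1, (25, 2))          # sparse expensive evaluations
yt = field(Xt)

def kern(A, B):
    # Gaussian kernel between two sets of 2-D positions
    return np.exp(-8.0 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

mu = yt.mean()                            # constant-trend kriging
alpha = np.linalg.solve(kern(Xt, Xt) + 1e-8 * np.eye(25), yt - mu)

# optimize the cheap kriging prediction on a fine grid of candidate spots
xs = np.linspace(0, 1, 50)
grid = np.stack(np.meshgrid(xs, xs), -1).reshape(-1, 2)
best = grid[np.argmax(mu + kern(grid, Xt) @ alpha)]   # surrogate-optimal spot
```

Only the 25 training evaluations touch the expensive solver; the 2500-point grid search runs entirely on the surrogate.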
Ryazanova, Oleksiv Marta. "Approche statistique pour l'optimisation des stratégies d'analyses techniques basées sur des seuils." Phd thesis, École Nationale Supérieure des Mines de Paris, 2008. http://pastel.archives-ouvertes.fr/pastel-00005145.
Bompard, Manuel. "MODÈLES DE SUBSTITUTION POUR L'OPTIMISATION GLOBALE DE FORME EN AÉRODYNAMIQUE ET MÉTHODE LOCALE SANS PARAMÉTRISATION." Phd thesis, Université Nice Sophia Antipolis, 2011. http://tel.archives-ouvertes.fr/tel-00771799.
Ammar, Karim. "Conception multi-physique et multi-objectif des cœurs de RNR-Na hétérogènes : développement d’une méthode d’optimisation sous incertitudes." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112390/document.
Since the shutdown of Phénix in 2010, the CEA has had no Sodium Fast Reactor (SFR) in operating condition. In response to global energy challenges and the capabilities of fast reactors, the CEA launched a programme for an industrial demonstrator called ASTRID (Advanced Sodium Technological Reactor for Industrial Demonstration), a reactor with an electric power capacity of 600 MW. The objectives of the prototype are, first, to respond to environmental constraints and, second, to demonstrate the industrial viability of:
• the SFR reactor, with a safety level at least equal to that of 3rd-generation reactors (the ASTRID design integrates Fukushima feedback);
• waste reprocessing (with minor actinide transmutation) and its associated industry.
Installation safety is the priority: in all cases, no radionuclide should be released into the environment. To achieve this objective, it is imperative to predict the impact of uncertainty sources on reactor behaviour. In this context, this thesis aims to develop new optimization methods for SFR cores, in order to improve the robustness and reliability of reactors in the face of existing uncertainties. We use the ASTRID core as a reference to assess the interest of the new methods and tools developed. The impact of multi-physics uncertainties on the calculation of core performance, together with the use of optimization methods, raises new questions:
• How can "complex" cores (i.e. design spaces of high dimension, with more than 20 variable parameters) be optimized while taking uncertainties into account?
• How do uncertainties behave for an optimized core compared with the reference core?
• Once uncertainties are taken into account, are optimized cores still competitive, i.e. are the optimization gains larger than the uncertainty margins?
The thesis helps to develop and implement the methods necessary to take uncertainties into account in the new generation of simulation tools.
Statistical methods to ensure the consistency of complex multi-physics simulation results are also detailed. By providing first images of innovative SFR cores, this thesis presents methods and tools to reduce the uncertainties on some performances while optimizing them. These gains are achieved through multi-objective optimization algorithms, which provide all possible compromises between the different optimization criteria, such as the balance between economic performance and safety.
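The "all possible compromises" mentioned above are the Pareto-optimal solutions of a multi-objective problem. A minimal non-dominated filter, on made-up objective values (not reactor data), looks like this:

```python
import numpy as np

def pareto_front(F):
    # indices of non-dominated rows of F (all objectives minimized):
    # row i is dropped if some row is <= it everywhere and < somewhere
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominators.any():
            keep[i] = False
    return np.flatnonzero(keep)

# hypothetical (economic-cost, safety-penalty) values for four candidate cores
F = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [2.0, 2.0],
              [3.0, 3.0]])
front = pareto_front(F)   # rows 0 and 1 trade off; rows 2 and 3 are dominated
```

Multi-objective algorithms such as those used in the thesis return exactly this kind of front, leaving the designer to pick the final compromise.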
Mangapi, Augustin Assonga. "Krigeage et cokrigeage, méthodes d’interpolation spatiale pour les systèmes d’information géographique." Mémoire, Université de Sherbrooke, 1994. http://hdl.handle.net/11143/7880.
Bendaou, Omar. "Caractérisation thermomécanique, modélisation et optimisation fiabiliste des packages électroniques." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMIR20/document.
During operation, electronic packages are exposed to various thermal and mechanical loads. Combined, these loads are the source of most electronic package failures. To ensure the robustness of electronic packages, manufacturers perform reliability testing and failure analysis prior to any commercialization. However, experimental tests during the design phase and prototype development are known to be constraining in terms of time and material resources. This research aims to develop four 3D finite element models, validated and calibrated by experimental tests and integrating JEDEC recommendations, in order to: - perform thermal and thermomechanical characterization of electronic packages; - predict the thermal fatigue life of solder joints in place of the standardized experimental characterization. However, the finite element model has some disadvantages related to uncertainties in the geometry, material properties, boundary conditions, or loads. These uncertainties influence the thermal and thermomechanical behavior of electronic systems, hence the need to formulate the problem in probabilistic terms, in order to conduct a reliability study and a reliability-based design optimization of electronic packages. To remedy the enormous computation time incurred by classical reliability analysis methods, we developed methodologies specific to this problem, using approximation methods based on advanced kriging, which allowed us to build a substitution model combining efficiency and precision. Reliability analysis can therefore be performed accurately and very quickly with Monte Carlo simulation (MCS) and FORM/SORM methods coupled with the advanced kriging model. Reliability analysis was incorporated into the optimization process to improve the performance and reliability of electronic package structural designs. Finally, we applied the reliability analysis methodologies to the four finite element models developed.
As a result, reliability analysis proved very useful in predicting the effects of uncertainties related to material properties. Similarly, the reliability-based optimization enabled us to improve the performance and reliability of electronic package structural designs.
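The surrogate-plus-MCS workflow described above (replace the expensive model by a kriging substitution model, then run Monte Carlo on it) can be sketched on a toy problem. The linear limit state, the kernel, and the uniform input model are illustrative assumptions, not the thesis's package models.

```python
import numpy as np

rng = np.random.default_rng(1)

def limit_state(x):
    # hypothetical limit state: the package "fails" when g(x) < 0
    return 3.0 - x[:, 0] - x[:, 1]

# small design of experiments for the kriging substitution model
Xt = rng.uniform(-3.0, 3.0, (30, 2))
yt = limit_state(Xt)

def kern(A, B, theta=0.1):
    return np.exp(-theta * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

mu = yt.mean()                               # constant-trend kriging
alpha = np.linalg.solve(kern(Xt, Xt) + 1e-6 * np.eye(30), yt - mu)
surrogate = lambda x: mu + kern(x, Xt) @ alpha

# Monte Carlo on the cheap surrogate instead of the expensive FE model
Xmc = rng.uniform(-3.0, 3.0, (50_000, 2))
pf_hat = float(np.mean(surrogate(Xmc) < 0.0))   # exact value here is 0.125
```

Only the 30 design points would require finite element runs; the 50 000 Monte Carlo samples cost essentially nothing, which is the point of the substitution model.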
Houret, Thomas. "Méthode d’estimation des valeurs extrêmes des distributions de contraintes induites et de seuils de susceptibilité dans le cadre des études de durcissement et de vulnérabilité aux menaces électromagnétiques intentionnelles." Thesis, Rennes, INSA, 2019. http://www.theses.fr/2019ISAR0011.
Intentional ElectroMagnetic Interference (IEMI) can cause equipment failure. The study of the effects of an IEMI begins with an assessment of the risk of equipment failure, in order to implement appropriate protections if required. Unfortunately, a deterministic prediction of failure is impossible because the characteristics of both the equipment and the aggression are highly uncertain. The proposed strategy consists of modelling the stress generated by the aggression, as well as the susceptibility of the equipment, as random variables. Three steps are then necessary. The first step is the estimation of the probability distribution of the random susceptibility variable; the second step is the analogous estimation for the stress; the third step is the calculation of the probability of failure. For the first step, we use statistical inference methods on a small sample of measured susceptibility thresholds. We compare two types of parametric inference, Bayesian and maximum likelihood, and conclude that a relevant approach for a risk analysis is to use the confidence or credibility intervals of the parameter estimates to bracket the probability of failure, regardless of the inference method chosen. For the second step, we explore extreme-value estimation techniques that reduce the number of simulations required. In particular, we propose the technique of Controlled Stratification by Kriging, and show that it drastically improves performance compared to the classic approach (Monte Carlo simulation); we also propose a particular implementation of this technique in order to control the computational effort. Finally, the third step is the simplest once the first two have been completed since, by definition, a failure occurs when the stress is greater than the susceptibility.
With the help of a final test case comprising the simulation of an electromagnetic aggression on a piece of equipment, we use the method developed in this work to estimate bounds on the probability of failure. More specifically, we show that the combined use of controlled stratification by kriging and inference of the susceptibility distribution allows the true value of the probability of failure to be bracketed.
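The variance-reduction idea behind controlled stratification — partition the sampling space and over-sample the strata that matter for the tail — can be shown on a 1-D toy case where the strata are defined analytically rather than by a kriging surrogate, as in the thesis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# estimate p = P(X > q) for X ~ N(0,1) by stratifying the uniform
# seed U (X = Phi^{-1}(U)) and over-sampling the tail stratum
q = 2.5
p_true = 1.0 - norm.cdf(q)                   # about 6.2e-3

n = 5000                                     # samples per stratum
u_bulk = rng.uniform(0.00, 0.99, n)          # stratum of weight 0.99
u_tail = rng.uniform(0.99, 1.00, n)          # stratum of weight 0.01
p_hat = (0.99 * np.mean(norm.ppf(u_bulk) > q)
         + 0.01 * np.mean(norm.ppf(u_tail) > q))

# a crude Monte Carlo estimate with the same 10 000 samples would have a
# standard error roughly ten times larger than this stratified estimate
```

In the controlled-stratification-by-kriging setting, the role of `norm.ppf` is played by the expensive simulator, and a kriging surrogate is used to decide which strata deserve the extra samples.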
Bernard, Lucie. "Méthodes probabilistes pour l'estimation de probabilités de défaillance." Electronic Thesis or Diss., Tours, 2019. http://www.theses.fr/2019TOUR4037.
To evaluate the profitability of a product before the launch of the manufacturing process, most industrial companies use numerical simulation. This makes it possible to test virtually several configurations of the parameters of a given product and to decide on its performance (i.e. whether it meets the imposed specifications). In order to measure the impact of industrial process fluctuations on product performance, we are particularly interested in estimating the probability of failure of the product. Since each simulation requires the execution of a complex and expensive calculation code, it is not possible to perform enough tests to estimate this probability using, for example, a Monte Carlo method. We therefore develop estimation methods that must remain robust despite the limited number of observations available through numerical simulation. Under the constraint of a limited number of code calls, we propose two very different estimation methods. The first is based on the principles of Bayesian estimation. Our observations are the results of numerical simulation. The probability of failure is seen as a random variable, whose construction is based on that of a random process modelling the expensive calculation code; the kriging method is used to define this model correctly. Conditional on the observations, the posterior distribution of the random variable modelling the probability of failure is inaccessible. To learn about this distribution, we construct approximations of its characteristics: expectation, variance, quantiles, and so on.
We use the theory of stochastic orders to compare random variables and, more specifically, the convex order. The construction of an optimal experimental design is ensured by a sequential experimental planning procedure based on the principle of SUR strategies. The second method is an iterative procedure, particularly suited to the case where the probability of failure is very small, i.e. the feared event is rare. The expensive calculation code is represented by a function assumed to be Lipschitz continuous. At each iteration, this hypothesis is used to construct lower and upper approximations of the probability of failure. We show that these approximations converge to the true value as the number of iterations increases. In practice, they are estimated using the Monte Carlo method known as the splitting method. The proposed methods are relatively simple to implement and their results are easy to interpret. We test them on various examples, as well as on a real case from STMicroelectronics.
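The splitting estimator mentioned at the end pushes a particle population through intermediate thresholds so that each stage handles a non-rare event. A minimal sketch (often called subset simulation); the conditional sampling uses the standard Gaussian autoregressive Metropolis move, and the toy limit state is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def subset_simulation(g, dim, n=3000, p0=0.1, q=4.0, rho=0.8):
    # multilevel splitting for p = P(g(X) >= q), X standard normal in R^dim
    X = rng.normal(size=(n, dim))
    Y = g(X)
    prob = 1.0
    for _ in range(20):
        t = np.quantile(Y, 1 - p0)            # next intermediate level
        if t >= q:                            # final level reached
            return prob * np.mean(Y >= q)
        prob *= p0
        keep = np.flatnonzero(Y >= t)         # survivors of this level
        idx = rng.choice(keep, size=n)        # resample with replacement
        X, Y = X[idx], Y[idx]
        for _ in range(5):                    # conditional MCMC moves:
            # proposal rho*X + sqrt(1-rho^2)*eps leaves N(0, I) invariant,
            # so accepting iff g >= t targets the conditional distribution
            Xp = rho * X + np.sqrt(1 - rho ** 2) * rng.normal(size=X.shape)
            Yp = g(Xp)
            acc = Yp >= t
            X[acc], Y[acc] = Xp[acc], Yp[acc]
    return prob * np.mean(Y >= q)

g = lambda X: X.sum(axis=1)                   # toy limit state
p_hat = subset_simulation(g, dim=2)           # true p = P(N(0,2) >= 4) ~ 2.3e-3
```

Each stage only estimates a probability around `p0`, so the product estimate reaches rare-event levels with a few thousand samples per stage instead of millions overall.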
Zahaby, Mohamed El. "Contribution à la définition d'une norme des sites pollués : élaboration d'une méthodologie pour l'évaluation de la contamination d'un sol par éléments tracés." Vandoeuvre-les-Nancy, INPL, 1998. http://www.theses.fr/1998INPL045N.
Huchet, Quentin. "Utilisation des méthodes de Krigeage pour le dimensionnement en fatigue des structures éoliennes posées en mer." Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC074/document.
The mechanical certification of wind turbine structures is required for the funding of new offshore projects on the French coasts. In order to ensure a maximal safety level of the installations, a series of structural analyses is required by the certification bodies. Among these, the damage-based computations represent an important numerical effort for EDF. The presented work focuses on the applicability and performance of Kriging metamodels for estimating the lifetime cumulated damage of offshore wind turbine structures (AK-DA approach) and the damage-based reliability assessment of new designs (AK-MCS/AK-DA coupling).
Denimal, Enora. "Prédiction des instabilités de frottement par méta-modélisation et approches fréquentielles : Application au crissement de frein automobile." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEC039/document.
Brake squeal is a noise nuisance that represents significant costs for the automotive industry. It originates from complex phenomena at the frictional interface between the brake pads and the disc. Stability analysis remains the preferred method in industry today to predict the stability of a brake system, despite its over- and under-predictive aspects. In order to build a robust brake system, it is necessary to find the technology that limits instabilities despite the uncertain parameters present in the system. Thus, one of the main objectives of this PhD thesis is to develop a method to treat and propagate the uncertainty and variability of certain parameters of the finite element brake model at reasonable numerical cost. First, the influence of a first group of parameters, corresponding to the contacts within the system, was studied in order to better understand the physical phenomena involved and their impact on squeal. An approach based on a genetic algorithm was also implemented to identify the set of parameters most unfavourable in terms of squeal propensity. Second, different meta-modelling methods were proposed to predict the stability of the brake system with respect to parameters that may be design parameters or uncertain parameters related to the system's environment. Third, a non-linear analysis method complementary to the stability analysis was proposed and developed. It is based on tracking the stability of an approximate vibrational solution and allows the identification of the unstable modes present in the dynamic response of the system. This method was applied to a simple academic model before demonstrating its feasibility on the complete industrial finite element brake model under study.
Zhu, Boyao. "Identification and metamodeling characterization of singularities in composite, highly dispersive media." Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2024. http://www.theses.fr/2024ECDL0006.
Structural health monitoring (SHM) plays a crucial role in many industrial fields in ensuring the safety, reliability, and performance of critical structures. The development of various types of sensors, data analysis, and wireless communication systems enables the in-situ collection of data attesting to the real-time state of structures, helping SHM modules make more accurate and automated decisions. However, SHM modules require databases characterizing both safe and damaged structures. Simulations based on numerical modelling, such as finite element methods, are often used to construct these databases, but this approach is very time-consuming, especially when the finite element model is complex, which is increasingly the case as structures themselves grow more complex. This thesis addresses the problem of efficiently obtaining damage-sensitive features of complex composite structures. More specifically, it aims to define and develop efficient numerical tools for the SHM of complex composite structures. Model reduction and metamodeling approaches, based respectively on the Wave Finite Element (WFE) and Kriging methods, are proposed and investigated. The main objective of the present work is thus to assess the potential of combining WFE and Kriging metamodeling for predicting the structural and dynamic characteristics of complex composite structures, where efficiency is quantified by prediction accuracy and the cost involved.
Based on the predicted dynamic properties, damage-sensitive indicators (such as amplitudes, natural frequencies, and phase shifts) are defined and exploited to evaluate the health status of the considered structures. The accomplished studies show that the proposed strategy, namely the Kriging-based WFEM, achieves suitable prediction accuracy for the structural and dynamical properties at a smaller cost than WFEM-based calculations. Moreover, the proposed strategy preserves the sensitivity of the dynamic properties and associated indexes to the considered damages (cracks and delamination). The strategy proved even more efficient when the adaptive sampling scheme was used with Kriging.
Ghayyem, Fatemeh. "Positionnement optimal de capteurs pour l'estimation du signal." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT045.
Many signal processing problems can be cast in a generic setting where a source signal propagates through a given environment to sensors. Under this setting, we may be interested in estimating (i) the source signal, (ii) the environment, or (iii) the resulting field of signals in some regions of the environment. In all these cases, signals are recorded by multiple sensors located at different positions. Due to price, energy, or ergonomic constraints, the number of sensors is often limited, and it becomes crucial to place the few available sensors at the positions that contain the maximum information. This is the optimal sensor placement problem, which arises in a great number of applications; the way to tackle it depends on which of the three aspects mentioned above we want to address. In this thesis, we focus on estimating the source signal from a set of noisy measurements collected by a limited number of sensors. Our approach differs from classical Kriging-based optimal sensor placement approaches, since the latter focus on the best reconstruction of the spatially measured field. We first propose a criterion which maximizes the average signal-to-noise ratio (SNR) of the estimated signal; experimentally, this criterion outperforms Kriging-based methods. Since the SNR is uncertain in this context, we then propose, for robust signal extraction, a second placement criterion based on maximizing the probability that the SNR exceeds a given threshold. This criterion can be easily evaluated under a Gaussian process assumption for the signal, the noise, and the environment. Moreover, to reduce the computational complexity of jointly maximizing the criterion over all sensor positions, we propose a greedy algorithm in which the sensor positions are selected sequentially (i.e. one by one).
Experimental results show the superiority of the probabilistic criterion compared to the average SNR criterion. Finally, to improve on the sub-optimal greedy algorithm, we present an optimization approach that locates all the sensors at once. For this purpose, we add a constraint to the problem that controls the average distances between the sensors, and solve it with an alternating-optimization penalty method. Experimental results show the superiority of the proposed algorithm over the greedy one.
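A generic greedy placement loop of the kind discussed above can be sketched with a GP information score. Note this uses a log-determinant gain as the score, not the thesis's SNR-based criteria, and the grid of candidate positions is an assumption.

```python
import numpy as np

def greedy_placement(candidates, kernel, n_sensors, noise=1e-2):
    # pick sensors one at a time, each maximizing log det(K + noise*I)
    # over the chosen set (a classical GP information criterion)
    chosen = []
    for _ in range(n_sensors):
        best, best_gain = None, -np.inf
        for j in range(len(candidates)):
            if j in chosen:
                continue
            idx = chosen + [j]
            K = kernel(candidates[idx], candidates[idx]) + noise * np.eye(len(idx))
            gain = np.linalg.slogdet(K)[1]
            if gain > best_gain:
                best, best_gain = j, gain
        chosen.append(best)
    return chosen

kern = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
xs = np.linspace(0, 1, 5)
grid = np.stack(np.meshgrid(xs, xs), -1).reshape(-1, 2)   # candidate positions
sel = greedy_placement(grid, kern, 3)
```

Because the log-determinant gain penalizes correlated (i.e. nearby) sensors, the greedy loop naturally spreads the selected positions apart, mirroring the sequential selection used in the thesis.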
Ould, Isselmou Yahya. "Interpolation de niveaux d'exposition aux émissions radioélectriques in situ à l'aide de méthodes géostatistiques." Phd thesis, École Nationale Supérieure des Mines de Paris, 2007. http://pastel.archives-ouvertes.fr/pastel-00003423.
Buslig, Leticia. "Méthodes stochastiques de modélisation de données : application à la reconstruction de données non régulières." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4734/document.
Bouguila, Maissa. "Μοdélisatiοn numérique et οptimisatiοn des matériaux à changement de phase : applicatiοns aux systèmes cοmplexes." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR05.
Phase-change materials exhibit considerable potential in the field of thermal management, offering significant thermal storage capacity. Excessive heat dissipated by miniature electronic components can lead to serious failures, and a cooling system based on phase-change materials is among the most recommended solutions to guarantee the reliable performance of microelectronic components. However, the low conductivity of these materials is a major limitation to their use in thermal management applications, and the primary objective of this thesis is to address the challenge of improving it. Numerical modeling is conducted in the first chapters to determine the optimal configuration of a heat sink, based on the study of several parameters such as fin insertion, nanoparticle dispersion, and the use of multiple phase-change materials. The innovation of this parametric study lies in modeling heat transfer in phase-change materials with relatively high nanoparticle concentrations, compared with the low concentrations found in recent experimental research. Significant conclusions are deduced from this parametric study, enabling us to propose a new model based on multiple phase-change materials enhanced with nanoparticles (NANOMCP). Reliability-based optimization studies are then conducted. Initially, a mono-objective reliability optimization study is carried out to propose a reliable and optimal model based on multiple NANOMCPs. The Robust Hybrid Method (RHM) yields a reliable and optimal model, compared with the Deterministic Design Optimization (DDO) method and various Reliability-Based Design Optimization (RBDO) methods. Furthermore, the integration of the developed RBDO method (RHM) into a thermal management application is an innovation with respect to the recent literature.
Additionally, a reliable multi-objective optimization study is proposed, considering two objectives: the total volume of the heat sink and the discharge time needed to reach ambient temperature. The RHM optimization method and the non-dominated sorting genetic algorithm (C-NSGA-II) are adopted to search for the optimal and reliable model offering the best trade-off between the two objectives. In addition, an advanced metamodel is developed to reduce simulation time, given the large number of iterations involved in finding the optimal model.
Bachoc, François. "Estimation paramétrique de la fonction de covariance dans le modèle de Krigeage par processus Gaussiens : application à la quantification des incertitudes en simulation numérique." Phd thesis, Paris 7, 2013. http://www.theses.fr/2013PA077111.
The parametric estimation of the covariance function of a Gaussian process is studied, in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error makes it possible to considerably improve its predictions. For a metamodeling problem of the GERMINAL thermal-mechanical code, the interest of the Kriging model with Gaussian processes, compared to neural network methods, is shown.
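The two estimation principles compared in this abstract can be illustrated with a minimal one-dimensional sketch (the data, the Gaussian covariance choice and the bounds are illustrative, not taken from the thesis): the length-scale of a Gaussian covariance is estimated once by Maximum Likelihood and once by leave-one-out Cross Validation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy 1D data set (illustrative only)
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 1.0, 12))
y = np.sin(6.0 * X)

def gaussian_cov(ell):
    # Gaussian covariance matrix with a small nugget for conditioning
    d = X[:, None] - X[None, :]
    return np.exp(-0.5 * (d / ell) ** 2) + 1e-10 * np.eye(len(X))

def neg_log_likelihood(ell):
    K = gaussian_cov(ell)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

def loo_cv_error(ell):
    # Closed-form leave-one-out residuals: r_i = (K^{-1} y)_i / (K^{-1})_{ii}
    Kinv = np.linalg.inv(gaussian_cov(ell))
    r = (Kinv @ y) / np.diag(Kinv)
    return np.sum(r ** 2)

ell_ml = minimize_scalar(neg_log_likelihood, bounds=(1e-2, 2.0), method="bounded").x
ell_cv = minimize_scalar(loo_cv_error, bounds=(1e-2, 2.0), method="bounded").x
```

Under correct specification the two estimates are typically close; the thesis studies precisely when and why they diverge, and which estimator is more robust under misspecification.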
Bachoc, François. "Estimation paramétrique de la fonction de covariance dans le modèle de Krigeage par processus Gaussiens : application à la quantification des incertitudes en simulation numérique." Phd thesis, Université Paris-Diderot - Paris VII, 2013. http://tel.archives-ouvertes.fr/tel-00881002.
Marzat, Julien. "Diagnostic des systèmes aéronautiques et réglage automatique pour la comparaison de méthodes." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00647333.
Oger, Julie. "Méthodes probabilistes pour l'évaluation de risques en production industrielle." Phd thesis, Université François Rabelais - Tours, 2014. http://tel.archives-ouvertes.fr/tel-00982740.
Vazquez, Emmanuel. "Modélisation comportementale de systèmes non-linéaires multivariables par méthodes à noyaux et applications." Phd thesis, Université Paris Sud - Paris XI, 2005. http://tel.archives-ouvertes.fr/tel-00010199.
Lelièvre, Nicolas. "Développement des méthodes AK pour l'analyse de fiabilité. Focus sur les évènements rares et la grande dimension." Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC045/document.
Engineers increasingly use numerical models to replace experimentation during the design of new products. With the increase in computational power, these models become more and more complex and time-consuming in order to better represent reality. In practice, optimization is very challenging when considering real mechanical problems, since these exhibit uncertainties. Reliability is an interesting metric of the failure risk of designed products due to uncertainties. The estimation of this metric, the failure probability, requires a large number of evaluations of the time-consuming model and thus becomes intractable in practice. To deal with this problem, surrogate modeling is used here, and more specifically AK-based methods, to approximate the physical model with far fewer time-consuming evaluations. The first objective of this thesis work is to discuss the mathematical formulation of design problems under uncertainties; this formulation has a considerable impact on the solution identified by optimization during the design process of new products. A definition of the two concepts of reliability and robustness is also proposed. These works are presented in a publication in the international journal Structural and Multidisciplinary Optimization (Lelièvre et al., 2016). The second objective of this thesis is to propose a new AK-based method to estimate failure probabilities associated with rare events. This new method, named AK-MCSi, presents three enhancements of AK-MCS: (i) sequential Monte Carlo simulations to reduce the time associated with the evaluation of the surrogate model, (ii) a new, stricter stopping criterion on learning evaluations to ensure the correct classification of the Monte Carlo population, and (iii) a multipoint enrichment permitting the parallelization of the evaluations of the time-consuming model. This work has been published in Structural Safety (Lelièvre et al., 2018). 
The last objective of this thesis is to propose new AK-based methods to estimate the failure probability of high-dimensional reliability problems, i.e. problems defined by both a time-consuming model and a large number of input random variables. Two new methods, AK-HDMR1 and AK-PCA, are proposed to deal with this problem, based respectively on a functional decomposition and on a dimension reduction technique. AK-HDMR1 was submitted to Reliability Engineering & System Safety on 1 October 2018.
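For readers unfamiliar with the AK-MCS family discussed above, the active-learning idea can be sketched in a deliberately minimal form (the limit state `g`, the fixed kernel, the population size and the thresholds are all illustrative choices, not those of the thesis): a kriging surrogate classifies a Monte Carlo population, and the U learning function selects the next point to evaluate on the expensive model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative limit state: failure when g(x) < 0 (not from the thesis)
def g(x):
    return 3.0 - x[:, 0] ** 2 - x[:, 1]

def krige(Xt, yt, Xp, ell=1.0):
    # Plain kriging mean and standard deviation with a Gaussian kernel
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell ** 2)
    Kinv = np.linalg.inv(k(Xt, Xt) + 1e-8 * np.eye(len(Xt)))
    Ks = k(Xp, Xt)
    mu = Ks @ Kinv @ yt
    var = np.clip(1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks), 1e-12, None)
    return mu, np.sqrt(var)

pop = rng.normal(size=(5000, 2))            # Monte Carlo population
idx = rng.choice(len(pop), 10, replace=False)
Xt, yt = pop[idx], g(pop[idx])              # small initial design

for _ in range(30):                         # enrichment loop
    mu, sd = krige(Xt, yt, pop)
    U = np.abs(mu) / sd                     # U learning function
    if U.min() >= 2.0:                      # usual AK-MCS stopping criterion
        break
    xs = pop[np.argmin(U)]                  # most ambiguous classification
    Xt, yt = np.vstack([Xt, xs]), np.append(yt, g(xs[None, :]))

pf = np.mean(mu < 0)                        # estimated failure probability
```

The surrogate is only evaluated on the cheap side of the loop; the expensive model `g` is called once per enrichment, which is the whole point of AK-based methods.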
Barbillon, Pierre. "Méthodes d'interpolation à noyaux pour l'approximation de fonctions type boîte noire coûteuses." Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00559502.
Felder, Jean. "Développement de méthodes de traitement d'images pour la détermination de paramètres variographiques locaux." Phd thesis, Paris, ENMP, 2011. http://www.theses.fr/2011ENMP0076.
Geostatistics provides many tools to characterize and process data spread in space. Most of these tools are based on the analysis and modeling of a function called the variogram. By characterizing the spatial correlation inherent to any data set, the variogram makes it possible to build different spatial operators, such as estimation (kriging) and simulation. Variographic models are relatively intuitive: some variographic parameters can be directly interpreted as structural characteristics. These approaches are however limited, since they cannot properly take into account the local structure of the data. Several types of non-stationary geostatistical models exist, but they are difficult to use in practice because they require a complicated, not very intuitive setting; besides, they cannot take some types of non-stationarity into account. In order to answer the need for an effective and efficient handling of the non-stationarity of a data set, we have chosen, in the context of this PhD thesis, to compute local variographic parameters, called Moving Parameters (M-Parameters), by using image processing methods. Our approach relies mainly on the determination of morphological parameters of size and dimension. The determination of M-Parameters yields a better match between variographic models and the structural characteristics of the data. These methods for computing M-Parameters have been applied to bathymetry data, to data revealing complex geological bodies, and to environmental data sets, such as air pollution in urban areas. These examples illustrate the improvements M-Parameters bring to the results of the geostatistical workflow. Finally, based on the observation that some phenomena do not follow a Euclidean metric (such as air pollution in urban areas), we have studied the influence of the choice of the distance metric on kriging results. 
Using geodesic distances, we have been able to obtain kriging results that are impossible to reproduce with a Euclidean distance.
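The role the metric plays in the kriging system can be made concrete with a small sketch in which the distance is an explicit input (the exponential covariance and the data are illustrative; note that, as this line of work highlights, plugging a geodesic distance into a covariance model is not guaranteed to preserve positive definiteness):

```python
import numpy as np

def simple_kriging(D_train, D_cross, y, ell=1.0, nugget=1e-8):
    """Zero-mean kriging predictor taking distance matrices as input,
    so that any metric (Euclidean, geodesic, ...) can be plugged in.
    Caveat: with a non-Euclidean metric the exponential covariance is
    not guaranteed to remain positive definite; the nugget is only a
    pragmatic safeguard, not a fix."""
    C = np.exp(-D_train / ell) + nugget * np.eye(len(y))
    c = np.exp(-D_cross / ell)
    return c @ np.linalg.solve(C, y)

# Euclidean example on a line (illustrative data)
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0, -1.0])
D_train = np.abs(X[:, None] - X[None, :])
Xp = np.array([1.5])
D_cross = np.abs(Xp[:, None] - X[None, :])
pred = simple_kriging(D_train, D_cross, y)
```

Swapping `D_train`/`D_cross` for a matrix of geodesic distances (e.g. shortest paths around obstacles in an urban street network) changes the prediction without touching the solver.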
Felder, Jean. "Développement de méthodes de traitement d'images pour la détermination de paramètres variographiques locaux." Phd thesis, École Nationale Supérieure des Mines de Paris, 2011. http://pastel.archives-ouvertes.fr/pastel-00681301.
Durrande, Nicolas. "Étude de classes de noyaux adaptées à la simplification et à l'interprétation des modèles d'approximation. Une approche fonctionnelle et probabiliste." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2011. http://tel.archives-ouvertes.fr/tel-00770625.
Durrande, Nicolas. "Étude de classes de noyaux adaptées à la simplification et à l'interprétation des modèles d'approximation. Une approche fonctionnelle et probabiliste." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2011. http://tel.archives-ouvertes.fr/tel-00844747.
Labroquère, Jérémie. "Optimisation de dispositifs de contrôle actif pour des écoulements turbulents décollés." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4096/document.
Active flow control strategies, such as oscillatory blowing/suction, have proved their efficiency in modifying flow characteristics for various purposes (e.g. skin friction reduction, separation delay) in rather simple configurations. To extend this approach to industrial cases, the simulation of a large number of devices at real scale and the optimization of their parameters are required. The objective of this thesis is to set up an optimization procedure to solve this category of problems. In this perspective, the thesis is organized in three main parts. First, the development and validation of an unsteady compressible turbulent flow solver based on the Reynolds-Averaged Navier-Stokes (RANS) equations in a Mixed finite-Element/finite-Volume (MEV) framework is described. Particular attention is paid to the implementation of the synthetic jet numerical model, by comparing different models in the context of a simulation over a flat plate. The second part of the thesis describes and validates the implementation of a global optimization method based on a Gaussian Process surrogate model, including an approach to account for some numerical errors during the optimization. This EGO (Efficient Global Optimization) method is validated on noisy 1D and 2D analytical test cases. Finally, the optimization of two industrially relevant test cases using a synthetic jet actuator is considered: a turbulent flow over a NACA0015 airfoil, for which the time-averaged lift is regarded as the control criterion to be maximized, and an incompressible turbulent flow over a backward-facing step, for which the time-averaged recirculation length is minimized.
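The EGO loop validated in this thesis follows a standard pattern that can be sketched on a toy one-dimensional problem (the Gaussian-kernel predictor, the objective and every parameter below are illustrative, not the thesis's solver or criteria): fit a Gaussian process, maximize Expected Improvement, evaluate the true objective, and repeat.

```python
import numpy as np
from scipy.stats import norm

def gp_predict(Xt, yt, Xp, ell=0.3):
    # Minimal Gaussian-process mean/std predictor with a Gaussian kernel
    def k(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)
    Kinv = np.linalg.inv(k(Xt, Xt) + 1e-8 * np.eye(len(Xt)))
    Ks = k(Xp, Xt)
    mu = Ks @ Kinv @ yt
    sd = np.sqrt(np.clip(1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks), 1e-12, None))
    return mu, sd

def expected_improvement(mu, sd, y_best):
    # EI for minimization: expected reduction below the current best value
    z = (y_best - mu) / sd
    return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

f = lambda x: (x - 0.65) ** 2          # toy objective to minimise
Xt = np.array([0.0, 0.5, 1.0])
yt = f(Xt)
grid = np.linspace(0.0, 1.0, 201)

for _ in range(10):                    # EGO loop: fit, maximise EI, evaluate
    mu, sd = gp_predict(Xt, yt, grid)
    x_new = grid[np.argmax(expected_improvement(mu, sd, yt.min()))]
    Xt = np.append(Xt, x_new)
    yt = np.append(yt, f(x_new))

best = Xt[np.argmin(yt)]
```

In the thesis each evaluation of `f` is a full unsteady RANS simulation of the controlled flow, which is why an acquisition function that balances exploration and exploitation matters.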
Ioannidou, Despoina. "Characterization of environmental inequalities due to Polyaromatic Hydrocarbons in France : developing environmental data processing methods to spatialize exposure indicators for PAH substances." Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1176/document.
Reducing environmental exposure inequalities has become a major focus of public health efforts in France, as evidenced by the French action plans for health and the environment. The aim of this thesis is to develop an integrated approach to characterize environmental inequalities and evaluate spatialized exposure to PAHs in France. The data produced by the quality-monitoring networks for environmental media reflect the actual contamination of the environment and the overall exposure of populations. However, they do not always provide an adequate spatial resolution to characterize environmental exposures, as they are usually not assembled for this specific purpose. Statistical methods are employed to process the input databases (environmental concentrations in water, air and soil) with the objective of characterizing exposure. A multimedia model interfaced with a GIS allows the integration of environmental variables in order to yield exposure doses related to the ingestion of food, water and soil, as well as the inhalation of atmospheric contaminants. The methodology was applied to three Polycyclic Aromatic Hydrocarbon substances (benzo[a]pyrene, benzo[ghi]perylene and indeno[1,2,3-cd]pyrene) in France. The results obtained made it possible to map exposure indicators, to identify areas of overexposure and to characterize environmental determinants. In the context of exposure characterization, the direct spatialization of available data from environmental measurement datasets raises a number of methodological questions, which lead to uncertainties related to the sampling and to the spatial and temporal representativeness of the data. 
These uncertainties could be reduced by acquiring additional data or by constructing predictive variables for the spatial and temporal phenomena considered. The data processing algorithms and exposure calculations carried out in this work will be integrated into the French coordinated integrated environment and health platform (PLAINE), in order to be applied to other pollutants and to prioritize preventive actions.
Ioannidou, Despoina. "Characterization of environmental inequalities due to Polyaromatic Hydrocarbons in France : developing environmental data processing methods to spatialize exposure indicators for PAH substances." Electronic Thesis or Diss., Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1176.
Reducing environmental exposure inequalities has become a major focus of public health efforts in France, as evidenced by the French action plans for health and the environment. The aim of this thesis is to develop an integrated approach to characterize environmental inequalities and evaluate spatialized exposure to PAHs in France. The data produced by the quality-monitoring networks for environmental media reflect the actual contamination of the environment and the overall exposure of populations. However, they do not always provide an adequate spatial resolution to characterize environmental exposures, as they are usually not assembled for this specific purpose. Statistical methods are employed to process the input databases (environmental concentrations in water, air and soil) with the objective of characterizing exposure. A multimedia model interfaced with a GIS allows the integration of environmental variables in order to yield exposure doses related to the ingestion of food, water and soil, as well as the inhalation of atmospheric contaminants. The methodology was applied to three Polycyclic Aromatic Hydrocarbon substances (benzo[a]pyrene, benzo[ghi]perylene and indeno[1,2,3-cd]pyrene) in France. The results obtained made it possible to map exposure indicators, to identify areas of overexposure and to characterize environmental determinants. In the context of exposure characterization, the direct spatialization of available data from environmental measurement datasets raises a number of methodological questions, which lead to uncertainties related to the sampling and to the spatial and temporal representativeness of the data. 
These uncertainties could be reduced by acquiring additional data or by constructing predictive variables for the spatial and temporal phenomena considered. The data processing algorithms and exposure calculations carried out in this work will be integrated into the French coordinated integrated environment and health platform (PLAINE), in order to be applied to other pollutants and to prioritize preventive actions.
Zhang, Zebin. "Intégration des méthodes de sensibilité d'ordre élevé dans un processus de conception optimale des turbomachines : développement de méta-modèles." Thesis, Ecully, Ecole centrale de Lyon, 2014. http://www.theses.fr/2014ECDL0047/document.
Turbomachinery optimal design usually relies on iterative methods with either experimental or numerical evaluations, which can lead to high costs due to numerous manipulations and intensive CPU usage. In order to limit the cost and shorten the development time, the present thesis proposes to integrate a parameterization method and a meta-modeling method into the optimal design cycle of an axial low-speed turbomachine. The parameterization, realized through a high-order sensitivity study of the Navier-Stokes equations, allows the construction of a parameterized database that contains not only the evaluation results, but also the simple and cross derivatives of the objectives as functions of the parameters. The enriched information brought by the derivatives is used during the meta-model construction, particularly by the Co-Kriging method employed to couple several databases. Compared to classical derivative-free methods, the economic benefit of the proposed method lies in the use of fewer reference points. When the number of reference points is small, a unique point may be present along one or several dimensions, which requires a hypothesis on the error distribution. For those dimensions, Co-Kriging works like a Taylor extrapolation from the reference point, making the most of its derivatives. This approach has been tested on the construction of a meta-model for a fan with a conic hub. The methodology relies on the coupling of databases based on two fan geometries and two operating points. The precision of the meta-model allows an optimization to be performed with the help of NSGA-II; one of the selected optima reaches the maximum efficiency, while another covers a large operating range. The optimization results are finally validated by further numerical simulations.
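One simplified stand-in for the derivative-enhanced modeling described above is the "indirect" approach, in which each gradient supplies synthetic neighbouring samples through a first-order Taylor expansion; the thesis's Co-Kriging coupling is considerably more sophisticated, so treat this only as an intuition-building sketch (all data and the helper name are illustrative):

```python
import numpy as np

# "Indirect" use of derivatives: each gradient supplies synthetic
# neighbouring samples through a first-order Taylor expansion. This is a
# simplified stand-in for the Co-Kriging coupling described above.
def augment_with_gradients(X, y, dy, h=1e-2):
    n, d = X.shape
    X_aug, y_aug = [X], [y]
    for j in range(d):
        step = np.zeros(d)
        step[j] = h
        X_aug.append(X + step)                # shifted copies of the design
        y_aug.append(y + h * dy[:, j])        # Taylor-predicted responses
    return np.vstack(X_aug), np.concatenate(y_aug)

# Toy 1D database with an analytic gradient (illustrative only)
X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X[:, 0])
dy = np.cos(X)                                # gradient, shape (n, d)
Xa, ya = augment_with_gradients(X, y, dy)
# Any standard kriging model fit on (Xa, ya) now sees the slope
# information without a dedicated co-kriging solver.
```

The economic argument is the same as in the abstract: slope information lets the surrogate reach a given accuracy from fewer expensive reference evaluations.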
Durrande, Nicolas. "Étude de classes de noyaux adaptées à la simplification et à l’interprétation des modèles d’approximation. Une approche fonctionnelle et probabiliste." Thesis, Saint-Etienne, EMSE, 2011. http://www.theses.fr/2011EMSE0631/document.
The framework of this thesis is the approximation of functions whose value is known at a limited number of points. More precisely, we consider here the so-called kriging models from two points of view: approximation in reproducing kernel Hilbert spaces and Gaussian process regression. When the function to approximate depends on many variables, the required number of points can become very large, and the interpretation of the obtained models remains difficult because the model is still a high-dimensional function. In light of those remarks, the main part of our work addresses the issue of simplified models by studying a key concept of kriging models, the kernel. More precisely, the following aspects are addressed: additive kernels for additive models, and kernel decomposition for sparse modeling. Finally, we propose a class of kernels that is well suited for functional ANOVA representation and global sensitivity analysis.
Frémondière, Pierre. "L'évolution de l'accouchement dans la lignée humaine. Estimation de la contrainte fœto-pelvienne par deux méthodes complémentaires : la simulation numérique de l'accouchement et l'analyse discriminante des modalités d'accouchement au sein d'un échantillon obstétrical." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM5013.
The purpose of this thesis is to estimate delivery outcomes for extinct hominids. We therefore use two complementary methods: numerical simulation of childbirth and discriminant analysis of delivery outcomes from an obstetrical sample. First, we use kriging to construct meshes of pelves and neonatal skulls. The fossil hominid specimens included in the study are Australopithecines, early Homo (EH) and middle-to-early Pleistocene Homo (MEPH). We estimate fetal cranial dimensions with chimpanzee or human cranial growth curves, which we reverse and apply to juvenile skull measurements. "Virtual" dyads are formed from pelves and neonatal skulls. Then, we simulate the childbirth of these "virtual" dyads. Different levels of laxity of the sacro-iliac junction and different positions of the fetal head are considered. Finally, we use an obstetrical sample: the delivery outcome is recorded, CT scans are used to obtain maternal pelvic measurements, and the diameters of the fetal head are also measured after delivery. A discriminant analysis is performed on this obstetrical sample to separate delivery outcomes on the basis of fetal-pelvic measurements. The fossil dyads were subsequently added to the discriminant analysis to assess the delivery outcomes to which they belong. The results suggest a small fetal-pelvic constraint for Australopithecines; this constraint is moderate for EH and more important for MEPH. We suggest that rotational birth appeared with EH, and that the curved trajectory of the fetal head appeared with MEPH. The emergence of rotational birth and of the curved trajectory of the fetal head is probably explained by two major increases in brain size during the late and middle Pleistocene.
Franchi, Gianni. "Machine learning spatial appliquée aux images multivariées et multimodales." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEM071/document.
This thesis focuses on multivariate spatial statistics and machine learning applied to hyperspectral and multimodal images in remote sensing and scanning electron microscopy (SEM). The following topics are considered. Fusion of images: SEM allows us to acquire images of a given sample using different modalities. The purpose of these studies is to analyze the interest of information fusion to improve multimodal SEM image acquisition. We have modeled and implemented various information-fusion techniques, based in particular on spatial regression theory; they have been assessed on various datasets. Spatial classification of multivariate image pixels: we have proposed a novel approach for pixel classification in multi/hyper-spectral images. The aim of this technique is to represent and efficiently describe the spatial/spectral features of multivariate images. These multi-scale deep descriptors aim at representing the content of the image while accounting for invariances related to the texture and to its geometric transformations. Spatial dimensionality reduction: we have developed a technique to extract a feature space using morphological principal component analysis. Indeed, in order to take the spatial and structural information into account, we used mathematical morphology operators.
Barbarroux, Loïc. "Contributions à la modélisation multi-échelles de la réponse immunitaire T-CD8 : construction, analyse, simulation et calibration de modèles." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC026/document.
Upon infection by an intracellular pathogen, the organism triggers a specific immune response, mainly driven by CD8 T cells. These cells are responsible for the eradication of this type of infection and for the constitution of the immune repertoire of the individual. The immune response is constituted by many processes acting over several interconnected physical scales (intracellular scale, single-cell scale, cell-population scale). This biological phenomenon is therefore a complex process, for which it is difficult to observe or measure the links between the different processes involved. We propose three multiscale mathematical models of the CD8 immune response, built with different formalisms but related by the same idea: to make the behavior of the CD8 T cells depend on their intracellular content. For each model we present, where possible, its construction process based on selected biological hypotheses, its mathematical study and its ability to reproduce the immune response in numerical simulations. The models we propose successfully reproduce the CD8 immune response qualitatively and quantitatively, and thus constitute useful tools to further investigate this biological phenomenon.
Gauthier, Bertrand. "Approche spectrale pour l'interpolation à noyaux et positivité conditionnelle." Phd thesis, École Nationale Supérieure des Mines de Saint-Étienne, 2011. http://tel.archives-ouvertes.fr/tel-00631252.
Gauthier, Bertrand. "Approche spectrale pour l’interpolation à noyaux et positivité conditionnelle." Thesis, Saint-Etienne, EMSE, 2011. http://www.theses.fr/2011EMSE0615/document.
We propose a spectral approach for the resolution of kernel-based interpolation problems whose numerical solution cannot be directly computed. Such a situation occurs in particular when the number of data points is infinite. We first consider optimal interpolation in Hilbert subspaces. For a given problem, an integral operator is defined from the underlying kernel and a parameterization of the data set based on a measurable space. The spectral decomposition of the operator is used in order to obtain a representation formula for the optimal interpolator, and spectral truncation allows its approximation. The choice of the measure on the parameter space introduces a hierarchy onto the data set which allows a tunable precision of the approximation. As an example, we show how this methodology can be used to enforce boundary conditions in kernel-based interpolation models. The Gaussian process conditioning problem is also studied in this context. The last part of this thesis is devoted to the notion of conditionally positive kernels. We propose a general definition of symmetric conditionally positive kernels relative to a given space and expose the associated theory of semi-Hilbert subspaces. We finally study the optimal interpolation problem in such spaces.
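The representation-and-truncation idea can be illustrated in the finite, discrete setting, where the Gram matrix plays the role of the integral operator (the data and the Gaussian kernel below are illustrative): keeping only the leading eigenpairs yields an approximate interpolator whose precision is tunable through the truncation order.

```python
import numpy as np

def truncated_interpolant(K, y, k):
    # Keep the k leading eigenpairs of the Gram matrix (discrete analogue
    # of truncating the spectral decomposition of the integral operator)
    w, V = np.linalg.eigh(K)                  # eigenvalues in ascending order
    w, V = w[::-1][:k], V[:, ::-1][:, :k]
    return V @ ((V.T @ y) / w)                # coefficients in the eigenbasis

X = np.linspace(0.0, 1.0, 30)
y = np.sin(2.0 * np.pi * X)
K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / 0.2) ** 2)

trunc = truncated_interpolant(K, y, k=10)
resid = np.linalg.norm(K @ trunc - y)         # data misfit of the truncation
```

Because a smooth target concentrates on the leading eigenvectors, ten modes already reproduce the data almost exactly; increasing `k` trades computation for precision, mirroring the tunable hierarchy described in the abstract.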
Chocat, Rudy. "Évaluation de la fiabilité en tolérance aux dommages pour les composants de moteurs spatiaux." Electronic Thesis or Diss., Compiègne, 2018. http://www.theses.fr/2018COMP2435.
To succeed in their mission, the design of space engines must prevent all failure modes by following dedicated design rules. Damage tolerance has to ensure the mechanical strength of a component while considering the potential presence of an undetected defect, which is, in a conservative way, defined as a crack. To avoid the addition of unknown margins, the uncertainties implied by the use of numerical models can be treated in the probabilistic framework. The goal of this work is to propose a methodology to assess the reliability (probability of failure) in damage tolerance for space engine components. The small number of flights, the low targeted probability of failure and the use of possibly time-consuming models, which provide mixed information, respectively quantitative or qualitative, for a safe or a failed component, limit the use of existing approaches. This work first presents an original method to identify significant variables when the gradient is unavailable in the failure region. Then, a reliability assessment methodology is proposed, coupling regression and classification to compute low probabilities while reducing the number of damage tolerance simulations. Finally, this contribution is applied to academic and damage tolerance test cases, leading up to a complex space engine case.
Fremondière, Pierre. "L'évolution de l'accouchement dans la lignée humaine. Estimation de la contrainte fœto-pelvienne par deux méthodes complémentaires : la simulation numérique de l'accouchement et l'analyse discriminante des modalités d'accouchement au sein d'un échantillon obstétrical." Thesis, 2015. http://www.theses.fr/2015AIXM5013/document.
The purpose of this thesis is to estimate delivery outcomes for extinct hominids. We therefore use two complementary methods: numerical simulation of childbirth and discriminant analysis of delivery outcomes from an obstetrical sample. First, we use kriging to construct meshes of pelves and neonatal skulls. The fossil hominid specimens included in the study are Australopithecines, early Homo (EH) and middle-to-early Pleistocene Homo (MEPH). We estimate fetal cranial dimensions with chimpanzee or human cranial growth curves, which we reverse and apply to juvenile skull measurements. "Virtual" dyads are formed from pelves and neonatal skulls. Then, we simulate the childbirth of these "virtual" dyads. Different levels of laxity of the sacro-iliac junction and different positions of the fetal head are considered. Finally, we use an obstetrical sample: the delivery outcome is recorded, CT scans are used to obtain maternal pelvic measurements, and the diameters of the fetal head are also measured after delivery. A discriminant analysis is performed on this obstetrical sample to separate delivery outcomes on the basis of fetal-pelvic measurements. The fossil dyads were subsequently added to the discriminant analysis to assess the delivery outcomes to which they belong. The results suggest a small fetal-pelvic constraint for Australopithecines; this constraint is moderate for EH and more important for MEPH. We suggest that rotational birth appeared with EH, and that the curved trajectory of the fetal head appeared with MEPH. The emergence of rotational birth and of the curved trajectory of the fetal head is probably explained by two major increases in brain size during the late and middle Pleistocene.