Dissertations / Theses on the topic 'Méthodes Agnostiques aux Modèles'
Consult the top 50 dissertations / theses for your research on the topic 'Méthodes Agnostiques aux Modèles.'
Danesh, Alaghehband Tina Sadat. "Vers une conception robuste en ingénierie des procédés. Utilisation de modèles agnostiques de l'interprétabilité en apprentissage automatique." Electronic Thesis or Diss., Toulouse, INPT, 2023. http://www.theses.fr/2023INPT0138.
Robust process design holds paramount importance in various industries, such as process and chemical engineering. The essence of robustness lies in ensuring that a process can consistently deliver desired outcomes for decision-makers and/or stakeholders, even when faced with intrinsic variability and uncertainty. A robustly designed process not only enhances product quality and reliability but also significantly reduces the risk of costly failures, downtime, and product recalls. It improves efficiency and sustainability by minimizing process deviations and failures. There are different methods to approach the robustness of a complex system, such as the design of experiments, robust optimization, and response surface methodology. Among the robust design methods, sensitivity analysis can be applied as a supportive technique to gain insights into how changes in input parameters affect performance and robustness. Due to the rapid development and advancement of engineering science, the use of physical models for sensitivity analysis presents several challenges, such as unsatisfied assumptions and computation time. These problems lead us to consider applying machine learning (ML) models to complex processes. As the issue of interpretability in ML has gained increasing importance, there is a growing need to understand how these models arrive at their predictions or decisions and how different parameters are related. Since their performance consistently surpasses that of other models, such as knowledge-based models, providing explanations, justifications, and insights into the workings of ML models not only enhances their trustworthiness and fairness but also empowers stakeholders to make informed decisions, identify biases, detect errors, and improve the overall performance and reliability of the process. Various methods are available to address interpretability, including model-specific and model-agnostic methods.
In this thesis, our objective is to enhance the interpretability of various ML methods while maintaining a balance between accuracy and interpretability, so as to assure decision-makers and stakeholders that our model or process can be considered robust. Simultaneously, we aim to demonstrate that users can trust ML model predictions backed by model-agnostic techniques, which work across various scenarios, including equation-based, hybrid, and data-driven models. To achieve this goal, we applied several model-agnostic methods, such as partial dependence plots, individual conditional expectations, and accumulated local effects, to diverse applications.
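The partial dependence plots mentioned in this abstract can be sketched in a few lines: fix the feature of interest at each grid value, average the model's predictions over the rest of the data, and plot the resulting curve. The toy `model_predict` function below is a stand-in, not a model from the thesis; a minimal sketch, assuming a NumPy-style prediction function.

```python
import numpy as np

# Toy surrogate "model" (illustrative only): prediction depends on x0 and x1.
def model_predict(X):
    return X[:, 0] ** 2 + 0.5 * X[:, 1]

def partial_dependence(predict, X, feature, grid):
    """Classic PDP estimator: force one feature to each grid value
    and average the model prediction over the remaining data."""
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v                    # freeze the feature of interest
        pd_values.append(predict(Xv).mean())  # marginalize over the others
    return np.array(pd_values)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
grid = np.linspace(-1, 1, 5)
pdp = partial_dependence(model_predict, X, 0, grid)
print(pdp)  # roughly traces v**2 plus a constant offset
```

Individual conditional expectation curves follow the same recipe without the final averaging step: one curve per data row instead of their mean.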
Grazian, Clara. "Contributions aux méthodes bayésiennes approchées pour modèles complexes." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED001.
Recently, the great complexity of modern applications, for instance in genetics, computer science, finance, climatic science, etc., has led to the proposal of new models which may more realistically describe reality. In these cases, classical MCMC methods fail to approximate the posterior distribution, because they are too slow to investigate the full parameter space. New algorithms have been proposed to handle these situations, where the likelihood function is unavailable. We will investigate many features of complex models: how to eliminate the nuisance parameters from the analysis and make inference on key quantities of interest, both in a Bayesian and a non-Bayesian setting, and how to build a reference prior.
Tarhini, Ali. "Analyse numérique des méthodes quasi-Monte Carlo appliquées aux modèles d'agglomération." Chambéry, 2008. http://www.theses.fr/2008CHAMS015.
Monte Carlo (MC) methods are probabilistic methods based on the use of random numbers in repeated experiments. Quasi-Monte Carlo (QMC) methods are deterministic versions of Monte Carlo methods: random sequences are replaced by low-discrepancy sequences, which have a better uniform repartition in the s-dimensional unit cube. We use a special class of low-discrepancy sequences called (t,s)-sequences. In this work, we develop and analyze Monte Carlo and quasi-Monte Carlo particle methods for agglomeration phenomena. We are interested, in particular, in the numerical simulation of the discrete coagulation equations (the Smoluchowski equation), the continuous coagulation equation, the continuous coagulation-fragmentation equation and the general dynamics equation (GDE) for aerosols. In all these particle methods, we write the equation verified by the mass distribution density and we approximate this density by a sum of n Dirac measures; these measures are weighted when simulating the GDE equation. We use an explicit Euler discretization scheme in time. For the simulation of coagulation and coagulation-fragmentation, the numerical particles evolve by using random numbers (for MC simulations) or by quasi-Monte Carlo quadratures. To ensure the convergence of the numerical scheme, we reorder the numerical particles by increasing mass at each time step. In the case of the GDE equation, we use a fractional step iteration scheme: coagulation is simulated as previously; other phenomena (like condensation, evaporation and deposition) are integrated by using a deterministic particle method for solving hyperbolic partial differential equations. We prove the convergence of the QMC numerical scheme in the case of the coagulation equation and the coagulation-fragmentation equation, when the number n of numerical particles goes to infinity.
All our numerical tests show that the numerical solutions calculated by QMC algorithms converge to the exact solutions and give better results than those obtained by the corresponding Monte Carlo strategies.
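The contrast between random and low-discrepancy sampling is easy to demonstrate on a toy integral. The sketch below builds Halton points (a standard low-discrepancy construction; the thesis itself uses (t,s)-sequences, which are different) and compares a quasi-Monte Carlo estimate of the integral of x*y over the unit square, whose exact value is 1/4, with a plain Monte Carlo one.

```python
import numpy as np

def radical_inverse(n, base):
    """Reverse the base-b digits of n across the radix point."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(n_points, bases=(2, 3)):
    """First n_points of the 2D Halton low-discrepancy sequence."""
    return np.array([[radical_inverse(i, b) for b in bases]
                     for i in range(1, n_points + 1)])

f = lambda p: p[:, 0] * p[:, 1]        # integrand with exact integral 1/4

qmc_pts = halton(1024)
mc_pts = np.random.default_rng(0).uniform(size=(1024, 2))
print(abs(f(qmc_pts).mean() - 0.25))   # QMC error
print(abs(f(mc_pts).mean() - 0.25))    # MC error, typically larger
```

The QMC error decreases roughly like (log N)^s / N for smooth integrands, versus the N^(-1/2) rate of plain Monte Carlo.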
Rakotomarolahy, Patrick. "Méthodes non paramétriques : estimation, analyse et applications aux cycles économiques." Paris 1, 2011. http://www.theses.fr/2011PA010045.
Castric, Sébastien. "Méthodes de recalage de modèles et application aux émissions des moteurs diesel." Compiègne, 2007. http://www.theses.fr/2007COMP1696.
For some decades, European vehicles have been subject to normative laws on pollutant emissions. To face these constraints, car manufacturers have used increasingly complex technologies, especially for diesel engine cars. This situation has led to a complexification of engine tuning, since the number of setting parameters has increased too. The present research work was carried out for the car manufacturer Renault SAS. It aims at proposing methods for readjusting models, applied to pollutant emission models of diesel engines. Renault decided to use techniques of design of experiments, modelling and optimization to solve the problem of diesel engine tuning for emissions. Even if this approach gave good results, it has some drawbacks. The tuning process is composed of loops, and each loop involves hardware changes in the engine. In this case, the model representing the engine's behaviour, which is a LOLIMOT model, is no longer valid. Considering that it is not possible to completely rebuild a model, a question arises: "How is it possible to readjust the model after a hardware change while doing as few tests as possible?" This PhD proposes some ways to solve this problem. The first one consists in using Bayesian theory. By using the initial model as a prior, we created an algorithm for readjusting LOLIMOT models. In addition, we proposed a method derived from the first one, which aims at using the tuner's knowledge about the engine as prior knowledge. We tested our methods by simulation and through tests made on a 2L diesel engine subjected to different hardware changes. In a second phase, we considered that even if Bayesian theory is able to take some knowledge into account, it does not capture the characteristics of the hardware change. Thus, we decided to create a new model integrating physical parameters, such as the number of holes in the injectors. We developed a model of diesel combustion.
It simulates the evolution of thermodynamic variables inside the combustion chamber, even in the multi-injection case. Next, we adapted pollutant models using these variables as inputs. We tested the whole model on the prediction of cylinder pressure and pollutants on 2L diesel engine tests.
Darblade, Gilles. "Méthodes numériques et conditions aux limites pour les modèles Shallow-Water multicouches." Bordeaux 1, 1997. http://www.theses.fr/1997BOR10588.
Debreu, Laurent. "Raffinement adaptatif de maillage et méthodes de zoom : application aux modèles d'océan." Université Joseph Fourier (Grenoble), 2000. http://www.theses.fr/2000GRE10004.
Montier, Laurent. "Application de méthodes de réduction de modèles aux problèmes d'électromagnétisme basse fréquence." Thesis, Paris, ENSAM, 2018. http://www.theses.fr/2018ENAM0029/document.
In the electrical engineering field, numerical simulation allows experiments to be avoided which can be expensive, difficult to carry out or harmful for the device. In this context, the Finite Element Method has become one of the most widely used approaches, since it allows precise results to be obtained on devices with complex geometries. However, these simulations can be computationally expensive because of a large number of unknowns and time-steps, and because of the strong nonlinearities of ferromagnetic materials that must be taken into account. Numerical techniques to reduce the computational effort are thus needed. In this context, model order reduction approaches seem well adapted to this kind of problem, since they have already been successfully applied to many engineering fields, among others fluid and solid mechanics. A first class of methods seeks the solution in a reduced basis, dramatically reducing the number of unknowns of the numerical model. The best-known techniques are probably the Proper Orthogonal Decomposition, the Proper Generalized Decomposition and the Arnoldi projection. The second class of approaches consists of methods that reduce the computational cost associated with nonlinearities, using interpolation methods like the Empirical Interpolation Method and the Gappy POD. This Ph.D. was carried out within the LAMEL, the joint laboratory between the L2EP and EDF R&D, in order to identify and implement the model order reduction methods which are best adapted to electrical engineering models. These methods are expected to reduce the computational cost while taking into account the motion of an electrical machine rotor, the nonlinearities of the ferromagnetic materials, and the mechanical and electrical environment of the device. Finally, an error indicator which evaluates the error introduced by the reduction technique has been developed, in order to guarantee the accuracy of the results obtained with the reduced model.
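The snapshot form of Proper Orthogonal Decomposition mentioned above reduces, in practice, to a truncated SVD of a matrix of solution states. The sketch below uses a synthetic low-rank snapshot matrix (a stand-in, not data from the thesis) to illustrate building the reduced basis and projecting a state onto it.

```python
import numpy as np

# Synthetic snapshot matrix: columns are "solution states" built from a
# few spatial modes with decaying amplitudes (stand-in for solver output).
x = np.linspace(0, 1, 100)
times = np.arange(10)
snapshots = sum(0.5 ** k * np.outer(np.sin(k * np.pi * x), np.cos(k * times))
                for k in range(1, 6))

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                                  # reduced dimension
basis = U[:, :r]

# Reduce then reconstruct one snapshot: coefficients live in R^r.
coeffs = basis.T @ snapshots[:, 3]
recon = basis @ coeffs
err = np.linalg.norm(recon - snapshots[:, 3]) / np.linalg.norm(snapshots[:, 3])
print(err)                             # tiny here: the data has exact rank 5
```

In a real reduced-order model the basis is built once offline, and the governing equations are then projected onto it so that only r unknowns are solved per time step.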
Infante, Acevedo José Arturo. "Méthodes et modèles numériques appliqués aux risques du marché et à l'évaluation financière." Phd thesis, Université Paris-Est, 2013. http://tel.archives-ouvertes.fr/tel-00937131.
Baskind, Alexis. "Modèles et méthodes de description spatiale de scènes sonores : application aux enregistrements binauraux." Paris 6, 2003. http://www.theses.fr/2003PA066407.
Infante, Acevedo José Arturo. "Méthodes et modèles numériques appliqués aux risques du marché et à l’évaluation financière." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1086/document.
This work is organized in two themes: (i) a novel numerical method to price options on many assets; (ii) liquidity risk, limit order book modeling and market microstructure. First theme: Greedy algorithms and applications for solving partial differential equations in high dimension. Many problems of interest for various applications (material sciences, finance, etc.) involve high-dimensional partial differential equations (PDEs). The typical example in finance is the pricing of a basket option, which can be obtained by solving the Black-Scholes PDE with dimension equal to the number of underlying assets. We propose to investigate an algorithm which has been recently proposed and analyzed in [ACKM06, BLM09] to solve such problems and try to circumvent the curse of dimensionality. The idea is to represent the solution as a sum of tensor products and to compute iteratively the terms of this sum using a greedy algorithm. The resolution of high-dimensional partial differential equations is highly related to the representation of high-dimensional functions. In Chapter 1, we describe various linear approaches existing in the literature to represent high-dimensional functions and we introduce the high-dimensional problems in finance that we will address in this work. The method studied in this manuscript is a non-linear approximation method called the Proper Generalized Decomposition. Chapter 2 shows the application of this method to approximate the solution of a linear PDE (the Poisson problem) and also to approximate a square integrable function by a sum of tensor products. A numerical study of this last problem is presented in Chapter 3. The Poisson problem and the approximation of a square integrable function will serve as a basis in Chapter 4 for solving the Black-Scholes equation using the PGD approach. In numerical experiments, we obtain results for up to 10 underlyings. Second theme: Liquidity risk, limit order book modeling and market microstructure.
Liquidity risk and market microstructure have become in the past years an important topic in mathematical finance. One possible reason is the deregulation of markets and the competition between them to try to attract as many investors as possible. Thus, quotation rules are changing and, in general, more information is available. In particular, it is possible to know at each time the awaiting orders on some stocks and to have a record of all the past transactions. In this work we study how to use this information to optimally execute buy or sell orders, which is linked to the behaviour of traders who want to minimize their trading cost. In [AFS10], Alfonsi, Fruth and Schied have proposed a simple LOB model. In this model, it is possible to explicitly derive the optimal strategy for buying (or selling) a given amount of shares before a given deadline. Basically, one has to split the large buy (or sell) order into smaller ones in order to find the best trade-off between attracting new orders and the price of the orders. Here, we focus on an extension of the Limit Order Book (LOB) model with general shape introduced by Alfonsi, Fruth and Schied. The additional feature is a time-varying LOB depth, a feature of the LOB highlighted in [JJ88, GM92, HH95, KW96]. We solve the optimal execution problem in this framework for both discrete and continuous time strategies. This gives in particular sufficient conditions to exclude Price Manipulations in the sense of Huberman and Stanzl [HS04] or Transaction-Triggered Price Manipulations (see Alfonsi, Schied and Slynko). These conditions give interesting qualitative insights on how market makers may create price manipulations.
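The greedy sum-of-tensor-products idea from the first theme can be illustrated in two dimensions, where each term is a rank-one outer product fitted to the current residual by alternating least squares. This is a toy PGD-style sketch on a discretized smooth kernel, not the solver from the thesis.

```python
import numpy as np

def greedy_tensor_approx(F, terms=5, sweeps=50):
    """Greedily approximate matrix F as a sum of rank-one outer
    products r_k s_k^T, fitting each new term to the residual by
    alternating least squares (a PGD-style update)."""
    approx = np.zeros_like(F)
    for _ in range(terms):
        R = F - approx                 # current residual
        r = np.ones(F.shape[0])
        for _ in range(sweeps):
            s = R.T @ r / (r @ r)      # best s given r (least squares)
            r = R @ s / (s @ s)        # best r given s
        approx += np.outer(r, s)
    return approx

# Discretized smooth 2D function: low effective rank, ideal for PGD.
x = np.linspace(0, 1, 50)
y = np.linspace(0, 1, 60)
F = np.exp(-np.subtract.outer(x, y) ** 2)
A = greedy_tensor_approx(F, terms=5)
rel_err = np.linalg.norm(F - A) / np.linalg.norm(F)
print(rel_err)
```

In high dimension the same idea is applied with one factor per coordinate direction, which is what lets the method sidestep the curse of dimensionality for functions with low-rank structure.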
Vivares, Florence. "Contribution à la modélisation de méthodes : application aux méthodes de Jackson." Toulouse, ENSAE, 1991. http://www.theses.fr/1991ESAE0019.
Kreit, Zakwan. "Contribution à l'étude des méthodes quantitatives d'aide à la décision appliquées aux indices du marché d'actions." Bordeaux 4, 2007. https://tel.archives-ouvertes.fr/tel-00413979.
This thesis is divided into two parts: first, the study of different quantitative methods used for decision-making support; second, the study and analysis of the stock market index in Egypt. Indeed, the Egyptian stock market is considered to be inefficient with respect to international stock markets. Accordingly, we expect it to be very difficult to use traditional forecasting methods to predict the trend of the stock market index. In order to predict the Cairo & Alexandria Stock Exchanges (CASE) index, the Box-Jenkins Auto Regressive Integrated Moving Average (ARIMA) and Artificial Neural Network (ANN) methods were applied. For this purpose, we used stock market index samples for the CASE collected from 1992-2005 (3311 daily time series observations). The traditional ARIMA forecasting method was found to be unable to predict the CASE stock market index. However, the ANN prediction method was found to be able to follow the real trend of the index. This was confirmed by the Mean Absolute Percentage Error (MAPE) and Mean Square Error (MSE). Hence, neural networks are efficient for weekly prediction of financial stock markets. Consequently, the individual investor could make the most of this forecasting method for his decisions, especially in the stock market.
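The MAPE and MSE criteria used to compare the forecasts are one-liners; the sketch below shows one common definition of each (the index values are made up for illustration, not CASE data).

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def mse(actual, forecast):
    """Mean Square Error."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean((actual - forecast) ** 2)

index = [100.0, 102.0, 101.0, 105.0]   # hypothetical index values
pred = [101.0, 101.0, 102.0, 104.0]    # hypothetical forecasts
print(mape(index, pred), mse(index, pred))
```

MAPE is scale-free, which makes it convenient for comparing forecasts across series of different magnitudes, while MSE penalizes large errors more heavily.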
Poignard, Clair. "Méthodes asymptotiques pour le calcul des champs électromagnétiques dans des milieux à couches minces : application aux cellules biologiques." Lyon 1, 2006. http://tel.archives-ouvertes.fr/docs/00/13/71/88/PDF/these.pdf.
This thesis deals with asymptotic methods for electromagnetism in heterogeneous domains with thin layers. The motivation is the computation of electromagnetic fields in biological cells, which are highly heterogeneous media. We replace the thin cytoplasmic membrane with an appropriate condition on the boundary of the cytoplasm and we estimate the error. Two equations are considered: the steady-state voltage equation and the Helmholtz equation. For the low-frequency case, we suppose that a Neumann condition is imposed on the exterior boundary of the cell. We build approximate boundary conditions, which are valid for an insulating (even perfectly insulating) thin membrane. For the Helmholtz equation, we suppose that the cell is embedded in an ambient medium and we build transmission conditions equivalent to the thin membrane. All these results are proved and numerically computed. For the high-frequency regime (the frequency tends to infinity), we build a generalized impedance condition equivalent to the membrane using a pseudo-differential approach. We conclude this thesis with a work with Michael Vogelius on high-frequency scattering by a small circular inhomogeneity.
Achouch, Ayman. "Analyse économétrique des prix des métaux : une application multivariée des méthodes statistiques aux séries temporelles." Montpellier 1, 1998. http://www.theses.fr/1998MON10025.
Ionescu, Adrian M. "Modèles et méthodes associés à la caractérisation électrique du TMOS : application aux technologies SOI." Grenoble INPG, 1997. http://www.theses.fr/1997INPG0009.
Le, Hoang Bao. "Contribution aux méthodes de synthèse de correcteurs d’ordres réduits sous contraintes de robustesse et aux méthodes de réduction de modèles pour la synthèse robuste en boucle fermée." Grenoble INPG, 2010. http://www.theses.fr/2010INPG0135.
LTI systems are subject to physical and technological constraints. We have shown that these constraints limit the achievable bandwidth in closed loop. Consequently, it is sufficient to model and analyze these systems on a limited frequency band, rather than at all frequencies, and low-order controllers may give full satisfaction. In this thesis, we are interested in the design of reduced fixed-order controllers with robustness constraints, and in model reduction for such systems. The proposed method consists in determining a fixed-structure controller that optimizes the rejection of a step load disturbance, subject to robustness constraints, i.e. minimum modulus margin, minimum phase margin and maximum amplification of measurement noise. Based on a base of reduced-order generic models, the method yields a semi-analytical mixed H2/H-infinity optimization formulation with the objective function and inequality constraints expressed in terms of controller gains. When the system to be controlled is of high order, to obtain the parameters of the generic models, we propose a model reduction method that guarantees the closed-loop robustness margins. If the system model is not available, we present an experimental identification method based on relay feedback, relying on the base of generic models. In order to make the proposed approach more accessible for industrial use, our developments have been incorporated into software tools supporting control system design.
Raffard, Delphine. "Modélisation de structures maçonnées par homogénéisation numérique non-linéaire : application aux ouvrages d'intérêt archéologique." Vandoeuvre-les-Nancy, INPL, 2000. http://www.theses.fr/2000INPL134N.
Galicher, Hervé. "Analyse mathématique de modèles en nanophysique." Paris 6, 2009. http://www.theses.fr/2009PA066641.
Genty, Joël. "Modélisation et simulation dynamique du couplage bioréaction-filtration : application aux fermentations alcoolique et lactique." Châtenay-Malabry, Ecole centrale de Paris, 1992. http://www.theses.fr/1992ECAP0262.
Gallo, Yves. "Contribution aux méthodes de modifications structurales en dynamique : réanalyse modale de modèles enrichis, procédure modale combinée." Valenciennes, 1992. https://ged.uphf.fr/nuxeo/site/esupversions/90d61773-5e5d-4e1d-b6a6-dc9a65a6d41f.
Antonios, Joe. "Développement de modèles et de méthodes de calculs électriques et thermiques appliqués aux onduleurs à IGBT." Nantes, 2011. https://archive.bu.univ-nantes.fr/pollux/show/show?id=0535288e-f5c2-4601-8af7-68529afe93c2.
This work focuses on the electrical and thermal modeling of IGBT-based (Insulated Gate Bipolar Transistor) power inverters. In order to design the best cooling systems without overestimating the size of the heat sink, a precise calculation of electrical losses must be achieved and a thermal model must be developed, leading to a good prediction of the junction temperature. The work presented in this report is oriented toward applications at very low frequencies. In such situations the junction temperature variation is important, and the calculation must be performed as a function of time. Regarding loss determination, an approach was proposed that uses simple current and voltage profiles while producing a sufficiently detailed loss signal. For the thermal part, a three-dimensional RC network (RC-3D) was developed. This model is based on the physical parameters of the module and takes into account the vertical and lateral heat diffusion in the module. In addition, this network allows the temperature changes to be represented on two different time scales (nanoseconds and seconds), taking place in the junction and in the assembly. Results obtained by simulation using Matlab/Simulink were validated by experimental measurements. Finally, a brief example of how the electro-thermal model can be used is presented. It corresponds to a cycle of an electric vehicle from startup to a predefined steady state. This application allows determining how the IGBT module heats up.
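The idea of an RC thermal network can be illustrated with a one-dimensional ladder; the thesis uses a full 3D network, and the three cells, resistances and capacitances below are invented values, not the module's parameters. Each node integrates the net heat flow through its thermal capacitance.

```python
import numpy as np

# Hypothetical 3-cell RC ladder (junction -> case -> heat sink): each cell
# has a thermal capacitance C_i and a resistance R_i toward the next node.
R = np.array([0.05, 0.10, 0.50])   # K/W, assumed values
C = np.array([0.01, 0.10, 5.00])   # J/K, assumed values
T_amb = 25.0                       # ambient temperature, degC
P_loss = 100.0                     # W dissipated in the junction

T = np.full(3, T_amb)
dt = 2e-4                          # small enough for explicit Euler stability
for _ in range(int(30 / dt)):      # 30 s of heating
    T_next = np.append(T[1:], T_amb)
    q_out = (T - T_next) / R       # heat leaving each cell through its R
    q_in = np.append(P_loss, q_out[:-1])   # heat entering each cell
    T = T + dt * (q_in - q_out) / C
print(T[0])   # junction temperature, close to its steady-state value
```

At steady state the junction temperature is simply T_amb + P_loss * (R1 + R2 + R3) = 90 degC here; the transient shows the fast junction time constant riding on the slow heat-sink one.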
Gasc, Thibault. "Modèles de performance pour l'adaptation des méthodes numériques aux architectures multi-coeurs vectorielles. Application aux schémas Lagrange-Projection en hydrodynamique compressible." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN063/document.
This work is dedicated to hydrodynamics. For decades, numerous numerical methods have been developed to deal with this type of problem. However, both the evolution and the complexity of computing architectures make us rethink or redesign our numerical solvers in order to use massively parallel computers efficiently. Using performance modeling, we perform an analysis of a reference Lagrange-Remap solver in order to deeply understand its behavior on current supercomputers and to optimize its implementation. Thanks to the conclusions of this analysis, we derive a new numerical solver which by design achieves better performance. We call it the Lagrange-Flux solver. The accuracy obtained with this solver is similar to that of the reference one. The derivation of this method also leads to rethinking the Remap step.
Wane, Bocar Amadou. "Adaptation de maillages et méthodes itératives avec applications aux écoulements à surfaces libres turbulents." Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/29353/29353.pdf.
Franck, Emmanuel. "Construction et analyse numérique de schémas asymptotic preserving sur maillages non structurés : Application au transport linéaire et aux systèmes de Friedrichs." Paris 6, 2012. http://www.theses.fr/2012PA066393.
The transport equation in highly scattering regimes has a limit in which the dominant behavior is given by the solution of a diffusion equation. Angular discretizations like the discrete ordinate method Sn or the truncated spherical harmonic expansion Pn have the same property. For such systems it would be interesting to construct finite volume schemes on unstructured meshes which have the same dominant behavior even if the mesh is coarse (these schemes are called asymptotic preserving schemes). Indeed these models can be coupled with Lagrangian hydrodynamics codes which generate very distorted meshes. To begin, we consider the lowest-order angular discretization of the transport equation, that is the P1 model, also called the hyperbolic heat equation. After an introduction of 1D methods, we start by modifying the classical edge scheme with the Jin-Levermore procedure; this scheme is not valid in the diffusion regime because the limit diffusion scheme (Two-Point Flux Approximation) is not consistent on unstructured meshes. To solve this problem we propose new schemes valid on unstructured meshes. These methods are based on the nodal scheme (GLACE scheme) designed for acoustic and gas dynamics problems, coupled with the Jin-Levermore procedure. We obtain two schemes valid on unstructured meshes which reduce in 1D to the Gosse-Toscani scheme. The limit diffusion scheme obtained is a new nodal scheme. Convergence and stability proofs have been established for these schemes. Subsequently, these methods have been extended to higher-order angular discretizations like the Pn and Sn models, using a splitting strategy between the lowest-order and the higher-order angular discretizations. To finish, we study the discretization of the absorption/emission problem in radiative transfer and a non-linear moment model called the M1 model.
To treat the M1 model we propose to use a formulation as a gas dynamics system coupled with a Lagrange+remap nodal scheme and the Jin-Levermore method. The numerical method obtained preserves the asymptotic limit, the maximum principle, and the entropy inequality on unstructured meshes.
Ould, Isselmou Yahya. "Interpolation de niveaux d’exposition aux émissions radioélectriques in situ à l’aide de méthodes géostatistiques." Paris, ENMP, 2007. http://www.theses.fr/2007ENMP1484.
Radioelectric norms give different limit values of radioelectric exposure. Exposure levels measured near radio and telecommunication antennas are very small compared to the values recommended by the International Commission on Non-Ionizing Radiation Protection. Today, persons near radio transmitters seek an evaluation of exposure levels and of the probability of exceeding some threshold, not only conformity to norms. A probabilistic framework using geostatistical methods is proposed that permits this evaluation. In this thesis we present an application of linear geostatistical methods, in particular kriging, to the estimation of radioelectric exposure levels starting from the output of a numerical model. The application of kriging with external drift to measurements obtained by a dosimeter is also presented. In these case studies, the Cauchy variogram model shows good adequacy with the variability of the power density. A final application aims to evaluate the probability with which the exposure can exceed a determined threshold. We use two methods of non-linear geostatistics, which require a Gaussian random function framework. The practical implementation of these methods involves a transformation of the exposure to Gaussian values. A comparison between the probabilities of threshold exceedance obtained by applying the two methods to measurements is presented.
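Ordinary kriging, as used above, amounts to solving a small linear system built from the variogram. The sketch below uses one common parameterization of a Cauchy-type variogram and made-up 1D exposure values; the sill and range are illustrative, not fitted parameters from the thesis.

```python
import numpy as np

def cauchy_variogram(h, sill=1.0, a=10.0):
    """One common Cauchy-type variogram parameterization (assumed here):
    sill * (1 - 1 / (1 + (h/a)^2)), with range parameter a."""
    return sill * (1.0 - 1.0 / (1.0 + (h / a) ** 2))

def ordinary_kriging(xs, zs, x0, variogram):
    """Ordinary kriging of a 1D field: solve for weights summing to 1."""
    n = len(xs)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = variogram(np.abs(np.subtract.outer(xs, xs)))
    A[n, :n] = A[:n, n] = 1.0      # unbiasedness constraint
    A[n, n] = 0.0
    b = np.append(variogram(np.abs(xs - x0)), 1.0)
    w = np.linalg.solve(A, b)[:n]  # last entry is the Lagrange multiplier
    return w @ zs

xs = np.array([0.0, 1.0, 3.0, 6.0])      # measurement positions (made up)
zs = np.array([1.2, 1.0, 0.7, 0.4])      # hypothetical exposure levels
print(ordinary_kriging(xs, zs, 2.0, cauchy_variogram))
```

A useful sanity check is that kriging is an exact interpolator: predicting at a measurement position returns the measured value.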
Charlette, Fabrice. "Simulation numérique de la combustion turbulente prémélangée par méthodes aux grandes échelles." Châtenay-Malabry, Ecole centrale de Paris, 2002. http://www.theses.fr/2002ECAP0861.
Ribière-Tharaud, Nicolas. "Amélioration des méthodes de qualification des véhicules automobiles en CEM : applications aux faisceaux de câbles." Paris 11, 2001. http://www.theses.fr/2001PA112213.
Lesage, David. "Modèles, primitives et méthodes de suivi pour la segmentation vasculaire : application aux coronaires en imagerie tomodensitométrique 3D." Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005908.
Emonot, Philippe. "Méthodes de volumes éléments finis : applications aux équations de Navier Stokes et résultats de convergence." Lyon 1, 1992. http://www.theses.fr/1992LYO10280.
Rouger, Frédéric. "Application des méthodes numériques aux problèmes d'identification des lois de comportement du matériau bois." Compiègne, 1988. http://www.theses.fr/1988COMPD121.
Ndanou, Serge. "Etude mathématique et numérique des modèles hyperélastiques et visco-plastiques : applications aux impacts hypervéloces." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4347/document.
A mathematical model of diffuse interface for the interaction of N elasto-plastic solids was built. It is an extension of the model developed by Favrie & Gavrilyuk (2012) for fluid-solid interaction. Despite the large number of equations present in this model, two remarkable properties have been demonstrated: it is hyperbolic for any admissible deformations and satisfies the second principle of thermodynamics. In this model, the internal energy of each solid is taken in separable form: it is the sum of a hydrodynamic energy (which depends only on the density and entropy) and a shear energy. The equation of state of each solid is such that if the shear modulus of the solid vanishes, we recover the equations of fluid mechanics. This model allows, in particular, predicting the deformation of elastic-plastic solids in small and very large deformations, and predicting the interaction of an arbitrary number of elasto-plastic solids and fluids. The ability of this model to solve complex problems has been demonstrated; without being exhaustive, one can mention the spall phenomenon in solids, and fracturing and fragmentation in solids.
Persoons, Renaud. "Etude des méthodes et modèles de caractérisation de l'exposition atmosphérique aux polluants chimiques pour l'évaluation des risques sanitaires." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00747456.
Adeline, Romain. "Méthodes pour la validation de modèles formels pour la sûreté de fonctionnement et extension aux problèmes multi-physiques." Toulouse, ISAE, 2011. http://www.theses.fr/2011ESAE0003.
Vérant, Jean-Luc. "Etude de méthodes numériques et de modèles physico-chimiques pour des écoulements hypersoniques réactifs : application aux véhicules spatiaux." Aix-Marseille 1, 1990. http://www.theses.fr/1990AIX11314.
Réau, Manon. "Importance des données inactives dans les modèles : application aux méthodes de criblage virtuel en santé humaine et environnementale." Thesis, Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1251/document.
Virtual screening is widely used in the early stages of drug discovery and to build toxicity prediction models. Commonly used protocols include evaluating the performance of different tools on benchmarking databases before applying them to prospective studies. The content of benchmarking databases is a critical point; most oppose active data to putative inactive data because of the scarcity of published inactive data in the literature. Nonetheless, experimentally validated inactive data also carry information. We therefore constructed the NR-DBIND, a database dedicated to nuclear receptors that contains solely experimentally validated active and inactive data. The importance of integrating inactive data into the construction of docking and pharmacophore models was evaluated using the NR-DBIND data. Virtual screening protocols were used to resolve the potential binding mode of small molecules on FXR, NRP-1 and TNFα.
Hamza, Ghazoi. "Contribution aux développements des modèles analytiques compacts pour l’analyse vibratoire des systèmes mécatroniques." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC018/document.
This thesis focuses on the development of a method for the preliminary design of mechatronic systems that takes the vibratory aspect into account without resorting to costly design techniques such as 3D CAD and the finite element method. In an early stage of the design process of mechatronic systems, simple analytical models are necessary for the architect engineer in mechatronics to make important conceptual decisions related to multi-physics coupling and vibration. For this purpose, a library of flexible elements based on analytical models was developed in this thesis using the Modelica modeling language. To demonstrate the possibilities of this approach, we conducted a study of the vibration response of several mechatronic systems. The pre-sizing approach was first applied to a simple mechatronic system consisting of a rectangular plate supporting electrical components such as electric motors and electronic cards, and then to a wind turbine, considered as a complete mechatronic system. Simulation results were compared with the finite element method and with other studies found in the scientific literature, and showed that the developed compact models help the mechatronic architect obtain simulation results with high accuracy at low computational cost.
Galié, Thomas. "Couplage interfacial de modèles en dynamique des fluides : application aux écoulements diphasiques." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2009. http://tel.archives-ouvertes.fr/tel-00395593.
Perot, Thomas. "Quel est le niveau de détail pertinent pour modéliser la croissance d'une forêt mélangée ? Comparaison d'une famille de modèles et application aux peuplements mélangés chêne sessile - pin sylvestre." Paris, AgroParisTech, 2009. http://tel.archives-ouvertes.fr/docs/00/43/25/73/PDF/ManuscritTheseTPerotVF.pdf.
Appropriate tools and models are needed for the management of mixed forests. The aim of this thesis is to show how the construction and comparison of models with different levels of detail can help choose the most appropriate level to model the growth of a mixed stand. We developed a family of models at different levels of detail from data collected in mixed stands of sessile oak (Quercus petraea L.) and Scots pine (Pinus sylvestris L.): a distance-independent tree model (MAID), a distance-dependent tree model (MADD), three stand models and an intermediate model bridging the MAID and the MADD. We ensured consistency between models using several approaches in order to make relevant comparisons. These models have given us some knowledge of the growth and dynamics of these forests, in particular of the spatial and temporal interactions between oaks and pines. Thus, we showed a compensatory growth phenomenon between the two species using the MAID. The MADD made it possible to show that, in these stands, intraspecific competition is stronger than interspecific competition. A stand model developed from the MADD helped us study the influence of the mixing rate on production. To assess the quality of the models' predictions, we used an independent data set obtained by splitting our data. For example, we showed that the MAID was more efficient than the MADD at predicting individual increments. The models were also compared on application examples with short- or medium-term simulations. The proposed approach is of interest both for understanding the studied phenomena and for developing predictive tools. The different results of this work allowed us to assess the relevance of each type of model for different contexts of use. This very general approach could be applied to the modeling of other processes such as mortality or regeneration.
Sabat, Macole. "Modèles euleriens et méthodes numériques pour la description des sprays polydisperses turbulents." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC086.
In aeronautical combustion chambers, the ability to simulate two-phase flows is gaining importance, since it is one of the elements needed for the full understanding and prediction of the combustion process. This effort is motivated by the objectives of improving engine performance and better predicting pollutant emissions. On the industrial scale, the fuel spray found downstream of the injector is preferably described through Eulerian methods, owing to the intrinsic statistical convergence of these methods, their natural coupling to the gas phase and their efficiency in terms of High Performance Computing compared to Lagrangian methods. In this thesis, the use of the Kinetic-Based Moment Method (KBMM) with an Anisotropic Gaussian (AG) closure is investigated. By solving all velocity moments up to second order, this model statistically reproduces the main features of small-scale Particle Trajectory Crossing (PTC). The resulting hyperbolic system of equations is mathematically well-posed and satisfies realizability properties. This model is compared to the first-order model in the KBMM hierarchy, the monokinetic (MK) model, which is suitable for low-inertia particles but leads to a weakly hyperbolic system that can generate δ-shocks. Several schemes are compared for the resolution of the hyperbolic and weakly hyperbolic systems of equations. These methods are assessed on their ability to handle the singularities naturally encountered due to the moment closures, especially without globally degenerating to lower order or violating the realizability constraints. The AG closure is evaluated for the Direct Numerical Simulation of 3D turbulent particle-laden flows using the ASPHODELE solver for the gas phase and the MUSES3D solver for the Eulerian spray, in which the new model is implemented. The results are compared to a reference Lagrangian simulation as well as to the MK results.
Through the qualitative and quantitative results, the AG closure is found to be a predictive method for the description of moderately inertial particles and a good candidate for complex simulations in realistic configurations where small-scale PTC occurs. Finally, within the framework of industrial turbulence simulations, a fully kinetic Large Eddy Simulation (LES) formalism is derived based on the AG model. This strategy of applying the filter directly at the kinetic level helps devise realizability conditions. Preliminary results for the AG-LES model are evaluated in 2D, in order to investigate the sensitivity of the LES results to the subgrid closures.
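As a hedged illustration of the velocity moments tracked by kinetic-based moment methods and of the realizability constraint on the Anisotropic Gaussian covariance described in this abstract (the sampled velocity distribution and all variable names below are assumptions for the sketch, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2D particle velocity samples standing in for a spray population.
v = rng.normal(loc=[1.0, -0.5], scale=[0.3, 0.2], size=(10000, 2))
weights = np.full(len(v), 1.0 / len(v))  # equal statistical weights

# Velocity moments up to second order, as solved by the AG closure.
m0 = weights.sum()                      # normalized number density
m1 = weights @ v                        # momentum; mean velocity is m1 / m0
mean = m1 / m0
m2 = (weights[:, None] * v).T @ v       # second-order moment tensor
cov = m2 / m0 - np.outer(mean, mean)    # covariance of the anisotropic Gaussian

# Realizability: this covariance must be symmetric positive semi-definite.
eigvals = np.linalg.eigvalsh(cov)
```

Here realizability reduces to checking that `eigvals` are non-negative, which is the kind of constraint the numerical schemes in the abstract must preserve.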
Chaudret, Robin. "Compréhension et modélisation multi-échelle du comportement des cations métalliques dans des milieux complexes : des méthodes interprétatives aux champs de forces polarisables." Paris 6, 2011. http://www.theses.fr/2011PA066126.
Gaudel, Romaric. "Paramètres d'ordre et sélection de modèles en apprentissage : caractérisation des modèles et sélection d'attributs." Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00549090.
Tabourier, Lionel. "Méthode de comparaison des topologies de graphes complexes : applications aux réseaux sociaux." Paris 6, 2010. http://www.theses.fr/2010PA066335.
Poulin, Cyndie. "Etudes des matériaux, composants et systèmes dans le domaine térahertz par analogie aux méthodes optiques." Thesis, Ecole centrale de Marseille, 2018. http://www.theses.fr/2018ECDM0010/document.
The aim of my thesis is to extend the electromagnetic models already existing at the Institut Fresnel for optical frequencies towards the terahertz (THz) range, in order to gain a better knowledge of the physical phenomena involved in THz light-matter interactions. This understanding would improve the analysis of acquired THz images and allow a better definition of the configurations of the optical systems we use. To carry out this work, we compare the results of the model with those of THz imaging experiments led by Terahertz Waves Technologies. In the future, the modelling could become a predictive tool for the characterization of materials in the THz domain. THz waves are located between the far infrared and microwaves in the electromagnetic spectrum, from 0.01 mm to 3 mm (or 100 GHz to 30 THz). Depending on the frequencies used, these waves combine advantages of optical waves and of microwaves. THz imaging presents a high potential for the characterization of materials, because these waves can penetrate many materials that are opaque to visible and infrared light. Detection of defects, delaminations and the presence of humidity are examples of problems that can be investigated with THz light. First, I modelled the optical response of planar, homogeneous, isotropic polymeric samples, with good agreement between calculation and measurement. These results allowed first simulated images consistent with THz imaging. The study was then extended to anisotropic materials found in current industrial environments, as well as to objects of cylindrical shape. The developed models take as input the complex refractive index of a sample and its thickness, which is why a chapter is devoted to the method implemented to estimate these parameters from THz Time-Domain Spectroscopy measurements.
Alexandre, Ludovic. "Méthode de flux normal pour le traitement des conditions aux bords dans le cadre des volumes finis : application aux écoulements monophasiques et diphasiques." Paris 11, 2006. http://www.theses.fr/2006PA112062.
This thesis presents a study of the normal flux method for the treatment of boundary conditions in the finite volume framework applied to hyperbolic systems. The first chapter is devoted to the construction of the normal flux method based on the finite volume scheme with characteristic flux. By directly treating the conservation of the normal flux for outgoing waves, this method is more precise than the method of characteristic variables and does not require ad hoc procedures like the Riemann invariants. It also allows a simple treatment of nonreflective boundary conditions. Under certain assumptions, we show that the choice of boundary conditions satisfies the concept of a well-posed problem. In the following chapter, numerical examples validate the normal flux method for the Euler equations. The normal flux method gives results equivalent to those obtained with a method based on the resolution of a partial Riemann problem at the boundary, and closer to the reference solutions than with the mirror treatment. The treatment of the nonreflective boundary conditions behaves as expected, namely it avoids the reflection of disturbing waves. In the third chapter, we consider a two-phase two-fluid system to study the adaptability of the normal flux method to this nonconservative and nonhyperbolic complex system. We present the characteristics of the two-phase system and the solutions proposed to make it hyperbolic. Numerical simulations show that the normal flux method makes it possible to treat two-phase flows, and confirm the studies undertaken, such as the comparison with the mirror treatment of wall conditions.
Deaconu, Madalina. "Processus stochastiques associés aux équations d'évolution linéaires ou non-linéaires et méthodes numériques probabilistes." Habilitation à diriger des recherches, Université Henri Poincaré - Nancy I, 2008. http://tel.archives-ouvertes.fr/tel-00590778.
Lu, Ye. "Construction d’abaques numériques dédiés aux études paramétriques du procédé de soudage par des méthodes de réduction de modèles espace-temps." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI103/document.
The use of standard numerical simulations for studies of the influence of input parameters (materials, loading, boundary conditions, geometry, etc.) on the quantities of interest in welding (residual stresses, distortion, etc.) proves too long and costly due to the multiparametric aspect of welding. In order to explore high-dimensional parametric spaces with cheaper calculations, it seems appropriate to use model reduction approaches. In this work, an a posteriori, non-intrusive strategy is developed to construct computational vademecums dedicated to parametric studies of welding. In an offline phase, a snapshot database is pre-computed with an optimal choice of input parameters given by a “multi-grids” approach (in parameter space). To explore other parameter values, an interpolation method based on Grassmann manifolds is proposed to adapt both the space and time reduced bases derived from the SVD. This method appears more efficient than standard interpolation methods, especially in nonlinear cases. In order to explore high-dimensional parametric spaces, a tensor decomposition method (HOPGD) has also been studied. For the optimality of the computational vademecum, we propose a convergence acceleration technique for HOPGD and a “sparse grids” approach that allows efficient sampling of the parameter space. Finally, computational vademecums of dimension up to 10 with controlled accuracy have been constructed for different types of welding parameters (materials, loading, geometry).
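The offline snapshot/SVD step described in this abstract can be sketched in Python; the toy parametrized field below merely stands in for a welding simulation output and is purely an assumption for illustration, not the thesis' actual model:

```python
import numpy as np

def toy_field(param, x):
    # Hypothetical parametrized field standing in for an expensive simulation.
    return np.exp(-param * x) * np.sin(np.pi * x)

# Offline phase: pre-compute a snapshot database over sampled parameter values.
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 10)
snapshots = np.column_stack([toy_field(p, x) for p in params])  # (200, 10)

# SVD of the snapshot matrix yields a reduced basis in space.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
rank = int(np.searchsorted(energy, 1.0 - 1e-10) + 1)  # keep dominant modes
basis = U[:, :rank]

# Online phase: project the field at a new, unseen parameter value.
new_field = toy_field(1.3, x)
reconstruction = basis @ (basis.T @ new_field)
error = np.linalg.norm(new_field - reconstruction) / np.linalg.norm(new_field)
```

The thesis goes further (space-time bases, Grassmann-manifold interpolation between reduced bases, HOPGD), but this projection step conveys the basic offline/online split.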
Feuardent, Valérie. "Amélioration des modèles par recalage : application aux structures spatiales." Cachan, Ecole normale supérieure, 1997. http://www.theses.fr/1997DENS0019.
Echague, Eugénio. "Optimisation globale sans dérivées par minimisation de modèles simplifiés." Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0016.
In this thesis, we study two global derivative-free optimization methods: the method of moments and surrogate methods. The method of moments is implemented as the solver of the sub-problems in a derivative-free optimization method and successfully tested on an engine calibration problem. We also explore its dual approach and study the approximation of a function by a sum of squares of polynomials plus a constant. Concerning surrogate methods, we construct a new approximation using the Sparse Grid interpolation method, which builds an accurate model from a limited number of function evaluations. This model is then locally refined near the points with low function values. The numerical performance of this new method, called GOSgrid, is tested on classical optimization test functions and finally on an inverse parameter identification problem, showing good results compared to some other existing methods in terms of number of function evaluations.
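The surrogate idea of sampling coarsely and then refining near points with low function values can be illustrated with a minimal sketch; this is not the GOSgrid algorithm itself (which uses Sparse Grid interpolation), and the objective function is an assumption chosen for the example:

```python
import numpy as np

def expensive_f(x):
    # Stand-in for an expensive black-box objective (assumption for illustration).
    return (x - 0.7)**2 + 0.1 * np.sin(8 * x)

# Initial coarse sampling of the 1D design space [0, 1].
xs = np.linspace(0.0, 1.0, 5)
ys = np.array([expensive_f(x) for x in xs])
width = xs[1] - xs[0]

for _ in range(12):
    # Refine locally: probe on either side of the current best point
    # with a halved step, spending evaluations only where values are low.
    best = xs[np.argmin(ys)]
    width /= 2.0
    new_x = np.clip([best - width, best + width], 0.0, 1.0)
    xs = np.concatenate([xs, new_x])
    ys = np.concatenate([ys, [expensive_f(x) for x in new_x]])

best_x = xs[np.argmin(ys)]
```

The key design choice, shared with surrogate methods, is that the budget of true function evaluations is concentrated near promising regions instead of being spread uniformly.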
Diallo, Abdourahmane. "Théorie et estimation des modèles spatiaux à choix discret : application aux modèles d'occupation du sol en région PACA." Paris, EHESS, 2014. http://www.theses.fr/2014EHES0167.
In this thesis, we propose and discuss spatial discrete choice models that rely on random utility theory, as well as some limit theorems. Although several estimation methods already exist, such as likelihood maximization (LM) methods that use all the available information in the samples, we propose a generalized method of moments (GMM) approach to estimate the unknown parameters of these models. We start by recalling the theoretical results and estimation approaches for spatial discrete choice models that are essential in the other chapters. In Chapter 2, we provide a central limit theorem in order to prove the consistency and asymptotic normality of the estimators proposed in Chapters 3 and 4. Chapter 3 extends Klier and McMillen's method to a model that includes both a spatially lagged dependent variable and spatially lagged disturbance terms. Chapter 4 derives the multinomial case and spatially lagged panel data.