Theses on the topic « Algorithme de simulation stochastique »
Browse the 50 best theses for your research on the topic « Algorithme de simulation stochastique ».
Makhlouf, Azmi. « Régularité fractionnaire et analyse stochastique de discrétisations ; Algorithme adaptatif de simulation en risque de crédit ». Phd thesis, Grenoble INPG, 2009. http://www.theses.fr/2009INPG0154.
This thesis deals with three issues from numerical probability and mathematical finance. First, we study the L2-time regularity modulus of the Z-component of a Markovian BSDE with Lipschitz-continuous coefficients, but with irregular terminal function g. This modulus is linked to the approximation error of the Euler scheme. We show, in an optimal way, that the order of convergence is explicitly connected to the fractional regularity of g. Second, we propose a sequential Monte Carlo method in order to efficiently compute the price of a CDO tranche, based on sequential control variates. The recoveries are supposed to be i.i.d. random variables. Third, we analyze the tracking error related to the Delta-Gamma hedging strategy. The fractional regularity of the payoff function plays a crucial role in the choice of the trading dates, in order to achieve optimal rates of convergence.
Makhlouf, Azmi. « Régularité fractionnaire et analyse stochastique de discrétisations ; Algorithme adaptatif de simulation en risque de crédit ». Phd thesis, Grenoble INPG, 2009. http://tel.archives-ouvertes.fr/tel-00460269.
Phi, Tien Cuong. « Décomposition de Kalikow pour des processus de comptage à intensité stochastique ». Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4029.
The goal of this thesis is to construct algorithms able to simulate the activity of a neural network. The activity of the network can be modeled by the spike train of each neuron, represented by a multivariate point process. Most of the known approaches to simulating point processes encounter difficulties when the underlying network is large. In this thesis, we propose new algorithms using a new type of Kalikow decomposition. In particular, we present an algorithm to simulate the behavior of one neuron embedded in an infinite neural network without simulating the whole network. We focus on mathematically proving that our algorithm returns the right point processes and on studying its stopping condition. Then, a constructive proof shows that this new decomposition holds for various point processes. Finally, we propose algorithms that can be parallelized and that enable us to simulate a hundred thousand neurons in a complete interaction graph on a laptop computer. Most notably, the complexity of this algorithm seems linear with respect to the number of neurons in the simulation.
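For background, the classical way to simulate a point process with stochastic intensity is Ogata's thinning algorithm; a minimal sketch for a univariate Hawkes process is given below (the exponential kernel and its parameters are illustrative assumptions, not the Kalikow-decomposition algorithms of the thesis).

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_max, rng=None):
    """Ogata thinning for a univariate Hawkes process with
    intensity lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    rng = np.random.default_rng(rng)
    events, t = [], 0.0
    while t < t_max:
        # with an exponential kernel the intensity only decays between events,
        # so its current value is a valid upper bound until the next event
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_max:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:   # accept with probability lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

print(simulate_hawkes(mu=1.0, alpha=0.5, beta=1.5, t_max=10.0, rng=0))
```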
Panloup, Fabien. « Approximation récursive du régime stationnaire d'une équation différentielle stochastique avec sauts ». Paris 6, 2006. http://www.theses.fr/2006PA066397.
Panloup, Fabien. « Approximation récursive du régime stationnaire d'une Equation Differentielle Stochastique avec sauts ». Phd thesis, Université Pierre et Marie Curie - Paris VI, 2006. http://tel.archives-ouvertes.fr/tel-00120508.
Decreasing-step Euler schemes, either "exact" or "approximate", make it possible to efficiently simulate not only the invariant probability but also the global law of such a process in the stationary regime.
This work has various theoretical and practical applications, some of which are developed here (almost sure CLT for stable laws, a limit theorem for extreme values, option pricing for models with stationary stochastic volatility...).
Abdelghani, Maher. « Identification temporelle des structures : approche des algorithmes sous-espace dans l'espace état ». Montpellier 2, 1995. http://www.theses.fr/1995MON20185.
Reutenauer, Victor. « Algorithmes stochastiques pour la gestion du risque et l'indexation de bases de données de média ». Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4018/document.
This thesis addresses different problems of stochastic control and optimization that can be solved only through approximation. On the one hand, we develop methodologies aiming to reduce or suppress approximations in order to obtain more accurate, or even exact, solutions. On the other hand, we develop new approximation methodologies in order to solve larger-scale problems more quickly. We study numerical methods for simulating stochastic differential equations and for enhancing the computation of expectations. We develop quantization methods to build control variates and stochastic gradient methods to solve stochastic control problems. We are also interested in clustering methods linked to quantization, and in principal component analysis or data compression using neural networks. We study problems motivated by mathematical finance, such as stochastic control for the hedging of derivatives in incomplete markets, but also the management of huge media databases, commonly known as big data, in chapter 5. Theoretically, we propose upper bounds for the convergence of the numerical methods used. This is the case for optimal hedging in incomplete markets in chapter 3, but also for an extension of the Beskos-Roberts method of exact simulation of stochastic differential equations in chapter 4. We present an original application of the Karhunen-Loève decomposition as a control variate for the computation of expectations in chapter 2.
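As a reminder of the basic principle behind a control variate (independent of the quantization-based and Karhunen-Loève constructions studied in the thesis), here is a minimal Monte Carlo sketch; the integrand and the control below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
u = rng.uniform(size=n)

# Target: E[exp(U)] with U ~ Uniform(0,1); control variate: U, whose mean 1/2 is known.
f = np.exp(u)
c = u
beta = np.cov(f, c)[0, 1] / np.var(c, ddof=1)    # coefficient estimated from the sample
cv_estimate = np.mean(f - beta * (c - 0.5))      # same mean as plain MC, lower variance

print("plain MC :", f.mean())
print("with CV  :", cv_estimate, "(exact value:", np.e - 1, ")")
```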
Lemaire, Vincent. « Estimation récursive de la mesure invariante d'un processus de diffusion ». Phd thesis, Université de Marne la Vallée, 2005. http://tel.archives-ouvertes.fr/tel-00011281.
The main assumption on these solutions (diffusions) is the existence of a Lyapunov function guaranteeing a stability condition. By the ergodic theorem, we know that the empirical measures of the diffusion converge to an invariant measure. We study a similar convergence when the diffusion is discretized by an Euler scheme with decreasing step. We prove that the weighted empirical measures of this scheme converge to the invariant measure of the diffusion, and that it is possible to integrate exponential functions when the diffusion coefficient is sufficiently small. Moreover, for a more restricted class of diffusions, we prove the almost sure and Lp convergence of the Euler scheme to the diffusion.
We obtain convergence rates for the weighted empirical measures and give the parameters yielding an optimal rate. We conclude the study of this scheme in the presence of multiple invariant measures. This study is carried out in dimension 1 and allows us to highlight a link between Feller's classification and Lyapunov functions.
In the last part, we present a new adaptive algorithm allowing more general problems to be considered, such as Hamiltonian systems or monotone systems. The idea is to consider the empirical measures of an Euler scheme built from a sequence of adapted random steps dominated by a sequence decreasing to 0.
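A generic sketch of the weighted-empirical-measure idea for a decreasing-step Euler scheme (not the exact algorithm or assumptions of the thesis): for a one-dimensional double-well diffusion the invariant density is known explicitly, so the estimate can be checked against a simple quadrature.

```python
import numpy as np

def invariant_measure_estimate(f, drift, sigma, n_steps, x0=0.0, rng=None):
    """Weighted empirical measure of a decreasing-step Euler scheme:
    nu(f) ~ sum_k gamma_k f(X_{k-1}) / sum_k gamma_k, with gamma_k = 0.05 * k^(-1/3)."""
    rng = np.random.default_rng(rng)
    x, num, den = x0, 0.0, 0.0
    for k in range(1, n_steps + 1):
        gamma = 0.05 * k ** (-1.0 / 3.0)
        num += gamma * f(x)
        den += gamma
        x += drift(x) * gamma + sigma * np.sqrt(gamma) * rng.standard_normal()
    return num / den

# Double-well diffusion dX = -(X^3 - X) dt + sigma dW; estimate E_nu[X^2].
sigma = 1.0
est = invariant_measure_estimate(lambda x: x * x, lambda x: -(x**3 - x),
                                 sigma, 200_000, rng=1)

# Reference value by quadrature of the known invariant density ~ exp(-2 V(x) / sigma^2).
xs = np.linspace(-5, 5, 4001)
w = np.exp(-2 * (xs**4 / 4 - xs**2 / 2) / sigma**2)
print("Euler estimate:", est, " quadrature:", (xs**2 * w).sum() / w.sum())
```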
Bouttier, Clément. « Optimisation globale sous incertitude : algorithmes stochastiques et bandits continus avec application aux performances avion ». Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30176.
This PhD thesis is dedicated to the theoretical and numerical analysis of stochastic algorithms for the stochastic flight planning problem. Optimizing fuel consumption and flight time is a key factor for airlines to be competitive. These companies thus look for flight optimization tools with higher and higher accuracy requirements. However, the methodologies currently available for flight planning are based on simplified aircraft performance models. In this PhD, we propose to fulfill the accuracy requirements by adapting our methodology to both the constraints induced by the utilization of an industrial aircraft performance computation code and the consideration of the uncertainty about the real flight conditions, i.e., air traffic and weather conditions. Our proposal is supported by three main contributions. First, we design a numerical framework for benchmarking aircraft trajectory optimization tools. This provides us with a unified testing procedure for all aircraft performance models. Second, we propose and study (both theoretically and numerically) two global derivative-free algorithms for stochastic optimization problems. The first approach, the NSA algorithm, is highly generic and does not use any prior knowledge about the aircraft performance model. It is an extension of the simulated annealing algorithm adapted to noisy cost functions. We provide an upper bound on the convergence speed of NSA to globally optimal solutions. The second approach, the SPY algorithm, is a Lipschitz bandit algorithm derived from Piyavskii's algorithm. It is more specific as it requires the knowledge of some Lipschitz regularity property around the optimum, but it is therefore far more efficient. We also provide a theoretical study of this algorithm through an upper bound on its simple regret.
Berro, Julien. « Du monomère à la cellule : modèle de la dynamique de l'actine ». Université Joseph Fourier (Grenoble), 2006. http://www.theses.fr/2006GRE10226.
Actin filaments are biological polymers that are very abundant in the eukaryotic cytoskeleton. Their self-assembly and self-organization are highly dynamic and are essential in cell motility and membrane deformations. In this thesis we propose three approaches, on different scales, in order to shed light on mechanisms for the regulation of assembly, organization and force production by biological filaments such as actin filaments. First, we developed a stochastic multi-agent simulation tool for studying biological filaments, taking into consideration interactions at the nanometer scale. This new tool allowed us to bring out the acceleration of actin monomer turnover due to fragmentation of filaments by ADF/Cofilin and the symmetry breaking induced by this protein, which agree well with experimental data from L. Blanchoin's team (CEA Grenoble). Secondly, we studied a continuous model for filament buckling, providing, on the one hand, an estimation of forces exerted in vitro or in vivo with respect to extremity attachment conditions and, on the other hand, limit conditions for buckling. Thirdly, we developed a framework for organizing kinetic biochemical data from reaction networks, which was used for the regulation of actin polymerization. These three modeling approaches improved the knowledge of actin dynamics and are useful complements to experimental approaches in biology.
Hessami, Mohammad Hessam. « Modélisation multi-échelle et hybride des maladies contagieuses : vers le développement de nouveaux outils de simulation pour contrôler les épidémies ». Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAS036/document.
Theoretical studies in epidemiology mainly use differential equations, often under unrealistic assumptions (e.g. spatially homogeneous populations), to study the development and spreading of contagious diseases. Such models are not, however, well adapted to understanding epidemiological processes at different scales, nor are they efficient for correctly predicting epidemics. Yet, such models should be closely related to the social and spatial structure of populations. In the present thesis, we propose a series of new models in which different levels of spatiality (e.g. local structure of the population, in particular group dynamics, spatial distribution of individuals in the environment, role of resistant people, etc.) are taken into account, to explain and predict how communicable diseases develop and spread at different scales, even at the scale of large populations. Furthermore, the manner in which our models are parametrised allows them to be connected together so as to describe the epidemiological process at a large scale (population of a big town, country...) and, at the same time, with accuracy in limited areas (office buildings, schools). We first succeed in including the notion of groups in SIR (Susceptible, Infected, Recovered) differential equation systems by rewriting the SIR dynamics in the form of an enzymatic reaction in which group-complexes of different composition in S, I and R individuals form and where R people behave as non-competitive inhibitors. Then, global group dynamics simulated by stochastic algorithms in a homogeneous space, as well as emerging ones obtained in multi-agent systems, are coupled to such SIR epidemic models. As our group-based models provide fine-grain information (i.e. microscopic resolution of time, space and population), we propose an analysis of the criticality of epidemiological processes. We think that diseases in a given social and spatial environment present characteristic signatures and that such measurements could allow the identification of the factors that modify their dynamics. We aim here to extract the essence of real epidemiological systems by using various methods based on different computer-oriented approaches. As our models can take into account individual behaviours and group dynamics, they are able to use big-data information yielded by smart-phone technologies and social networks. As a long-term objective derived from the present work, one can expect good predictions of the development of epidemics, but also a tool to reduce epidemics by guiding new environmental architectures and by changing human health-related behaviours.
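For reference, the stochastic simulation of a well-mixed SIR dynamics of this kind is typically done with Gillespie's algorithm; a minimal, self-contained sketch (with illustrative rates and population sizes, not the group-structured models of the thesis) is:

```python
import numpy as np

def gillespie_sir(S, I, R, beta, gamma, t_max, rng=None):
    """Gillespie SSA for SIR: infection rate beta*S*I/N, recovery rate gamma*I."""
    rng = np.random.default_rng(rng)
    N, t, path = S + I + R, 0.0, [(0.0, S, I, R)]
    while t < t_max and I > 0:
        a_inf = beta * S * I / N
        a_rec = gamma * I
        a_tot = a_inf + a_rec
        t += rng.exponential(1.0 / a_tot)          # time to next event
        if rng.uniform() < a_inf / a_tot:          # choose which event fires
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        path.append((t, S, I, R))
    return path

trajectory = gillespie_sir(S=990, I=10, R=0, beta=0.3, gamma=0.1, t_max=160.0, rng=7)
print("final state (t, S, I, R):", trajectory[-1])
```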
Mercier, Quentin. « Optimisation multicritère sous incertitudes : un algorithme de descente stochastique ». Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4076/document.
This thesis deals with unconstrained multiobjective optimization when the objectives are written as expectations of random functions. The randomness is modelled through random variables and we consider that it does not impact the optimization variables of the problem. A descent algorithm is proposed which gives the Pareto solutions without having to estimate the expectations. Using convex analysis results, it is possible to construct a common descent vector that is a descent vector for all the objectives simultaneously, for a given draw of the random variables. An iterative sequence is then built which consists in descending along this common descent vector calculated at the current point and for a single independent draw of the random variables. This construction avoids the costly estimation of the expectations at each step of the algorithm. It is then possible to prove the mean square and almost sure convergence of the sequence towards Pareto solutions of the problem and, at the same time, to obtain a convergence rate result when the step size sequence is well chosen. After having proposed some numerical enhancements of the algorithm, it is tested on multiple test cases against two classical algorithms of the literature. The results for the three algorithms are then compared using two measures that have been devised for multiobjective optimization and analysed through performance profiles. Methods are then proposed to handle two types of constraint and are illustrated on mechanical structure optimization problems. The first method consists in penalising the objective functions using exact penalty functions when the constraint is deterministic. When the constraint is expressed as a probability, the constraint is replaced by an additional objective. The probability is then reformulated as an expectation of an indicator function and this new problem is solved using the algorithm proposed in the thesis without having to estimate the probability during the optimization process.
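To illustrate the common-descent-vector idea in the simplest setting (a two-objective sketch under simplifying assumptions, not the algorithm as specified in the thesis): for two sampled gradients, the minimum-norm element of their convex hull is a descent direction for both objectives whenever it is nonzero.

```python
import numpy as np

def common_descent_step(x, grad1, grad2, step, rng):
    """One stochastic step: draw the noise once, take the minimum-norm convex
    combination of the two sampled gradients as the common descent direction."""
    xi = rng.standard_normal(x.shape)      # single draw shared by both objectives
    g1, g2 = grad1(x, xi), grad2(x, xi)
    diff = g1 - g2
    lam = np.clip(np.dot(g2 - g1, g2) / (np.dot(diff, diff) + 1e-12), 0.0, 1.0)
    d = lam * g1 + (1.0 - lam) * g2        # min-norm point of the segment [g1, g2]
    return x - step * d

# Toy objectives: f1 = E||x - xi||^2, f2 = E||x - (a + xi)||^2 (Pareto set = segment [0, a]).
rng = np.random.default_rng(0)
a = np.array([2.0, 0.0])
x = np.array([5.0, 5.0])
for k in range(1, 5001):
    x = common_descent_step(x,
                            lambda x, xi: 2 * (x - xi),
                            lambda x, xi: 2 * (x - a - xi),
                            step=1.0 / (100 + k), rng=rng)
print("approximate Pareto point:", x)
```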
Hurbain, Julien. « Modélisation de la réponse métabolique à un stress oxydant : rôle des régulations ». Electronic Thesis or Diss., Université de Lille (2022-....), 2022. http://www.theses.fr/2022ULILR045.
Living cells, such as mammalian cells in particular, are continuously exposed to multiple and varied types of stress. These stresses can perturb cellular homeostasis and cause damage to cellular components, which can lead to several types of diseases. This is particularly the case for a change of cellular redox state called oxidative stress, induced by an excessive production or insufficient consumption of reactive oxygen species such as hydrogen peroxide (H2O2). Cells have developed efficient defence mechanisms against oxidative stress that involve anti-oxidant systems such as glutathione, which reduce the oxidizing molecules, but also metabolic pathways such as the Pentose Phosphate Pathway (PPP) and glycolysis. These metabolic pathways are known to reroute the carbon flux resources from glycolysis toward the PPP, which induces the high NADPH recycling required for an efficient detoxification rate of the anti-oxidant systems. It remains however unclear how regulatory mechanisms (i) contribute to such reallocation of metabolic flux resources during oxidative stress and (ii) give rise to observed adaptation profiles of intracellular H2O2 concentrations. In the thesis, the role of regulations in the metabolic response to oxidative stress is addressed using a comprehensive kinetic modeling framework. First, a model is built from a set of metabolomics and 13C labeling data, using conventional parameter estimation methods but also a novel metabolic flux analysis technique based on a stochastic simulation algorithm. Systematic analysis of the model reveals that many metabolic inhibitions, especially on G6PD, PGI and GAPD, can favour flux rerouting for NADPH production. In particular, we show that all these regulations work in a dose-dependent and complementary manner, which explains some paradoxes and controversies, and is consistent with observed adaptation phenotypes. A more phenomenological model has also been developed to show how such an adaptation phenotype could contribute to cell-fate heterogeneity, such as fractional killing, as a long-term outcome of oxidative stress.
Geisweiller, Nil. « Étude sur la modélisation et la vérification probabiliste d'architectures de simulations distribuées pour l'évaluation de performances ». Toulouse, ENSAE, 2006. http://www.theses.fr/2006ESAE0003.
Vagne, Quentin. « Stochastic models of intra-cellular organization : from non-equilibrium clustering of membrane proteins to the dynamics of cellular organelles ». Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC205/document.
This thesis deals with cell biology, and particularly with the internal organization of eukaryotic cells. Although many of the molecular players contributing to the intra-cellular organization have been identified, we are still far from understanding how the complex and dynamical intra-cellular architecture emerges from the self-organization of individual molecules. One of the goals of the different studies presented in this thesis is to provide a theoretical framework to understand such self-organization. We cover specific problems at different scales, ranging from membrane organization at the nanometer scale to whole organelle structure at the micron scale, using analytical work and stochastic simulation algorithms. The text is organized to present the results from the smallest to the largest scales. In the first chapter, we study the membrane organization of a single compartment by modeling the dynamics of membrane heterogeneities. In the second chapter we study the dynamics of one membrane-bound compartment exchanging vesicles with the external medium. Still in the same chapter, we investigate the mechanisms by which two different compartments can be generated by vesicular sorting. Finally in the third chapter, we develop a global model of organelle biogenesis and dynamics in the specific context of the Golgi apparatus
Terrier, Pierre. « Algorithmes stochastiques pour simuler l'évolution microstructurale d'alliages ferritiques : une étude de la dynamique d'amas ». Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1135/document.
We study the ageing of materials at a microstructural level. In particular, defects such as vacancies, interstitials and solute atoms are described by a model called Cluster Dynamics (CD), which characterizes the evolution of the concentrations of such defects over periods of time as long as decades. CD is a set of ordinary differential equations (ODEs), which might contain up to hundreds of billions of equations. Therefore, classical methods used for solving systems of ODEs are not suitable in terms of efficiency. We first show that CD is well-posed and that physical properties such as the conservation of matter and the preservation of the sign of the solution are verified. We also study an approximation of CD, namely the Fokker-Planck approximation, which is a partial differential equation. We quantify the error between CD and its approximation. We then introduce an algorithm for simulating CD. The algorithm is based on a splitting of the dynamics and couples a deterministic and a stochastic approach to CD. The stochastic approach interprets CD directly as a jump process, or its approximation as a Langevin process. The aim is to reduce the number of equations to solve, hence reducing the computation time. We finally apply this algorithm to physical models. The interest of this approach is validated on complex models. Moreover, we show that CD can be efficiently improved thanks to the versatility of the algorithm.
Frantz, Yves. « Simulation stochastique des réseaux karstiques ». Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0215.
Despite intensive explorations by speleologists, karstic networks remain only partially described as many conduits are not accessible to humans. The classical exploration techniques produce sparse data subject to uncertainties concerning the conduit position and their dimensions, which are essential parameters for flow simulations. Stochastic simulations make it possible to better handle and assess these uncertainties by offering several equally probable karstic system representations. The ideal simulator should allow for the construction of tridimensional karstic drain networks, respecting the field observations (karstification markers), the knowledge brought by tracer tests, the information collected in the accessible parts of the network and that obtained by the study of other networks. In this context, this PhD thesis offers 3 main contributions. The first contribution is the statistical analysis of a database of 49 karstic networks. It focuses on the study of conduit geometry, through the analysis of two metrics: the equivalent radius and the width-height ratio. No generic statistical law describing the network geometry was found. Nonetheless, the spatial variability of the geometrical properties at different scales was characterized, mostly through the development of 1D-curvilinear variograms. The widespread hypothesis of a hierarchical organization of the conduit geometries has also been analysed and rejected. The second contribution is the development of two methods allowing stochastic simulations of properties along karstic networks, based on the results of the statistical analysis. The first method focuses on the reproduction of the property variability at the network scale, while the second one focuses on the reproduction of the variability within and between the network branches. Both are based on the Sequential Gaussian Simulation methods and are adapted to 1D-curvilinear objects. The third contribution is the prototype of a method aiming to stochastically simulate discrete karstic networks as graphs (known as network skeletons). We hope that once completed, it would allow the simulation of different network types, while taking directly into account field data and geological information. It is divided into three main steps: i) the generation of a point cloud, ii) the computation of the point connectivity and iii) their connection to create the skeleton of the karstic network. These contributions open new prospects regarding the simulation of karstic drain networks usable for flow simulation (e.g., SWMM, Epanet, Modflow-CFP), which should allow a better characterization of the associated flows.
Picot, Romain. « Amélioration de la fiabilité numérique de codes de calcul industriels ». Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS242.
Many studies are devoted to the performance of numerical simulations. However, it is also important to take into account the impact of rounding errors on the results produced. These rounding errors can be estimated with Discrete Stochastic Arithmetic (DSA), implemented in the CADNA library. Compensated algorithms improve the accuracy of results without changing the numerical types used. They have been designed to be generally executed with rounding to nearest. We have established error bounds for these algorithms with directed rounding and shown that they can be used successfully with the random rounding mode of DSA. We have also studied the impact of a target precision of the results on the numerical types of the different variables. We have developed the PROMISE tool which automatically performs these type changes while validating the results thanks to DSA. The PROMISE tool has thus provided new configurations of types combining single and double precision in various programs, and in particular in the MICADO code developed at EDF. We have shown how to estimate with DSA the rounding errors generated in quadruple precision. We have proposed a version of CADNA that integrates quadruple precision and that allowed us in particular to validate the computation of multiple roots of polynomials. Finally, we have used this new version of CADNA in the PROMISE tool so that it can provide configurations with three types (single, double and quadruple precision).
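As a concrete example of what a compensated algorithm does (a textbook Kahan-summation sketch, unrelated to the CADNA, PROMISE or MICADO codes themselves), the rounding error of each addition is carried along and re-injected at the next step:

```python
import numpy as np

def naive_sum32(values):
    s = np.float32(0.0)
    for v in values:
        s = s + v                      # each addition rounds to float32
    return s

def kahan_sum32(values):
    """Compensated (Kahan) summation: carry the rounding error of each addition."""
    s = np.float32(0.0)
    c = np.float32(0.0)                # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = s + y                      # low-order bits of y may be lost here...
        c = (t - s) - y                # ...and are recovered into c
        s = t
    return s

x = np.full(1_000_000, np.float32(0.1))
print("naive float32 sum :", naive_sum32(x))      # noticeably off
print("Kahan float32 sum :", kahan_sum32(x))      # close to the reference
print("float64 reference :", float(x.astype(np.float64).sum()))
```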
Gavra, Iona Alexandra. « Algorithmes stochastiques d'optimisation sous incertitude sur des structures complexes : convergence et applications ». Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30141/document.
The main topics of this thesis involve the development of stochastic algorithms for optimization under uncertainty, the study of their theoretical properties and applications. The proposed algorithms are modified versions of simulated annealing that use only unbiased estimators of the cost function. We study their convergence using the tools developed in the theory of Markov processes: we use properties of infinitesimal generators and functional inequalities to measure the distance between their probability law and a target one. The first part is concerned with quantum graphs endowed with a probability measure on their vertex set. Quantum graphs are continuous versions of undirected weighted graphs. The starting point of the present work was the question of finding Fréchet means on such a graph. The Fréchet mean is an extension of the Euclidean mean to general metric spaces and is defined as an element that minimizes the sum of weighted square distances to all vertices. Our method relies on a Langevin formulation of a noisy simulated annealing dealt with using homogenization. In order to establish the convergence in probability of the process, we study the evolution of the relative entropy of its law with respect to a convenient Gibbs measure. Using functional inequalities (Poincaré and Sobolev) and Gronwall's Lemma, we then show that the relative entropy goes to zero. We test our method on some real data sets and propose a heuristic method to adapt the algorithm to huge graphs, using a preliminary clustering. In the same framework, we introduce a definition of principal component analysis for quantum graphs. This implies, once more, a stochastic optimization problem, this time on the space of the graph's geodesics. We suggest an algorithm for finding the first principal component and conjecture the convergence of the associated Markov process to the wanted set. In the second part, we propose a modified version of the simulated annealing algorithm for solving a stochastic global optimization problem on a finite space. Our approach is inspired by the general field of Monte Carlo methods and relies on a Markov chain whose probability transition at each step is defined with the help of mini-batches of increasing (random) size. We prove the algorithm's convergence in probability towards the optimal set, provide a convergence rate and its optimized parametrization to ensure a minimal number of evaluations for a given accuracy and a confidence level close to 1. This work is completed with a set of numerical experiments and the assessment of the practical performance both on benchmark test cases and on real world examples.
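A minimal sketch of the second idea (simulated annealing on a finite space where the expected cost is estimated with mini-batches of increasing size; the cooling schedule, batch growth and toy problem below are illustrative, not the tuned parametrization derived in the thesis):

```python
import numpy as np

def minibatch_annealing(sample_cost, neighbor, x0, n_iter, rng=None):
    """Simulated annealing where E[cost] is estimated by averaging an
    increasing (here linearly growing) number of noisy cost samples."""
    rng = np.random.default_rng(rng)
    x = x0
    for k in range(1, n_iter + 1):
        temp = 1.0 / np.log(1.0 + k)               # logarithmic cooling schedule
        batch = 1 + k // 10                        # mini-batch size grows with k
        y = neighbor(x, rng)                       # propose a neighbouring state
        fx = np.mean([sample_cost(x, rng) for _ in range(batch)])
        fy = np.mean([sample_cost(y, rng) for _ in range(batch)])
        if fy <= fx or rng.uniform() < np.exp(-(fy - fx) / temp):
            x = y                                  # Metropolis acceptance rule
    return x

# Toy problem on {0,...,49}: minimize E[(i - 17)^2 + noise].
cost = lambda i, rng: (i - 17) ** 2 + 5.0 * rng.standard_normal()
step = lambda i, rng: int(np.clip(i + rng.choice([-1, 1]), 0, 49))
print("found state:", minibatch_annealing(cost, step, x0=0, n_iter=3000, rng=3))
```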
Viseur, Sophie. « Simulation stochastique basée-objet de chenaux ». Vandoeuvre-les-Nancy, INPL, 2001. http://www.theses.fr/2001INPL036N.
Benoit, Anne. « Méthodes et algorithmes pour l'évaluation des performances des systèmes ». Phd thesis, Grenoble INPG, 2003. http://tel.archives-ouvertes.fr/tel-00004361.
Reuillon, Romain. « Simulations stochastiques en environnements distribués : application aux grilles de calcul ». Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2008. http://tel.archives-ouvertes.fr/tel-00731242.
Xayaphoummine, Alain. « Simulations et expériences sur le repliement de l'ARN : prédictions statistiques des pseudonœuds in silico et réalisation de commutateurs ARN par transcription in vitro ». Université Louis Pasteur (Strasbourg) (1971-2008), 2004. https://publication-theses.unistra.fr/public/theses_doctorat/2004/XAYAPHOUMMINE_Alain_2004.pdf.
Minvielle-Larrousse, Pierre. « Méthodes de simulation stochastique pour le traitement de l’information ». Thesis, Pau, 2019. http://www.theses.fr/2019PAUU3005.
When a quantity of interest is not directly observed, it is usual to observe other quantities that are linked to it by physical laws. They can provide information about the quantity of interest if one is able to solve the inverse problem, often ill-posed, and infer its value. Bayesian inference is a powerful inversion tool that requires the computation of high-dimensional integrals. Sequential Monte Carlo (SMC) methods, a.k.a. interacting particle methods, are a type of Monte Carlo method able to sample from a sequence of probability densities of growing dimension. They have many applications, for instance in filtering, global optimization or rare event simulation. The work has focused in particular on the extension of SMC methods in a dynamic context where the system, governed by a hidden Markov process, is also determined by static parameters that we seek to estimate. In sequential Bayesian estimation, the determination of fixed parameters causes particular difficulties: such a process is non-ergodic, the system not forgetting its initial conditions. It is shown how it is possible to overcome these difficulties in an application of tracking and identification of geometric shapes by a CCD digital camera. Markov Chain Monte Carlo (MCMC) sampling steps are introduced to diversify the samples without altering the posterior distribution. For another material control application, which mixes static and dynamic parameters, we proposed an original offline approach. It consists of a Particle Marginal Metropolis-Hastings (PMMH) algorithm that integrates Rao-Blackwellized SMC, based on a bank of interacting Ensemble Kalman filters. Other information processing work has been conducted: particle filtering for atmospheric reentry vehicle tracking, 3D radar imaging by sparse regularization and image registration by mutual information.
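For context, the simplest SMC method mentioned here is the bootstrap particle filter; a minimal sketch on a scalar linear-Gaussian state-space model (chosen only so that the code stays short, not one of the applications of the thesis) is:

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles, rng=None,
                              a=0.9, sigma_x=1.0, sigma_y=0.5):
    """Bootstrap particle filter for x_t = a*x_{t-1} + N(0, sigma_x^2),
    y_t = x_t + N(0, sigma_y^2). Returns the filtering means."""
    rng = np.random.default_rng(rng)
    x = rng.normal(0.0, 1.0, n_particles)                     # initial particle cloud
    means = []
    for y in obs:
        x = a * x + sigma_x * rng.standard_normal(n_particles)   # propagate
        logw = -0.5 * ((y - x) / sigma_y) ** 2                    # weight by likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.dot(w, x))
        idx = rng.choice(n_particles, size=n_particles, p=w)      # multinomial resampling
        x = x[idx]
    return np.array(means)

# Simulate synthetic data from the model, then filter it.
rng = np.random.default_rng(0)
x_t, ys = 0.0, []
for _ in range(50):
    x_t = 0.9 * x_t + rng.standard_normal()
    ys.append(x_t + 0.5 * rng.standard_normal())
print("first filtered means:", bootstrap_particle_filter(ys, n_particles=2000, rng=1)[:5])
```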
Tauvel, Claire. « Optimisation stochastique à grande échelle ». Phd thesis, Grenoble 1, 2008. http://www.theses.fr/2008GRE10305.
In this thesis we study iterative algorithms for solving constrained and unconstrained convex optimization problems, variational inequalities with monotone operators, and saddle point problems. We study these problems when the dimension of the search space is high and when the values of the functions of interest are unknown, so that we can only deal with a stochastic oracle. The algorithms we study are stochastic adaptations of two algorithms: the first one is a variant of the mirror descent algorithm proposed by Nemirovski and Yudin, and the second one a variant of the dual extrapolation algorithm by Nesterov. For both of them, we provide bounds for the expected value and bounds for moderate deviations of the approximation error under different regularity hypotheses for all the unconstrained problems we study, and we propose adaptive versions of the algorithms in order to dispense with the knowledge of some parameters that depend on the problem studied and are unavailable in practice. Finally, we show how to solve constrained stochastic optimization problems thanks to an auxiliary algorithm inspired by Newton descent and thanks to the results we obtained for saddle point problems.
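As an illustration of stochastic mirror descent (a standard entropic-geometry sketch on the probability simplex, with an arbitrary toy objective rather than the problems treated in the thesis):

```python
import numpy as np

def stochastic_mirror_descent(grad_oracle, d, n_iter, step, rng=None):
    """Entropic mirror descent on the simplex: multiplicative update
    x <- x * exp(-step * g) / normalization, with stochastic gradients g."""
    rng = np.random.default_rng(rng)
    x = np.full(d, 1.0 / d)                  # start at the simplex barycenter
    avg = np.zeros(d)
    for k in range(1, n_iter + 1):
        g = grad_oracle(x, rng)              # unbiased gradient estimate
        x = x * np.exp(-step * g)
        x /= x.sum()                         # Bregman projection = renormalization
        avg += (x - avg) / k                 # averaged iterate (the usual output)
    return avg

# Toy objective on the simplex: f(x) = <c, x> + noise, minimized at a vertex.
c = np.array([0.3, 0.1, 0.7, 0.5])
oracle = lambda x, rng: c + 0.2 * rng.standard_normal(c.size)
print(stochastic_mirror_descent(oracle, d=4, n_iter=5000, step=0.05, rng=2))
```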
Tauvel, Claire. « Optimisation stochastique à grande échelle ». Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00364777.
Saadane, Sofiane. « Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire ». Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30203/document.
In this thesis, we study several stochastic algorithms with different purposes, and this is why we start this manuscript by giving historical results to define the framework of our work. Then, we study a bandit algorithm due to the work of Narendra and Shapiro, whose objective was to determine, among several sources, which one is the most profitable without spending too much time on the wrong ones. Our goal is to understand the weakness of this algorithm in order to propose an optimal procedure for a quantity measuring the performance of a bandit algorithm, the regret. In our results, we propose an algorithm called NS over-penalized which allows a minimax regret bound to be obtained. A second work is to understand the convergence in law of this process. The particularity of the algorithm is that it converges in law toward a non-diffusive process, which makes the study more intricate than in the standard case. We use coupling techniques to study this process and propose rates of convergence. The second work of this thesis falls within the scope of the optimization of a function using a stochastic algorithm. We study a stochastic version of the so-called heavy ball method with friction. The particularity of the algorithm is that its dynamics is based on the whole past of the trajectory. The procedure relies on a memory term which dictates the behavior of the procedure by the form it takes. In our framework, two types of memory are investigated: polynomial and exponential. We start with general convergence results in the non-convex case. In the case of strongly convex functions, we provide upper bounds for the rate of convergence. Finally, a convergence in law result is given in the case of exponential memory. The third part is about the McKean-Vlasov equations, which were first introduced by Anatoly Vlasov and first studied by Henry McKean in order to model the distribution function of plasma. Our objective is to propose a stochastic algorithm to approach the invariant distribution of the McKean-Vlasov equation. Methods in the case of diffusion processes (and some more general processes) are known, but the particularity of the McKean-Vlasov process is that it is strongly non-linear. Thus, we have to develop an alternative approach. We introduce the notion of asymptotic pseudotrajectory in order to get an efficient procedure.
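A minimal sketch of a stochastic heavy-ball-with-friction recursion (a generic discretization with decreasing steps; the step sequence, friction coefficient and test function are illustrative, not the thesis's parametrization or memory terms):

```python
import numpy as np

def stochastic_heavy_ball(grad_oracle, x0, n_iter, rng=None,
                          gamma0=0.5, alpha=3.0):
    """x_{k+1} = x_k + g_k * v_k,  v_{k+1} = v_k - g_k * (alpha * v_k + grad_k),
    with decreasing steps g_k and noisy gradients (alpha = friction)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for k in range(1, n_iter + 1):
        g_k = gamma0 / k ** 0.75               # decreasing step sequence
        grad = grad_oracle(x, rng)             # unbiased gradient estimate
        x = x + g_k * v
        v = v - g_k * (alpha * v + grad)       # friction damps the velocity
    return x

# Strongly convex toy case: f(x) = 0.5 * ||x - b||^2 observed with Gaussian noise.
b = np.array([1.0, -2.0, 3.0])
oracle = lambda x, rng: (x - b) + 0.5 * rng.standard_normal(x.size)
print(stochastic_heavy_ball(oracle, x0=np.zeros(3), n_iter=20000, rng=5))
```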
Cartier, Manuel. « La dynamique de l'adaptation d'industries : simulation par algorithme génétique ». Paris 9, 2003. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2003PA090057.
Ben, Hamida Elyiès. « Modélisation stochastique et simulation des réseaux sans fil multi-sauts ». Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0063/these.pdf.
A wireless multi-hop network is a self-organizing network where entities communicate wirelessly without the need of a centralized base station. The end-to-end connection is accomplished through multi-hop communication. This thesis relates to the problem of modeling and simulation of wireless multi-hop networks and is divided into three main parts. First, we address the problem of the design of neighbor discovery protocols. We propose a stochastic modeling of the network, the nodes and the radio channel. We then analytically analyse the impact of the physical layer modeling on the performance of hello protocols and we propose a method to adapt the protocol parameters to meet application constraints. A real scenario from the MOSAR project is analyzed. In the second part, we consider the problem of simulation of wireless multi-hop networks. We first provide a detailed comparative study of various existing simulators and we introduce the WSNet simulation environment. Using WSNet we investigate the impact of the physical layer modeling on the behavior of high-level protocols. Finally, in the last part, we address the problem of data dissemination in wireless sensor networks with mobile sinks. We introduce the LBDD protocol which is based on the concept of virtual infrastructure. We then evaluate and compare LBDD to different approaches using theoretical analysis and simulation
Jourdan, Laetitia. « Métaheuristiques Coopératives : du déterministe au stochastique ». Habilitation à diriger des recherches, Université des Sciences et Technologie de Lille - Lille I, 2010. http://tel.archives-ouvertes.fr/tel-00523274.
Chatenet-Laurent, Nathalie. « Algorithme d'apprentissage de la politique optimale d'un processus stochastique : application à un réseau d'alimentation en eau potable ». Bordeaux 1, 1997. http://www.theses.fr/1997BOR10540.
Nguyen, Thi Thao. « Approximation et simulation d'équations différentielles stochastiques singulières ». Orléans, 2003. http://www.theses.fr/2003ORLE2032.
Chotin-Avot, Roselyne. « Architectures matérielles pour l'arithmétique stochastique discrète ». Paris 6, 2003. http://hal.upmc.fr/tel-01267458.
Fallot, Pierre. « Etude d'un modèle stochastique du rayonnement solaire ». Grenoble 1, 1992. http://www.theses.fr/1992GRE10146.
Pereira, De Oliveira Luís Carlos. « Développement d'une méthodologie de modélisation cinétique de procédés de raffinage traitant des charges lourdes ». Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2013. http://tel.archives-ouvertes.fr/tel-00839871.
Sbaï, Mohamed. « Modélisation de la dépendance et simulation de processus en finance ». Thesis, Paris Est, 2009. http://www.theses.fr/2009PEST1046/document.
The first part of this thesis deals with probabilistic numerical methods for simulating the solution of a stochastic differential equation (SDE). We start with the algorithm of Beskos et al. [13] which allows exact simulation of the solution of a one-dimensional SDE. We present an extension for the exact computation of expectations and we study the application of these techniques to the pricing of Asian options in the Black & Scholes model. Then, in the second chapter, we propose and study the convergence of two discretization schemes for a family of stochastic volatility models. The first one is well adapted to the pricing of vanilla options and the second one is efficient for the pricing of path-dependent options. We also study the particular case of an Ornstein-Uhlenbeck process driving the volatility and we exhibit a third discretization scheme which has better convergence properties. Finally, in the third chapter, we tackle the trajectorial weak convergence of the Euler scheme by providing a simple proof for the estimation of the Wasserstein distance between the solution and its Euler scheme, uniformly in time. The second part of the thesis is dedicated to the modelling of dependence in finance through two examples: the joint modelling of an index together with its composing stocks, and intensity-based credit portfolio models. In the fourth chapter, we propose a new modelling framework in which the volatility of an index and the volatilities of its composing stocks are connected. When the number of stocks is large, we obtain a simplified model consisting of a local volatility model for the index and a stochastic volatility model for the stocks composed of an intrinsic part and a systemic part driven by the index. We study the calibration of these models and show that it is possible to fit the market prices of both the index and the stocks. Finally, in the last chapter of the thesis, we define an intensity-based credit portfolio model. In order to obtain stronger dependence levels between rating transitions, we extend it by introducing an unobservable random process (frailty) which acts multiplicatively on the intensities of the firms of the portfolio. Our approach is fully historical and we estimate the parameters of our model from past rating transitions using maximum likelihood techniques.
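For illustration, pricing an arithmetic-average Asian call in the Black & Scholes model with a (log-)Euler scheme and plain Monte Carlo looks as follows (a generic sketch with made-up market parameters, not the exact-simulation techniques of the thesis):

```python
import numpy as np

def asian_call_mc(s0, r, sigma, T, strike, n_steps, n_paths, rng=None):
    """Monte Carlo price of an arithmetic-average Asian call under Black & Scholes,
    the asset being discretized on a regular grid with a log-Euler scheme."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    avg = np.exp(log_s).mean(axis=1)                 # arithmetic average over the grid
    payoff = np.exp(-r * T) * np.maximum(avg - strike, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

price, stderr = asian_call_mc(s0=100, r=0.03, sigma=0.2, T=1.0,
                              strike=100, n_steps=252, n_paths=100_000, rng=11)
print(f"Asian call price ~ {price:.3f} +/- {1.96 * stderr:.3f}")
```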
Culioli, Jean-Christophe. « Algorithmes de decomposition/coordination en optimisation stochastique ». Paris, ENMP, 1987. http://www.theses.fr/1987ENMP0059.
Diabaté, Modibo. « Modélisation stochastique et estimation de la croissance tumorale ». Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM040.
This thesis is about the mathematical modeling of cancer dynamics; it is divided into two research projects. In the first project, we estimate the parameters of the deterministic limit of a stochastic process modeling the dynamics of melanoma (skin cancer) treated by immunotherapy. The estimation is carried out with a nonlinear mixed-effect statistical model and the SAEM algorithm, using real data of tumor size. With this mathematical model that fits the data well, we evaluate the relapse probability of melanoma (using the Importance Splitting algorithm), and we optimize the treatment protocol (doses and injection times). In the second project, we propose a likelihood approximation method based on an approximation of the Belief Propagation algorithm by the Expectation-Propagation algorithm, for a diffusion approximation of the melanoma stochastic model, noisily observed in a single individual. Since this diffusion approximation (defined by a stochastic differential equation) has no analytical solution, we approximate its solution using an Euler method (after testing the Euler method on the Ornstein-Uhlenbeck diffusion process). Moreover, a moment approximation method is used to handle the multidimensionality and the non-linearity of the melanoma mathematical model. With the likelihood approximation method, we tackle the problem of parameter estimation in Hidden Markov Models.
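As a reminder of the kind of numerical test mentioned above, here is a minimal Euler-Maruyama discretization of an Ornstein-Uhlenbeck process, compared with its exact Gaussian law at the final time (parameters are illustrative, not those of the melanoma model):

```python
import numpy as np

def euler_maruyama_ou(theta, mu, sigma, x0, T, n_steps, rng=None):
    """Euler-Maruyama scheme for dX_t = theta*(mu - X_t) dt + sigma dW_t."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

theta, mu, sigma, x0, T = 2.0, 1.0, 0.5, 0.0, 5.0
path = euler_maruyama_ou(theta, mu, sigma, x0, T, n_steps=1000, rng=4)

# Exact moments of X_T for comparison with the scheme's endpoint.
mean_T = mu + (x0 - mu) * np.exp(-theta * T)
var_T = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * T))
print("X_T (Euler) =", path[-1], "| exact law: N(", mean_T, ",", var_T, ")")
```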
Comte, Fabienne. « Causalite, cointegration, memoire longue : modelisation stochastique en temps continu, estimation et simulation ». Paris 1, 1994. http://www.theses.fr/1994PA010084.
Aupetit, Benjamin. « Calcul d'indicateurs de sûreté de fonctionnement de modèles AltaRica 3.0 par simulation stochastique ». Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASC004.
Safety assessment of a critical and complex system supports the choice of technical solutions. The modelling language chosen for those assessments must have sufficient expressive power to represent the different behaviours envisaged: AltaRica 3.0 is used here. But the computation of dependability indicators on complex models is then difficult: stochastic simulation is a solution, but it only allows values to be estimated. It is then necessary for the analyst to describe which indicators he wishes to estimate and what their relations with the model are: a set of measures, covering the conventional needs in dependability, is proposed. The estimation quality is related to the number of measurements, and therefore to the performance of the stochastic simulation software tool: improvements to the stochastic simulation of AltaRica 3.0 models have been implemented in the tool of the OpenAltaRica platform. The use of software computation tools in a certification context must be evaluated with respect to their quality: a methodology for evaluating the reliability of a stochastic simulator, not limited to AltaRica 3.0, is presented. Finally, a case study of a complex mechatronic system presents the possibilities of stochastic simulation and AltaRica 3.0 for a safety study of this class of systems.
Besbes, Mariem. « Modélisation et résolution du problème d’implantation des ateliers de production : proposition d’une approche combinée Algorithme Génétique – Algorithme A* ». Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC094.
To face competition, companies seek to improve their industrial performance. One of the solutions to this challenge lies in determining the best configuration of the production workshops. This type of problem is known as the Facility Layout Problem (FLP). In this context, our work proposes a methodology for the definition of the workshop configuration through a realistic approach. More precisely, our goal is to take into account the actual distances traveled by the parts in the workshop and system-related constraints that have not yet been incorporated into the models proposed in the literature. To do this, our first scientific contribution is to develop a new methodology that uses the A* algorithm to identify the shortest distances between workstations in a realistic way. The proposed methodology combines the Genetic Algorithm (GA) and the A* algorithm to explore solution spaces. To get closer to real cases, our second contribution is to present a new generalized formulation of the FLP initially studied, taking into account different shapes and dimensions of the equipment and the workshop. The results obtained prove the applicability and the feasibility of this approach in various situations. A comparative study of the proposed approach against particle swarm optimization integrated with A* showed the quality of the first approach in terms of transport cost. Finally, our third contribution is to treat the FLP in a 3D space where spatial constraints are integrated into the modeling phase. The resolution is an extension of the methodology proposed for the 2D problem, which therefore integrates the A* algorithm and the GA to generate various configurations in the 3D space. For each of these contributions, a sensitivity analysis of the different GA parameters used was made using Monte Carlo simulations.
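For reference, the shortest-path component can be illustrated with a small A* search on a grid with obstacles (a generic sketch; the grid, unit costs and Manhattan heuristic are illustrative, not the workshop model of the thesis):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; cells equal to 1 are obstacles.
    Heuristic: Manhattan distance (admissible for unit step costs)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g                            # length of a shortest path
        if g > best_g.get(node, float("inf")):
            continue                            # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                 # goal unreachable

workshop = [[0, 0, 0, 0],
            [1, 1, 1, 0],
            [0, 0, 0, 0],
            [0, 1, 1, 1],
            [0, 0, 0, 0]]
print("shortest distance:", a_star(workshop, (0, 0), (4, 3)))
```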
Moarefvand, Parviz. « Méthode des éléments distincts stochastique ». Vandoeuvre-les-Nancy, INPL, 1998. http://www.theses.fr/1998INPL047N.
Luu, Keurfon. « Optimisation numérique stochastique évolutionniste : application aux problèmes inverses de tomographie sismique ». Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM077/document.
Seismic traveltime tomography is an ill-posed optimization problem due to the non-linear relationship between traveltime and velocity model. Besides, the solution is not unique as many models are able to explain the observed data. The non-linearity and non-uniqueness issues are typically addressed by using methods relying on Markov chain Monte Carlo that thoroughly sample the model parameter space. However, these approaches cannot fully exploit the computer resources provided by modern supercomputers. In this thesis, I propose to solve seismic traveltime tomography problems using evolutionary algorithms, which are population-based stochastic optimization methods inspired by the natural evolution of species. They operate on concurrent individuals within a population that represent independent models, and evolve through stochastic processes characterizing the different mechanisms involved in natural evolution. Therefore, the models within a population can be intrinsically evaluated in parallel, which makes evolutionary algorithms particularly adapted to the parallel architecture of supercomputers. More specifically, the works presented in this manuscript emphasize the three most popular evolutionary algorithms, namely Differential Evolution, Particle Swarm Optimization and Covariance Matrix Adaptation - Evolution Strategy. The feasibility of evolutionary algorithms to solve seismic tomography problems is assessed using two different data sets: a real data set acquired in the context of hydraulic fracturing and a synthetic refraction data set generated using the Marmousi velocity model that presents a complex geological structure.
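A minimal sketch of one of the three strategies mentioned, Differential Evolution (the classical DE/rand/1/bin variant on a toy objective; the population size, F and CR below are illustrative, and the Rosenbrock misfit merely stands in for a traveltime misfit):

```python
import numpy as np

def differential_evolution(f, bounds, n_pop=30, n_gen=200, F=0.8, CR=0.9, rng=None):
    """DE/rand/1/bin: mutate with scaled differences of random individuals,
    binomial crossover, greedy selection."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = lo + (hi - lo) * rng.uniform(size=(n_pop, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(n_pop):
            a, b, c = rng.choice([j for j in range(n_pop) if j != i], 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = rng.uniform(size=dim) < CR
            cross[rng.integers(dim)] = True            # at least one mutated coordinate
            trial = np.where(cross, mutant, pop[i])
            c_trial = f(trial)
            if c_trial <= cost[i]:                     # greedy replacement
                pop[i], cost[i] = trial, c_trial
    best = cost.argmin()
    return pop[best], cost[best]

# Toy misfit: Rosenbrock function in 2D (minimum at (1, 1)).
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
x_best, f_best = differential_evolution(rosen, bounds=[(-2, 2), (-2, 2)], rng=0)
print("best point:", x_best, "misfit:", f_best)
```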
Seck, Babacar. « Optimisation stochastique sous contrainte de risque et fonctions d'utilité ». Phd thesis, Ecole des Ponts ParisTech, 2008. http://pastel.archives-ouvertes.fr/pastel-00004576.
Luu, Keurfon. « Optimisation numérique stochastique évolutionniste : application aux problèmes inverses de tomographie sismique ». Electronic Thesis or Diss., Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM077.
Seismic traveltime tomography is an ill-posed optimization problem due to the non-linear relationship between traveltime and velocity model. Besides, the solution is not unique as many models are able to explain the observed data. The non-linearity and non-uniqueness issues are typically addressed by using methods relying on Markov chain Monte Carlo that thoroughly sample the model parameter space. However, these approaches cannot fully exploit the computer resources provided by modern supercomputers. In this thesis, I propose to solve seismic traveltime tomography problems using evolutionary algorithms, which are population-based stochastic optimization methods inspired by the natural evolution of species. They operate on concurrent individuals within a population that represent independent models, and evolve through stochastic processes characterizing the different mechanisms involved in natural evolution. Therefore, the models within a population can be intrinsically evaluated in parallel, which makes evolutionary algorithms particularly adapted to the parallel architecture of supercomputers. More specifically, the works presented in this manuscript emphasize the three most popular evolutionary algorithms, namely Differential Evolution, Particle Swarm Optimization and Covariance Matrix Adaptation - Evolution Strategy. The feasibility of evolutionary algorithms to solve seismic tomography problems is assessed using two different data sets: a real data set acquired in the context of hydraulic fracturing and a synthetic refraction data set generated using the Marmousi velocity model that presents a complex geological structure.
Wahl, François. « Un environnement d'aide aux ingénieurs basé sur une architecture en tâches et sur un module de visualisation de courbes. Application à la conception de procédés de raffinage ». Phd thesis, Ecole Nationale des Ponts et Chaussées, 1994. http://tel.archives-ouvertes.fr/tel-00529958.
Bouzid, Makram. « Contribution à la modélisation de l'interaction agent / environnement : modélisation stochastique et simulation parallèle ». Nancy 1, 2001. http://www.theses.fr/2001NAN10271.
This thesis belongs at the same time to the multi-agent system (MAS) and parallelism domains, and more precisely to the parallel simulation of MAS. Two problems are tackled. The first concerns the modeling and simulation of situated agents, together with the unreliability of their sensors and effectors, in order to perform simulations whose results are more realistic. The second problem relates to the exploitation of the inherent parallelism of multi-agent systems, in order to obtain good parallel performance by reducing the execution time and/or processing problems of bigger sizes. Two models are proposed: a formal model of multi-agent systems, including a stochastic model of the agent/environment interaction, and a parallel simulation model for multi-agent systems based on the distribution of the conflicts occurring between the agents and on a dynamic load balancing mechanism between the processors. The satisfactory results we have obtained are presented.
Khuu, Minh-Thang. « Contribution à l'accélération de la simulation stochastique sur des modèles AltaRica Data Flow ». Aix-Marseille 2, 2008. http://theses.univ-amu.fr.lama.univ-amu.fr/2008AIX22021.pdf.
This thesis relates to the study of stochastic simulation acceleration applied to states/transitions models. In system dependability studies, stochastic simulation is practically the only accessible method for large states/transitions models. However, simulation processes are likely to be very long in order to obtain statistically stable results. To reduce simulation execution time, we examine the representation of the studied system by instructions of a programming language. The AltaRica Data Flow (ADF) language is the starting point of this thesis. This language generalizes the formalisms most used in system dependability studies. We implement a transformation of an ADF description into instructions of the C language, and an automated generation of a stochastic simulator for the studied system. The experiments carried out justify the use of the generated simulators compared with traditional simulators.
Akrour, Nawal. « Simulation stochastique des précipitations à fine échelle : application à l'observation en milieu urbain ». Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLV014/document.
Precipitation is highly variable across a wide range of both spatial and temporal scales. This variability is a major source of uncertainty for measurement and modeling, as well as for simulation and prediction. Moreover, rainfall is an extremely intermittent process with multiple scale-invariance regimes. The rain-field generator developed during the thesis is based on the fine-scale statistical modeling of rain by means of its heterogeneity and intermittency. The originality of the modeling partially rests on the analysis of fine-scale disdrometer data. This model differs from other existing models, whose resolution is roughly a minute or even an hour or a day. It provides simulations with realistic properties across a wide range of scales. This simulator produces time series with statistical characteristics almost identical to the observations, both at the 15 s resolution and, after degradation, at hourly or daily resolutions. The multi-scale properties of our simulator are obtained through a hybrid approach that relies on a fine-scale simulation of rain events using a multifractal generator, associated with a rain-support simulation based on a Poissonian-type hypothesis. A final re-normalization step of the rain rate is added in order to adapt the generator to the relevant climate area. The simulator allows the generation of 2D water sheets. The methodology developed in the first part is extended to the two-dimensional case. The multi-scale 2D stochastic simulator thus developed can reproduce geostatistical and topological characteristics at a spatial resolution of 1 × 1 km². This generator is used in the scope of the feasibility study of a new observation system for urban areas. The principle of this system is based on the opportunistic use of attenuation measurements provided by geostationary TV satellites whose radio waves lie in the 10.7 to 12.7 GHz band. More specifically, it is assumed that the SAT-TV reception terminals installed in private homes are able to measure such attenuations. At this stage of the study we do not have such observations. The study is therefore based on rainfall maps generated using the 2D generator in addition to a hypothetical sensor network. The considered observation system will allow precipitation fields (30 × 30 km²) to be estimated with a spatial resolution of 0.5 × 0.5 km².
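To give the flavour of the multifractal component (a minimal 1D discrete multiplicative cascade with mean-one lognormal weights; the depth and weight distribution are illustrative assumptions, not the generator calibrated in the thesis):

```python
import numpy as np

def multiplicative_cascade(depth, sigma=0.4, rng=None):
    """1D discrete multiplicative cascade: start from a uniform field and
    multiply each refined cell recursively by mean-one lognormal weights."""
    rng = np.random.default_rng(rng)
    field = np.ones(1)
    for _ in range(depth):
        w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=2 * field.size)
        field = np.repeat(field, 2) * w          # refine the grid and modulate it
    return field

intensity = multiplicative_cascade(depth=12, rng=6)   # 4096 fine-scale values
print("mean:", intensity.mean(), "max/mean:", intensity.max() / intensity.mean())
```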
Akrour, Nawal. « Simulation stochastique des précipitations à fine échelle : application à l'observation en milieu urbain ». Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLV014.
Precipitation is highly variable across a wide range of both spatial and temporal scales. This variability is a major source of uncertainty for measurement and modeling, as well as for simulation and prediction. Moreover, rainfall is an extremely intermittent process with multiple scale-invariance regimes. The rain-field generator developed during the thesis is based on the fine-scale statistical modeling of rain by means of its heterogeneity and intermittency. The originality of the modeling partially rests on the analysis of fine-scale disdrometer data. This model differs from other existing models, whose resolution is roughly a minute or even an hour or a day. It provides simulations with realistic properties across a wide range of scales. This simulator produces time series with statistical characteristics almost identical to the observations, both at the 15 s resolution and, after degradation, at hourly or daily resolutions. The multi-scale properties of our simulator are obtained through a hybrid approach that relies on a fine-scale simulation of rain events using a multifractal generator, associated with a rain-support simulation based on a Poissonian-type hypothesis. A final re-normalization step of the rain rate is added in order to adapt the generator to the relevant climate area. The simulator allows the generation of 2D water sheets. The methodology developed in the first part is extended to the two-dimensional case. The multi-scale 2D stochastic simulator thus developed can reproduce geostatistical and topological characteristics at a spatial resolution of 1 × 1 km². This generator is used in the scope of the feasibility study of a new observation system for urban areas. The principle of this system is based on the opportunistic use of attenuation measurements provided by geostationary TV satellites whose radio waves lie in the 10.7 to 12.7 GHz band. More specifically, it is assumed that the SAT-TV reception terminals installed in private homes are able to measure such attenuations. At this stage of the study we do not have such observations. The study is therefore based on rainfall maps generated using the 2D generator in addition to a hypothetical sensor network. The considered observation system will allow precipitation fields (30 × 30 km²) to be estimated with a spatial resolution of 0.5 × 0.5 km².