Dissertations / Theses on the topic 'Echantillonnage et estimation Monte Carlo'
Consult the top 50 dissertations / theses for your research on the topic 'Echantillonnage et estimation Monte Carlo.'
Argouarc'h, Elouan. "Contributions to posterior learning for likelihood-free Bayesian inference." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAS021.
Bayesian posterior inference is used in many scientific applications and is a prevalent methodology for decision-making under uncertainty. It enables practitioners to confront real-world observations with relevant observation models, and in turn, infer the distribution over an explanatory variable. In many fields and practical applications, we consider ever more intricate observation models for their otherwise scientific relevance, but at the cost of intractable probability density functions. As a result, both the likelihood and the posterior are unavailable, making posterior inference using the usual Monte Carlo methods infeasible. In this thesis, we suppose that the observation model provides a recorded dataset, and our aim is to bring together Bayesian inference and statistical learning methods to perform posterior inference in a likelihood-free setting. This problem, formulated as learning an approximation of a posterior distribution, includes the usual statistical learning tasks of regression and classification modeling, but it can also be an alternative to Approximate Bayesian Computation methods in the context of simulation-based inference, where the observation model is instead a simulation model with implicit density. The aim of this thesis is to propose methodological contributions for Bayesian posterior learning. More precisely, our main goal is to compare different learning methods under the scope of Monte Carlo sampling and uncertainty quantification. We first consider the posterior approximation based on the likelihood-to-evidence ratio, which has the main advantage that it turns a problem of conditional density learning into a problem of binary classification. In the context of Monte Carlo sampling, we propose a methodology for sampling from such a posterior approximation. We leverage the structure of the underlying model, which is conveniently compatible with the usual ratio-based sampling algorithms, to obtain straightforward, parameter-free, and density-free sampling procedures. We then turn to the problem of uncertainty quantification. On the one hand, normalized models such as the discriminative construction are easy to apply in the context of Bayesian uncertainty quantification. On the other hand, while unnormalized models, such as the likelihood-to-evidence ratio, are not easily applied in uncertainty-aware learning tasks, a specific unnormalized construction, which we refer to as generative, is indeed compatible with Bayesian uncertainty quantification via the posterior predictive distribution. In this context, we explain how to carry out uncertainty quantification in both modeling techniques, and we then propose a comparison of the two constructions under the scope of Bayesian learning. We finally turn to the problem of parametric modeling with tractable density, which is indeed a requirement for epistemic uncertainty quantification in generative and discriminative modeling methods. We propose a new construction of a parametric model, which is an extension of both mixture models and normalizing flows. This model can be applied to many different types of statistical problems, such as variational inference, density estimation, and conditional density estimation, as it benefits from rapid and exact density evaluation, a straightforward sampling scheme, and a gradient reparameterization approach.
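The likelihood-to-evidence ratio construction mentioned in this abstract reduces posterior learning to binary classification: a classifier trained to separate joint pairs (θ, x) from pairs with shuffled x approximates d = p(θ,x)/(p(θ,x)+p(θ)p(x)), so d/(1−d) estimates p(x|θ)/p(x). A minimal sketch, assuming NumPy and scikit-learn; the Gaussian simulator and the quadratic logistic classifier are illustrative choices, not the models of the thesis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

def simulate(theta):
    # Illustrative implicit model: x | theta ~ N(theta, 1).
    return theta + rng.normal(size=theta.shape)

n = 20000
theta = rng.normal(size=n)            # draws from the prior
x_joint = simulate(theta)             # pairs (theta, x) ~ p(theta) p(x | theta)
x_marg = rng.permutation(x_joint)     # shuffling breaks the pairing: p(theta) p(x)

# Label joint pairs 1, marginal pairs 0, and fit a probabilistic classifier.
Z = np.column_stack([np.concatenate([theta, theta]),
                     np.concatenate([x_joint, x_marg])])
y = np.concatenate([np.ones(n), np.zeros(n)])
clf = make_pipeline(PolynomialFeatures(2), LogisticRegression()).fit(Z, y)

def ratio(theta_grid, x_obs):
    # d / (1 - d) estimates p(x | theta) / p(x).
    pairs = np.column_stack([theta_grid, np.full_like(theta_grid, x_obs)])
    d = clf.predict_proba(pairs)[:, 1]
    return d / (1.0 - d)
```

Multiplying the prior density by `ratio(theta_grid, x_obs)` then yields an unnormalized posterior over the grid, which ratio-based samplers can exploit.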
Gajda, Dorota. "Optimisation des méthodes algorithmiques en inférence bayésienne. Modélisation dynamique de la transmission d'une infection au sein d'une population hétérogène." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00659618.
Carpentier, Alexandra. "De l'échantillonnage optimal en grande et petite dimension." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00844361.
Ounaissi, Daoud. "Méthodes quasi-Monte Carlo et Monte Carlo : application aux calculs des estimateurs Lasso et Lasso bayésien." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10043/document.
The thesis contains six chapters. The first chapter is an introduction to linear regression and to the Lasso and Bayesian Lasso problems. Chapter 2 recalls convex optimization algorithms and presents the FISTA algorithm for computing the Lasso estimator; the convergence properties of this algorithm are also given, using the entropy estimator and the Pitman-Yor estimator. Chapter 3 is devoted to the comparison of Monte Carlo and quasi-Monte Carlo methods in the numerical computation of the Bayesian Lasso. This comparison shows that the Hammersley points give the best results. Chapter 4 gives a geometric interpretation of the partition function of the Bayesian Lasso, expressed in terms of the incomplete Gamma function. This allowed us to give a convergence criterion for the Metropolis-Hastings algorithm. Chapter 5 presents the Bayesian estimator as the limit law of a multivariate stochastic differential equation. This allowed us to compute the Bayesian Lasso using semi-implicit and explicit Euler numerical schemes together with Monte Carlo, multilevel Monte Carlo (MLMC) and Metropolis-Hastings methods. Comparing the computational costs shows that the pair (semi-implicit Euler scheme, MLMC) outperforms the other (scheme, method) pairs. Finally, in Chapter 6, we establish the rate of convergence of the Bayesian Lasso to the Lasso when the signal-to-noise ratio is constant and the noise tends to 0. This allowed us to provide a new criterion for the convergence of the Metropolis-Hastings algorithm.
Gilquin, Laurent. "Échantillonnages Monte Carlo et quasi-Monte Carlo pour l'estimation des indices de Sobol' : application à un modèle transport-urbanisme." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM042/document.
Land Use and Transportation Integrated (LUTI) models have become a norm for representing the interactions between land use and the transportation of goods and people in a territory. These models are mainly used to evaluate alternative planning scenarios, simulating their impact on land cover and travel demand. LUTI models and other mathematical models used in various fields are most of the time based on complex computer codes. These codes often involve poorly known inputs whose uncertainty can have significant effects on the model outputs. Global sensitivity analysis methods are useful tools to study the influence of the model inputs on its outputs. Among the large number of available approaches, the variance-based method introduced by Sobol' allows the calculation of sensitivity indices called Sobol' indices. These indices quantify the influence of each model input on the outputs and can detect existing interactions between inputs. In this framework, we favor a particular method based on replicated designs of experiments called the replication method. This method appears to be the most suitable for our application and is advantageous as it requires a relatively small number of model evaluations to estimate first-order or second-order Sobol' indices. This thesis focuses on extensions of the replication method to face constraints arising in our application on the LUTI model Tranus, such as the presence of dependency among the model inputs, as well as multivariate outputs. Aside from that, we propose a recursive approach to sequentially estimate Sobol' indices. The recursive approach is based on the iterative construction of stratified designs, Latin hypercubes and orthogonal arrays, and on the definition of a new stopping criterion. With this approach, more accurate Sobol' estimates are obtained while recycling previous sets of model evaluations. We also propose to combine such an approach with quasi-Monte Carlo sampling. An application of our contributions on the LUTI model Tranus is presented.
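The replication method of the thesis estimates Sobol' indices from replicated designs; the closely related pick-freeze estimator below conveys the idea of first-order index estimation on a standard test function. A minimal sketch, assuming NumPy; the Ishigami function and sample size are illustrative stand-ins for a costly simulator:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(X):
    # Ishigami test function, standing in for an expensive model like Tranus.
    a, b = 7.0, 0.1
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2
            + b * X[:, 2]**4 * np.sin(X[:, 0]))

n, d = 10000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA = model(A)

# Pick-freeze: replicate the design, freezing one input column at a time.
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]          # column i taken from A, the rest resampled
    yABi = model(ABi)
    Si = np.mean(yA * (yABi - np.mean(yA))) / np.var(yA)  # first-order index
    print(f"S_{i+1} ~ {Si:.3f}")
```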
Lamberti, Roland. "Contributions aux méthodes de Monte Carlo et leur application au filtrage statistique." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLL007/document.
This thesis deals with integration calculus in the context of Bayesian inference and Bayesian statistical filtering. More precisely, we focus on Monte Carlo integration methods. We first revisit the importance sampling with resampling mechanism, then its extension to the dynamic setting known as particle filtering, and finally conclude our work with a multi-target tracking application. Firstly, we consider the problem of estimating some moment of a probability density, known up to a constant, via Monte Carlo methodology. We start by proposing a new estimator affiliated with the normalized importance sampling estimator but using two proposal densities rather than a single one. We then revisit the importance sampling with resampling mechanism as a whole in order to produce Monte Carlo samples that are independent, contrary to the classical mechanism, which enables us to develop two new estimators. Secondly, we consider the dynamic aspect in the framework of sequential Bayesian inference. We thus adapt to this framework our new independent resampling technique, previously developed in a static setting. This yields the particle filtering with independent resampling mechanism, which we reinterpret as a special case of auxiliary particle filtering. Because of the increased cost required by this technique, we next propose a semi-independent resampling procedure which enables us to control this additional cost. Lastly, we consider an application of multi-target tracking within a sensor network using a new Bayesian model, and empirically analyze the results from our new particle filtering algorithm as well as from a sequential Markov chain Monte Carlo algorithm.
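For the static problem described here (estimating a moment of a density known up to a constant), the classical baseline the thesis revisits is self-normalized importance sampling followed by multinomial resampling. A minimal sketch, assuming NumPy; the bimodal target and Gaussian proposal are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def target_unnorm(x):
    # Target density known only up to a constant (illustrative choice).
    return np.exp(-0.5 * (x - 1.0)**2) + 0.5 * np.exp(-0.5 * (x + 2.0)**2)

n = 100000
x = rng.normal(0.0, 3.0, n)                            # proposal N(0, 9)
q = np.exp(-0.5 * (x / 3.0)**2) / (3.0 * np.sqrt(2 * np.pi))
w = target_unnorm(x) / q                               # unnormalized weights
W = w / w.sum()                                        # self-normalization

mean_est = np.sum(W * x)                               # SNIS estimate of E[X]

# Classical multinomial resampling reuses points, so the resampled values
# are not independent draws -- exactly the issue the thesis's independent
# resampling mechanism addresses.
resampled = x[rng.choice(n, size=n, p=W)]
```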
Chesneau, Héléna. "Estimation personnalisée de la dose délivrée au patient par l’imagerie embarquée kV-CBCT et réflexions autour de la prise en charge clinique." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS059.
Protocols for cancer treatment using intensity-modulated radiation therapy (IMRT) make it possible to target the tumor with increased precision. They require accurate anatomical information about the patient just before the treatment, which can be obtained using on-board imaging systems mounted on the medical linear accelerator delivering the treatment beam. These systems, composed of an X-ray tube and a 2D planar detector, are called kV Cone Beam CT (kV-CBCT). Nowadays, they are widely used in the context of IMRT treatments. However, these kV-CBCT examinations are also responsible for an additional dose of ionizing radiation which is far from negligible and could cause secondary effects, such as radiation-induced second cancers, in treated patients. During this PhD work, a simulator based on the Monte Carlo method was developed in order to accurately calculate the doses delivered to organs during kV-CBCT examinations. This tool was then used to study several strategies for taking the additional imaging doses into account in a clinical environment. The study reported here includes, in particular, a fast and personalized method to estimate the doses delivered to organs. This strategy was developed using a cohort of 50 patients, including 40 children and 10 adults. This work has been done in collaboration with the medical physics unit of the Eugène Marquis medical center in Rennes, which collected the clinical data used for this study.
Ichir, Mahieddine Mehdi. "Estimation bayésienne et approche multi-résolution en séparation de sources." Paris 11, 2005. http://www.theses.fr/2005PA112370.
Full textLamberti, Roland. "Contributions aux méthodes de Monte Carlo et leur application au filtrage statistique." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLL007.
Full textThis thesis deals with integration calculus in the context of Bayesian inference and Bayesian statistical filtering. More precisely, we focus on Monte Carlo integration methods. We first revisit the importance sampling with resampling mechanism, then its extension to the dynamic setting known as particle filtering, and finally conclude our work with a multi-target tracking application. Firstly, we consider the problem of estimating some moment of a probability density, known up to a constant, via Monte Carlo methodology. We start by proposing a new estimator affiliated with the normalized importance sampling estimator but using two proposition densities rather than a single one. We then revisit the importance sampling with resampling mechanism as a whole in order to produce Monte Carlo samples that are independent, contrary to the classical mechanism, which enables us to develop two new estimators. Secondly, we consider the dynamic aspect in the framework of sequential Bayesian inference. We thus adapt to this framework our new independent resampling technique, previously developed in a static setting. This yields the particle filtering with independent resampling mechanism, which we reinterpret as a special case of auxiliary particle filtering. Because of the increased cost required by this technique, we next propose a semi independent resampling procedure which enables to control this additional cost. Lastly, we consider an application of multi-target tracking within a sensor network using a new Bayesian model, and empirically analyze the results from our new particle filtering algorithm as well as a sequential Markov Chain Monte Carlo algorithm
Desrumaux, Pierre-François. "Méthodes statistiques pour l’estimation du rendement paramétrique des circuits intégrés analogiques et RF." Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20126/document.
Semiconductor device fabrication is a complex process subject to various sources of variability. These variations can impact the functionality and performance of analog integrated circuits, which leads to yield loss, potential chip modifications, delayed time to market and reduced profit. Statistical circuit simulation methods make it possible to estimate the parametric yield of the circuit early in the design stage so that corrections can be made before manufacturing. However, traditional methods such as the Monte Carlo method and corner simulation have limitations. Therefore, an accurate analog yield estimate based on a small number of circuit simulations is needed. In this thesis, existing statistical methods from electronics and non-electronics publications are first described. However, these methods suffer from severe drawbacks, such as the need for initial time-consuming circuit simulations or poor scaling with the number of random variables. Second, three novel statistical methods are proposed to accurately estimate the parametric yield of analog/RF integrated circuits based on a moderate number of circuit simulations: an automatically sorted quasi-Monte Carlo method, a kernel-based control variates method and an importance sampling method. The three methods rely on a mathematical model of the circuit performance metric which is constructed from a truncated first-order Taylor expansion. This modeling technique is selected as it requires a minimal number of SPICE-like circuit simulations. Both theoretical and simulation results show that the proposed methods lead to significant speedup or improvement in accuracy compared to other existing methods.
Kakakhail, Syed Shahkar. "Prédiction et estimation de très faibles taux d'erreurs pour les chaînes de communication codées." Cergy-Pontoise, 2010. http://www.theses.fr/2010CERG0437.
The time taken by standard Monte Carlo (MC) simulation to calculate the Frame Error Rate (FER) increases exponentially with the Signal-to-Noise Ratio (SNR). Importance Sampling (IS) is one of the most successful techniques used to reduce the simulation time. In this thesis, we investigate an advanced version of IS, called the Adaptive Importance Sampling (AIS) algorithm, to efficiently evaluate the performance of Forward Error Correcting (FEC) codes at very low error rates. First we present the inspirations and motivations behind this work by analyzing different approaches currently in use, putting an emphasis on methods inspired by statistical physics. Then, based on this qualitative analysis, we present an optimized method, the Fast Flat Histogram (FFH) method, for the performance evaluation of FEC codes, which is generic in nature. The FFH method employs the Wang-Landau algorithm and is based on Markov Chain Monte Carlo (MCMC). It operates in an AIS framework and gives a good simulation gain. Sufficient statistical accuracy is ensured through different parameters. Extension to other types of error correcting codes is straightforward. We present results for LDPC codes and turbo codes of different lengths and rates, showing that the FFH method is generic and applicable to different families of FEC codes having any length, rate and structure. Moreover, we show that the FFH method is a powerful tool to tease out pseudo-codewords in the high-SNR region, using Belief Propagation as the decoding algorithm for the LDPC codes.
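The core idea behind such estimators, biasing the simulation toward rare error events and reweighting by the likelihood ratio, can be illustrated on a scalar toy problem. A minimal sketch, assuming NumPy; the Gaussian threshold-crossing example stands in for a coded system and is not the FFH method itself:

```python
import numpy as np

rng = np.random.default_rng(8)

# Importance sampling for a rare error probability P(N > c), N ~ N(0,1):
# sample from a density shifted to the threshold and reweight.
c, n = 5.0, 100000
z = rng.normal(loc=c, scale=1.0, size=n)     # proposal centered at c
w = np.exp(-c * z + 0.5 * c**2)              # likelihood ratio N(0,1)/N(c,1)
p_hat = np.mean((z > c) * w)
print(p_hat)   # ~2.87e-7, unreachable by crude MC at this sample size
```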
Philippe, Anne. "Contribution à la théorie des lois de référence et aux méthodes de Monte Carlo." Rouen, 1997. http://www.theses.fr/1997ROUES005.
Sadiki, Wafaa. "Estimation et validation a posteriori des statistiques d'erreur pour une assimilation à aire limitée." Toulouse 3, 2005. http://www.theses.fr/2005TOU30019.
Data assimilation methods combine a background state of the atmosphere with observations. The formulation of any assimilation system requires knowledge of the weights attributed to each source of information. The system of interest is the limited-area 3D-Var analysis of ALADIN. The aim is, on the one hand, to study the properties of background error covariances in a limited-area model and, on the other hand, to apply a posteriori diagnostics in a real-data observation environment, in order to calibrate the background and observational error standard deviations. Firstly, we show that, for the large scales, the background errors are controlled by the ARPÈGE global model. Secondly, through a posteriori validation, we have found an underestimation of the background error variance and an overestimation of the observational error variance. Moreover, we have adapted these diagnostics to the case of a limited number of observations using ergodic properties of the signals.
Ben Aïssa, Anis. "Estimation et prévision des temps de parcours sur autoroutes : exploitation des transactions de péages, méthode des stocks corrigée, filtrage particulaire." Lyon 1, 2007. http://www.theses.fr/2007LYO10048.
Travel time is a key indicator for characterizing traffic conditions on a road network, especially with regard to quality of service, traffic management and user information. Its estimation and prediction, using all available data, raise theoretical, technical and methodological issues. To address these issues efficiently, this thesis proposes travel time estimation and prediction methodologies on motorways for an operational context. More precisely, three strategies have been studied. The first is based on the off-line and on-line processing of toll collection data. The second corrects the principal drawbacks of the classically used queuing method. Finally, the third is based on macroscopic traffic modelling coupled with particle filter techniques.
Bréhard, Thomas. "Estimation séquentielle et analyse de performances pour un problème de filtrage non linéaire partiellement observé : application à la trajectographie par mesure d'angles." Rennes 1, 2005. http://www.theses.fr/2005REN1S118.
Full textCastillo-Effen, Mauricio. "Cooperative localization in wireless networked systems." [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002220.
Rollet, Yannis. "Vers une maîtrise des incertitudes en calcul des structures composites." Palaiseau, Ecole polytechnique, 2007. http://www.theses.fr/2007EPXX0045.
Safety requirements in the aeronautics field require taking into account the various uncertainties affecting structures, in particular material variability. Despite their expansion, numerical simulations treat this topic most of the time in a simplified way, for example by applying penalties to the material properties used in the calculations. But the increasing use of composite materials, intrinsically more sensitive to uncertainties, calls for sharper methods that can ensure better reliability of the design. Thus, a new Variability Analysis approach has been developed to deal with the constraints of numerical simulation, such as independence from the computational code and a limited number of calculations. The choice was made to build the approach on response surface techniques. The approach is progressive, for a better use of the various types of approximation selected (polynomials, polynomial chaos, kriging). Cross-validation techniques (leave-k-out, bootstrap) have been used to estimate the quality of the approximation, so that it is possible to display an assessment of the effects of uncertainties (with error bars) and also to evaluate the confidence in this assessment. Mathematical and analytical mechanical examples have allowed the validation of the approach, which shows good agreement with Monte Carlo simulations at a lower computational cost. The uncertain parameters considered concern the geometry and the material properties as well as composite-specific parameters (orientation and thickness of plies). Applied to various examples of finite element calculations, the approach has shown good performance at a reasonable computing cost. Finally, the question of reducing the effects of uncertainties has been considered. Solutions such as reducing the uncertainties on the input parameters or improving the models were investigated. In the end, a new database improvement method, using correlations between parameters at the various scales of composite materials, has been proposed.
Bidon, Stéphanie. "Estimation et détection en milieu non-homogène : application au traitement spatio-temporel adaptatif." Phd thesis, Toulouse, INPT, 2008. https://hal.science/tel-04426860.
Space-time adaptive processing is required in future airborne radar systems to improve the detection of targets embedded in clutter. The performance of detectors based on the assumption of a homogeneous environment can be severely degraded in practical applications. Indeed, real-world clutter features can vary significantly in both angle and range. So far, different strategies have been proposed to overcome the deleterious effect of heterogeneity. This dissertation proposes to study two of these strategies. More precisely, a new data model is introduced in a Bayesian framework: it allows both an original heterogeneity relation and a priori knowledge to be incorporated. New estimation and detection schemes are derived according to the model; their performance is also studied through numerical simulations. Results show that the proposed model and algorithms make it possible to incorporate a priori information in an appropriate way into the detection scheme.
Bidon, Stéphanie. "Estimation et détection en milieu non-homogène : application au traitement spatio-temporel adaptatif." Phd thesis, Toulouse, INPT, 2008. http://oatao.univ-toulouse.fr/7737/1/bidon.pdf.
Full textNguyen, Thi Ngoc Minh. "Lissage de modèles linéaires et gaussiens à régimes markoviens. : Applications à la modélisation de marchés de matières premières." Electronic Thesis or Diss., Paris, ENST, 2016. https://pastel.hal.science/tel-03689917.
The work presented in this thesis focuses on Sequential Monte Carlo methods for general state space models. These procedures are used to approximate any sequence of conditional distributions of some hidden state variables given a set of observations. We are particularly interested in two-filter based methods to estimate the marginal smoothing distribution of a state variable given past and future observations. We first prove convergence results for the estimators produced by all two-filter based Sequential Monte Carlo methods under weak assumptions on the hidden Markov model. Under additional strong mixing assumptions, which are more restrictive but still standard in this context, we show that the constants of the deviation inequalities and the asymptotic variances are uniformly bounded in time. Then, a conditionally linear and Gaussian hidden Markov model is introduced to explain commodity market regime shifts. The markets are modeled by extending the Gibson-Schwartz model on the spot price and the convenience yield. It is assumed that the dynamics of these variables are controlled by a discrete hidden Markov chain identifying the regimes. Each regime corresponds to a set of parameters driving the state space model dynamics. We propose a Monte Carlo Expectation Maximization algorithm to estimate the parameters of the model, based on a two-filter method to approximate the intermediate quantity. This algorithm uses explicit marginalization (Rao-Blackwellization) of the linear states to reduce the Monte Carlo variance. The algorithm's performance is illustrated using Chicago Mercantile Exchange (CME) crude oil data.
Barembruch, Steffen. "Méthodes approchées de maximum de vraisemblance pour la classification et identification aveugles en communications numériques." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00574365.
Elie, Romuald. "Contrôle stochastique et méthodes numériques en finance mathématique." Phd thesis, Paris 9, 2006. http://tel.archives-ouvertes.fr/tel-00122883.
Full textNous présentons dans la première partie une méthode non-paramétrique d'estimation des sensibilités des prix d'options. A l'aide d'une perturbation aléatoire du paramètre d'intérêt, nous représentons ces sensibilités sous forme d'espérance conditionnelle, que nous estimons à l'aide de simulations Monte Carlo et de régression par noyaux. Par des arguments d'intégration par parties, nous proposons plusieurs estimateurs à noyaux de ces sensibilités, qui ne nécessitent pas la connaissance de la densité du sous-jacent, et nous obtenons leurs propriétés asymptotiques. Lorsque la fonction payoff est irrégulière, ils convergent plus vite que les estimateurs par différences finies, ce que l'on vérifie numériquement.
La deuxième partie s'intéresse à la résolution numérique de systèmes découplés d'équations différentielles stochastiques progressives rétrogrades. Pour des coefficients Lipschitz, nous proposons un schéma de discrétisation qui converge plus vite que $n^{-1/2+e}$, pour tout $e>0$, lorsque le pas de temps $1/n$ tends vers $0$, et sous des hypothèses plus fortes de régularité, le schéma atteint la vitesse de convergence paramétrique. L'erreur statistique de l'algorithme dûe a l'approximation non-paramétrique d'espérances conditionnelles est également controlée et nous présentons des exemples de résolution numérique de systèmes couplés d'EDP semi-linéaires.
Enfin, la dernière partie de cette thèse étudie le comportement d'un gestionnaire de fond, maximisant l'utilité intertemporelle de sa consommation, sous la contrainte que la valeur de son portefeuille ne descende pas en dessous d'une fraction fixée de son maximum courant. Nous considérons une classe générale de fonctions d'utilité, et un marché financier composé d'un actif risqué de dynamique black-Scholes. Lorsque le gestionnaire se fixe un horizon de temps infini, nous obtenons sous forme explicite sa stratégie optimale d'investissement et de consommation, ainsi que la fonction valeur du problème. En horizon fini, nous caractérisons la fonction valeur comme unique solution de viscosité de l'équation d'Hamilton-Jacobi-Bellman correspondante.
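Two textbook Monte Carlo sensitivity estimators give a feel for the problem treated in the first part. A minimal sketch, assuming NumPy; the Black-Scholes call and the finite-difference and likelihood-ratio estimators are illustrative baselines, not the kernel estimators of the thesis:

```python
import numpy as np

rng = np.random.default_rng(10)

# Monte Carlo delta of a European call under Black-Scholes.
S0, K, r, sig, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 10**6
Z = rng.normal(size=n)                       # common random numbers
ST = lambda s0: s0 * np.exp((r - 0.5 * sig**2) * T + sig * np.sqrt(T) * Z)
disc = np.exp(-r * T)
payoff = lambda s: disc * np.maximum(s - K, 0.0)

h = 0.01
delta_fd = (payoff(ST(S0 + h)) - payoff(ST(S0 - h))).mean() / (2 * h)
delta_lr = (payoff(ST(S0)) * Z / (S0 * sig * np.sqrt(T))).mean()  # likelihood ratio
print(delta_fd, delta_lr)   # both ~0.637, the Black-Scholes delta
```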
Lu, Zhiping. "Analyse des processus longue mémoire stationnaires et non-stationnaires : estimations, applications et prévisions." Phd thesis, Cachan, Ecole normale supérieure, 2009. https://theses.hal.science/tel-00422376/fr/.
In this thesis, we consider two classes of long memory processes: stationary long memory processes and non-stationary long memory processes. We study their probabilistic properties, estimation methods, forecasting methods and statistical tests. Stationary long memory processes have been extensively studied over the past decades. It has been shown that some long memory processes have self-similarity properties, which are important for parameter estimation. We review the self-similarity properties of continuous-time and discrete-time long memory processes. We establish that a stationary long memory process is asymptotically second-order self-similar, while a stationary short memory process is not. We then extend these results to specific long memory processes such as k-factor GARMA processes and k-factor GIGARCH processes. We also investigate the self-similarity properties of some heteroscedastic models and of processes with switches and jumps. We review parameter estimation methods for stationary long memory processes, including parametric methods (for example, maximum likelihood estimation and approximate maximum likelihood estimation) and semiparametric methods (for example, the GPH, Whittle and Robinson methods). The consistency and asymptotic normality of the estimators are also investigated. Testing the fractionally integrated order of seasonal and non-seasonal unit roots of a stochastic stationary long memory process is quite important for economic and financial time series modeling. The widely used Robinson test (1994) is applied to various well-known long memory models. Via Monte Carlo experiments, we study and compare the performance of this test for several sample sizes, which provides a good reference for practitioners who want to apply Robinson's test. In practice, seasonality and time-varying long-range dependence can often be observed, so some kind of non-stationarity exists in economic and financial data sets. To take this kind of phenomenon into account, we review existing non-stationary processes and propose a new class of non-stationary stochastic processes: the locally stationary k-factor Gegenbauer process. We describe a procedure for consistently estimating the time-varying parameters with the help of the discrete wavelet packet transform (DWPT). The consistency and asymptotic normality of the estimates are proved. The robustness of the algorithm is investigated through a simulation study. We also propose a forecasting method for this new class of non-stationary long memory processes. Applications and forecasts based on the error correction term in the error correction model of the Nikkei Stock Average 225 (NSA 225) index and of the West Texas Intermediate (WTI) crude oil price conclude the work.
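Among the semiparametric estimators reviewed here, the GPH estimator is a simple log-periodogram regression for the long-memory parameter d. A minimal sketch, assuming NumPy; the bandwidth choice m = √n is a common illustrative convention:

```python
import numpy as np

rng = np.random.default_rng(3)

def gph(x, m=None):
    # GPH log-periodogram regression for the long-memory parameter d.
    n = len(x)
    if m is None:
        m = int(n**0.5)                          # common bandwidth choice
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n  # Fourier frequencies
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    I = (np.abs(dft)**2) / (2.0 * np.pi * n)     # periodogram
    X = np.column_stack([np.ones(m), -np.log(4.0 * np.sin(lam / 2.0)**2)])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return beta[1]                               # slope = estimate of d

# Sanity check on white noise (true d = 0):
print(gph(rng.normal(size=4096)))
```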
Giremus, Audrey. "Apport des techniques de filtrage particulaire pour la navigation avec les systèmes de navigation inertiels et le GPS." Toulouse, ENSAE, 2005. http://www.theses.fr/2005ESAE0026.
Lucas, Jean-Paul. "Contamination des logements par le plomb : prévalence des logements à risque et identification des déterminants de la contamination." Nantes, 2013. http://archive.bu.univ-nantes.fr/pollux/show.action?id=ee1e7f1b-e1e9-455c-afa8-ea7f68143c8e.
Residential lead levels were estimated for the first time in mainland France. For this, tools from the theory of survey sampling were applied to the data of the Plomb-Habitat survey (2008-2009). A sample of 484 dwellings was drawn to study the population (N = 3,581,991) of main residences (as opposed to second homes) where at least one child aged 6 months to 6 years was present. Approximately 2.9% of housing units have a lead concentration in tap water higher than or equal to the regulatory threshold (RT) of 10 µg/L; in approximately 0.21% of dwellings and in 4.1% of common areas, the American RT of 430 µg/m² (40 µg/ft²) was exceeded for interior floor dust lead; 1.4% of exterior play area soils exceed the American RT of 300 mg/kg of lead; 24.5% of housing units still have lead-based paint. Lead in floor dust was identified as the main predictor of blood lead level in children. A two-level multilevel model was fitted to explain the floor dust lead loadings of the 1834 rooms (level-1 units) investigated in the homes (level-2 units). No weights were used in the estimation method (pseudolikelihood) employed for this kind of modeling on survey data. Dust from the landing of an apartment is the main contributor to the lead contamination of indoor dust. A simulation study was carried out on our data to compare different weights for the level-2 units of a multilevel model. Its results enabled us to confirm the fit of an unweighted model to explain the dust lead loadings. Until now, only level-1 weights had been studied in the literature for this kind of model.
Cisse, Papa Ousmane. "Étude de modèles spatiaux et spatio-temporels." Thesis, Paris 1, 2018. http://www.theses.fr/2018PA01E060/document.
This thesis focuses on time series that, in addition to being observed over time, also have a spatial component. By definition, a spatiotemporal phenomenon is a phenomenon which involves change in both space and time. Spatiotemporal modelling therefore aims to construct representations of systems taking into account their spatial and temporal dimensions. It has applications in many fields, such as meteorology, oceanography, agronomy, geology, epidemiology, image processing and econometrics. It makes it possible to address the important issue of predicting the value of a random field at a given location in a region. The value to predict is assumed to depend on observations in neighbouring regions. This shows the need to consider, in addition to their statistical characteristics, the spatial dependence relations between neighbouring locations, to account for all the structures inherent in the data. In the exploration of spatiotemporal data, the refinement of time series models consists in explicitly incorporating the systematic dependencies between observations in a given region, as well as the dependencies of a region on neighbouring regions. In this context, the class of spatial models called Space-Time Autoregressive (STAR) models was introduced in the early 1970s. It was later generalized as the GSTAR model (Generalized Space-Time Autoregressive model). In most fields of application, one is often confronted with the fact that one of the major sources of fluctuation is seasonality. In our work we are particularly interested in the phenomenon of seasonality in spatiotemporal data. We develop a new class of models and investigate its properties and estimation methods. Building a mathematical model that takes into account the spatial interaction of the different points or locations of an entire area would be a significant contribution. Indeed, a statistical treatment that takes this aspect into account and integrates it appropriately can avoid loss of information, prediction errors, and non-convergent or inefficient estimates.
Lu, Zhiping. "Analyse des processus longue mémoire stationnaires et non-stationnaires : estimations, applications et prévisions." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2009. http://tel.archives-ouvertes.fr/tel-00422376.
Thajeel, Jawad. "Kriging-based Approaches for the Probabilistic Analysis of Strip Footings Resting on Spatially Varying Soils." Thesis, Nantes, 2017. http://www.theses.fr/2017NANT4111/document.
The probabilistic analysis of geotechnical structures involving spatially varying soil properties is generally performed using the Monte Carlo Simulation methodology. This method is not suitable for the computation of the small failure probabilities encountered in practice, because it becomes very time-consuming in such cases due to the large number of simulations required to calculate accurate values of the failure probability. Three probabilistic approaches (named AK-MCS, AK-IS and AK-SS) based on active learning and combining kriging with one of three simulation techniques (Monte Carlo Simulation MCS, Importance Sampling IS or Subset Simulation SS) were developed. Within AK-MCS, a Monte Carlo simulation is performed without evaluating the whole population. Indeed, the population is predicted using a kriging meta-model defined using only a few points of the population, thus significantly reducing the computation time with respect to crude MCS. In AK-IS, the more efficient sampling technique IS is used instead of MCS. In this approach, the small failure probability is estimated with a similar accuracy to AK-MCS, but using a much smaller initial population, thus significantly reducing the computation time. Finally, in AK-SS, the more efficient sampling technique SS is proposed. This technique avoids the search for design points and can thus deal with limit state surfaces of arbitrary shape. All three methods were applied to the case of a vertically loaded strip footing resting on a spatially varying soil. The results obtained are presented and discussed.
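A minimal AK-MCS loop looks as follows, assuming scikit-learn's Gaussian process as the kriging surrogate; the linear performance function, the initial design size and the U ≥ 2 stopping rule follow common practice and are illustrative assumptions, not the footing model of the thesis:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def g(x):
    # Illustrative performance function; failure when g(x) < 0.
    return 3.0 - x[:, 0] - x[:, 1]

N = 20000
pop = rng.normal(size=(N, 2))               # Monte Carlo population
doe = pop[:12].copy()                       # small initial design
y = g(doe)

for _ in range(50):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(doe, y)
    mu, sd = gp.predict(pop, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)  # learning function
    if U.min() >= 2.0:                      # usual stopping criterion
        break
    k = int(np.argmin(U))                   # most ambiguous point
    doe = np.vstack([doe, pop[k]])
    y = np.append(y, g(pop[k:k + 1]))

pf = np.mean(mu < 0.0)                      # failure probability from the surrogate
print(pf)
```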
Vathanakhool, Khoollapath. "Estimation de la sécurité des poteaux en béton armé compte tenu des variations aléatoires de leurs caractéristiques géométriques et mécaniques le long de leur ligne moyenne." Toulouse, INSA, 1987. http://www.theses.fr/1987ISAT0015.
Full textPuengnim, Anchalee. "Classification de modulations linéaires et non-linéaires à l'aide de méthodes bayésiennes." Toulouse, INPT, 2008. http://ethesis.inp-toulouse.fr/archive/00000676/.
This thesis studies the classification of digital linear and nonlinear modulations using Bayesian methods. Modulation recognition consists of identifying, at the receiver, the type of modulation used by the transmitter. It is important in many communication scenarios, for example to secure transmissions by detecting unauthorized users, or to determine which transmitter interferes with the others. The received signal is generally affected by a number of impairments. We propose several classification methods that can mitigate the effects of imperfections in the transmission channels. More specifically, we study three techniques to estimate the posterior probabilities of the received signals conditionally on each modulation.
Le Son, Khanh. "Modélisation probabiliste du pronostic : application à un cas d'étude et à la prise de décision en maintenance." Troyes, 2012. http://www.theses.fr/2012TROY0035.
Remaining useful life (RUL) estimation is a major scientific challenge and a central topic for the scientific community interested in prognosis problems. The use of the tools and methods gathered under the term prognostics is widely developed in many domains, such as the aerospace industry, electronics and medicine. The common underlying problem is the implementation of models which can take into account, on-line, the data histories of the system and its environment, the diagnosis of its current state and possibly the future operational conditions, in order to predict the residual lifetime. In this context, the main focus of our work is the use of probabilistic approaches (non-stationary stochastic processes) to construct innovative prognostic models from a degradation indicator of the system, and the use of the residual lifetime prediction for maintenance decisions. The advantage of these models is that they have regularity properties which make probability calculations and RUL estimation easy. In order to test the performance of our models, a comparative study is carried out on the data of the 2008 IEEE Prognostics and Health Management (PHM) data challenge.
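For a concrete picture of RUL computation from a degradation indicator, here is a first-passage Monte Carlo sketch for a drifted Wiener degradation process, assuming NumPy; the model and all thresholds are illustrative assumptions, not the specific processes studied in the thesis:

```python
import numpy as np

rng = np.random.default_rng(9)

# Degradation indicator Z_t = z0 + mu*t + sigma*W_t; failure when Z_t >= L.
mu, sigma, L, z0 = 0.5, 0.3, 10.0, 4.0      # illustrative parameters
dt, n, horizon = 0.02, 5000, 60.0
z = np.full(n, z0)
hit = np.full(n, np.nan)                    # first-passage (failure) times
for k in range(int(horizon / dt)):
    z += mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
    newly = (z >= L) & np.isnan(hit)
    hit[newly] = (k + 1) * dt
print(np.nanmean(hit))   # mean RUL estimate, ~ (L - z0)/mu = 12 here
```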
Elvira, Clément. "Modèles bayésiens pour l’identification de représentations antiparcimonieuses et l’analyse en composantes principales bayésienne non paramétrique." Thesis, Ecole centrale de Lille, 2017. http://www.theses.fr/2017ECLI0016/document.
This thesis proposes Bayesian parametric and nonparametric models for signal representation. The first model infers a higher-dimensional representation of a signal for the sake of robustness by enforcing the information to be spread uniformly. These so-called anti-sparse representations are obtained by solving a linear inverse problem with an infinite-norm penalty. We propose in this thesis a Bayesian formulation of anti-sparse coding involving a new probability distribution, referred to as the democratic prior. A Gibbs sampler and two proximal samplers are proposed to approximate Bayesian estimators. The algorithm is called BAC-1. Simulations on synthetic data illustrate the performance of the proposed samplers, and the results are compared with state-of-the-art methods. The second model identifies a lower-dimensional representation of a signal for modelling and model selection. Principal component analysis is very popular for dimension reduction. The selection of the number of significant components is essential but often based on practical heuristics depending on the application. Few works have proposed a probabilistic approach to infer the number of significant components. We propose a Bayesian nonparametric principal component analysis called BNP-PCA. The proposed model involves an Indian buffet process to promote a parsimonious use of principal components, which is assigned a prior distribution defined on the manifold of orthonormal bases. Inference is done using MCMC methods. The estimators of the latent dimension are studied theoretically and empirically. The relevance of the approach is assessed on two applications.
Pastel, Rudy. "Estimation de probabilités d'évènements rares et de quantiles extrêmes : applications dans le domaine aérospatial." Phd thesis, Université Européenne de Bretagne, 2012. http://tel.archives-ouvertes.fr/tel-00728108.
Full textLe, Corff Sylvain. "Estimations pour les modèles de Markov cachés et approximations particulaires : Application à la cartographie et à la localisation simultanées." Electronic Thesis or Diss., Paris, ENST, 2012. http://www.theses.fr/2012ENST0052.
This document is dedicated to inference problems in hidden Markov models. The first part is devoted to an online maximum likelihood estimation procedure which does not store the observations. We propose a new Expectation Maximization based method, the Block Online Expectation Maximization (BOEM) algorithm. This algorithm solves the online estimation problem for general hidden Markov models. In complex situations, it requires the introduction of Sequential Monte Carlo methods to approximate several expectations under the fixed-interval smoothing distributions. The convergence of the algorithm is shown under the assumption that the Lp mean error due to the Monte Carlo approximation can be controlled explicitly in the number of observations and in the number of particles. Therefore, a second part of the document establishes such controls for several Sequential Monte Carlo algorithms. The BOEM algorithm is then used to solve the simultaneous localization and mapping problem in different frameworks. Finally, the last part of this thesis is dedicated to nonparametric estimation in hidden Markov models. It is assumed that the Markov chain (Xk) is a random walk lying in a compact set, with increment distribution known up to a scaling factor a. At each time step k, Yk is a noisy observation of f(Xk), where f is an unknown function. We establish the identifiability of the statistical model and we propose estimators of f and a based on the pairwise likelihood of the observations.
Full textKebaier, Ahmed. "Réduction de variance et discrétisation d'équations différentielles stochastiques.Théorèmes limites presque sûre pour les martingales quasi-continues à gauche." Phd thesis, Université de Marne la Vallée, 2005. http://tel.archives-ouvertes.fr/tel-00011947.
The first part consists of three chapters. The first chapter introduces the framework of the study and presents the results obtained. The second chapter is devoted to the study of a new convergence acceleration method, called the statistical Romberg method, for computing expectations of functions or functionals of a diffusion.
This chapter is an extended version of an article to appear in the Annals of Applied Probability.
The third chapter deals with the application of this method to density approximation by kernel methods.
This chapter is based on joint work with Arturo Kohatsu-Higa.
The second part of the thesis consists of two chapters: the first presents the recent literature on the almost sure central limit theorem and its extensions. The second chapter, based on joint work with Faouzi Chaâbane, extends various results of this type to quasi-left-continuous martingales.
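The statistical Romberg method combines a coarse Euler discretization on many paths with a coupled fine-coarse correction on few paths. A minimal sketch for a call-option expectation under geometric Brownian motion, assuming NumPy; all numerical parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Euler scheme for dS = r S dt + sigma S dW (illustrative parameters).
def euler_paths(n_paths, n_steps, S0=100.0, r=0.05, sig=0.2, T=1.0, dW=None):
    dt = T / n_steps
    if dW is None:
        dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    S = np.full(n_paths, S0)
    for k in range(n_steps):
        S = S + r * S * dt + sig * S * dW[:, k]
    return S

f = lambda s: np.maximum(s - 100.0, 0.0)        # call payoff

# Coarse level on many paths, fine-minus-coarse correction on few coupled paths.
coarse = f(euler_paths(100000, 10)).mean()
dWf = rng.normal(0.0, np.sqrt(1.0 / 100), (10000, 100))
dWc = dWf.reshape(10000, 10, 10).sum(axis=2)    # same Brownian increments, coarser grid
corr = (f(euler_paths(10000, 100, dW=dWf))
        - f(euler_paths(10000, 10, dW=dWc))).mean()
print(coarse + corr)                             # two-level estimate of E[f(S_T)]
```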
Rottner, Lucie. "Reconstruction de l'atmosphère turbulente à partir d'un lidar doppler 3D et étude du couplage avec Meso-NH." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30373/document.
Our work aims to improve the detection and forecasting of turbulent phenomena in the atmospheric boundary layer. First, we suggest a new stochastic method to locally reconstruct the turbulent atmosphere. Particle systems are used to model the atmospheric flow and its internal variability. To update the particles and learn the turbulent parameters, 3D Doppler lidar measurements are used. Then, a new stochastic downscaling technique for sub-grid turbulence forecasting is presented. From the grid point model Meso-NH, a sub-grid particle system is forced. Here, the particles evolve freely in the simulated domain. Our downscaling method makes it possible to model sub-grid fields coherent with the grid point model. Next, we introduce the upscaling issue. The atmosphere reconstruction covers at best a few cells of meteorological grid point models. The issue is to assimilate the reconstructed atmosphere into such models. Using the back and forth nudging algorithm, we explore the problems induced by the size of the observed domain. Finally, we suggest a new way to use the back and forth nudging algorithm for parameter identification.
Blanchet, David. "Développements méthodologiques et qualification de schémas de calcul pour la modélisation des échauffements photoniques dans les dispositifs expérimentaux du futur réacteur d'irradiation technologique Jules Horowitz (RJH)." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2006. http://tel.archives-ouvertes.fr/tel-00689995.
Lin, Chao. "P and T wave analysis in ECG signals using Bayesian methods." Phd thesis, Toulouse, INPT, 2012. http://oatao.univ-toulouse.fr/8990/1/lin.pdf.
Full textFarges, Olivier. "Conception optimale de centrales solaires à concentration : application aux centrales à tour et aux installations "beam down"." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2014. http://www.theses.fr/2014EMAC0006/document.
Since the early 1940s, world energy consumption has grown steadily. As this energy has mainly come from fossil fuels, its use has induced an increase in temperatures. It has become urgent to reduce greenhouse gas emissions to halt climate change. In this context, the development of concentrated solar power (CSP) is a promising solution. The scientific community involved in this topic has to focus on enhancing the efficiency and economic competitiveness of CSP technologies. To this end, this thesis aims at providing an optimal design method for central receiver power plants. It takes advantage of methods developed over many years by the research group StaRWest, in which researchers from RAPSODEE (Albi), LAPLACE (Toulouse) and PROMES (Odeillo) take an active part. Coupling high-performance Monte Carlo algorithms and stochastic optimization methods, the code we developed allows the optimal design of concentrated solar systems. This code is used to highlight the potential of an uncommon type of central receiver plant: reflective towers, also called "beam down" central receiver systems.
Alata, Olivier. "Contributions à la description de signaux, d'images et de volumes par l'approche probabiliste et statistique." Habilitation à diriger des recherches, Université de Poitiers, 2010. http://tel.archives-ouvertes.fr/tel-00573224.
Wang, Tairan. "Decision making and modelling uncertainty for the multi-criteria analysis of complex energy systems." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2015. http://www.theses.fr/2015ECAP0036/document.
This Ph.D. work addresses the vulnerability analysis of safety-critical systems (e.g., nuclear power plants) within a framework that combines the disciplines of risk analysis and multi-criteria decision-making. The scientific contribution follows four directions: (i) a quantitative hierarchical model is developed to characterize the susceptibility of safety-critical systems to multiple types of hazard, within the needed 'all-hazard' view of the problem currently emerging in the risk analysis field; (ii) the quantitative assessment of vulnerability is tackled by an empirical classification framework: to this aim, a model relying on the Majority Rule Sorting (MR-Sort) method, typically used in the decision analysis field, is built on the basis of a (limited-size) set of data representing (a priori known) vulnerability classification examples; (iii) three different approaches (namely, a model-retrieval-based method, the bootstrap method and the leave-one-out cross-validation technique) are developed and applied to provide a quantitative assessment of the performance of the classification model (in terms of accuracy and confidence in the assignments), accounting for the uncertainty introduced into the analysis by the empirical construction of the vulnerability model; (iv) on the basis of the models developed, an inverse classification problem is solved to identify a set of protective actions which effectively reduce the level of vulnerability of the critical system under consideration. Two approaches are developed to this aim: the former is based on a novel sensitivity indicator, the latter on optimization. Applications on fictitious and real case studies in the nuclear power plant risk field demonstrate the effectiveness of the proposed methodology.
Marhaba, Bassel. "Restauration d'images satellitaires par des techniques de filtrage statistique non linéaire." Thesis, Littoral, 2018. http://www.theses.fr/2018DUNK0502/document.
Satellite image processing is considered one of the most interesting areas in the field of digital image processing. Satellite images are subject to degradation for several reasons: satellite movement, weather, scattering, and other factors. Several methods for satellite image enhancement and restoration have been studied and developed in the literature. The work presented in this thesis focuses on satellite image restoration by nonlinear statistical filtering techniques. In the first step, we propose a novel method to restore satellite images using a combination of blind and non-blind restoration techniques. The reason for this combination is to exploit the advantages of each technique used. In the second step, novel statistical image restoration algorithms based on nonlinear filters and nonparametric multivariate density estimation are proposed. The nonparametric multivariate estimation of the posterior density is used in the resampling step of the Bayesian bootstrap filter to resolve the problem of loss of diversity among the particles. Finally, we introduce a new hybrid combination method for image restoration based on the discrete wavelet transform (DWT) and the algorithms proposed in step two, and we show that the performance of the combined method is better than that of the DWT approach in reducing noise in degraded satellite images.
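The Bayesian bootstrap filter mentioned here works as below on a linear-Gaussian toy model, assuming NumPy; the AR(1) state and unit noise variances are illustrative, and the plain multinomial resampling shown is the step the thesis replaces with a kernel-density-based resampling:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model: x_k = 0.9 x_{k-1} + v_k,  y_k = x_k + w_k,  v, w ~ N(0, 1).
T, n = 50, 1000
x_true = np.zeros(T); y = np.zeros(T)
for k in range(1, T):
    x_true[k] = 0.9 * x_true[k - 1] + rng.normal()
    y[k] = x_true[k] + rng.normal()

particles = rng.normal(size=n)
est = np.zeros(T)
for k in range(1, T):
    particles = 0.9 * particles + rng.normal(size=n)   # propagate (prior kernel)
    logw = -0.5 * (y[k] - particles)**2                # Gaussian likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    est[k] = np.sum(w * particles)                     # filtered mean
    particles = particles[rng.choice(n, size=n, p=w)]  # multinomial resampling
```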
Ben Abdellah, Amal. "Randomized Quasi-Monte Carlo Methods for Density Estimation and Simulation of Markov Chains." Thesis, 2021. http://hdl.handle.net/1866/25579.
The randomized quasi-Monte Carlo (RQMC) method is often used to estimate an integral over the s-dimensional unit cube (0,1)^s. This integral is interpreted as the mathematical expectation of some random variable X. It is well known that RQMC estimators can, under some conditions, converge at a faster rate than crude Monte Carlo estimators of the integral. For the simulation of Markov chains over a large number of steps using RQMC, little exists. The most promising approach proposed to date is the array-RQMC method. This method simulates n copies of the chain in parallel using a set of independent RQMC points at each step, and sorts the chains using a specific sorting function after each step. It has given empirically significant results in terms of convergence rates on a few examples (i.e., a much better convergence rate than that observed with standard Monte Carlo). However, the convergence rates observed empirically have not yet been proven theoretically. In the first part of this thesis, we examine how RQMC can improve the convergence rate when estimating not only X's expectation, but also its distribution. In the second part, we examine how RQMC can be used for the simulation of Markov chains over a large number of steps using the array-RQMC method. Our thesis contains four articles. In the first article, we study the effectiveness of replacing Monte Carlo (MC) by either RQMC or stratification to show how they can be applied to make samples more representative. Furthermore, we show how these methods can help to reduce the integrated variance (IV) and the mean integrated square error (MISE) of kernel density estimators (KDEs). We provide both theoretical and empirical results on the convergence rates and show that the RQMC and stratified sampling estimators can achieve significant IV and MISE reductions, with even faster convergence rates than MC in some situations, while leaving the bias unchanged. In the second article, we examine the combination of RQMC with a conditional Monte Carlo approach to density estimation. This approach is defined by taking the stochastic derivative of a conditional CDF of X and provides a large improvement when applied. Using array-RQMC to price an Asian option under an ordinary geometric Brownian motion process with fixed volatility has already been attempted in the past, and a convergence rate of O(n⁻²) was observed for the variance. In the third article, we study the pricing of Asian options when the underlying process has stochastic volatility. More specifically, we examine the variance-gamma, Heston, and Ornstein-Uhlenbeck stochastic volatility models. We show how applying the array-RQMC method for pricing Asian and European options can significantly reduce the variance. An efficient sample path algorithm called (fixed-step) τ-leaping can be used to simulate stochastic biological systems as well as well-stirred chemical reaction systems. The crude Monte Carlo (MC) method is a feasible approach for simulating these sample paths. Simulating the Markov chain for fixed-step τ-leaping via ordinary RQMC has already been explored empirically; as the dimension of the problem increased, the convergence rate of the variance realigned with that observed in several numerical experiments using MC.
In the last article, we study the combination of array-RQMC with this algorithm and empirically demonstrate that array-RQMC provides a significant reduction in the variance compared to the standard MC algorithm.
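A minimal one-dimensional array-RQMC sketch, assuming NumPy and SciPy; the AR(1) chain, the randomly shifted lattice and the sort-by-state matching are illustrative choices in the spirit of the method:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Array-RQMC for a 1-D chain x_k = 0.9 x_{k-1} + N(0,1), advanced by
# inverse-CDF sampling of the Gaussian innovation.
n, T = 2**10, 50
x = np.zeros(n)
for k in range(T):
    x = np.sort(x)                          # match chains to points by state order
    u = (np.arange(n) + rng.random()) / n   # randomly shifted lattice in (0,1)
    x = 0.9 * x + norm.ppf(u)               # advance all chains with RQMC points
print(x.mean())                             # estimate of E[X_T]
```

Repeating the loop over independent random shifts and taking the empirical variance of `x.mean()` across repetitions exhibits the variance reduction relative to n independent chains.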
Jouini, Tarek. "Inférence exacte simulée et techniques d'estimation dans les modèles VAR et VARMA avec applications macroéconomiques." Thèse, 2008. http://hdl.handle.net/1866/2236.
Full textKebaier, Ahmed. "Réduction de variance et discrétisation d'équations différentielles stochastiques. Théorèmes limites presque sûre pour les martingales quasi-continues à gauche." Phd thesis, 2005. http://tel.archives-ouvertes.fr/tel-00011946/en/.
Full textLa première Partie est composée de trois chapitres: Le premier chapitre introduit le cadre de l'étude et présente les résultats obtenus. Le deuxième chapitre est consacré à l'étude d'une nouvelle méthode d'accélération de convergence, appelée méthode de Romberg statistique, pour le calcul d'espérances de fonctions ou de fonctionnelles d'une diffusion.
Ce chapitre est la version augmentée d'un article à paraître dans la revue Annals of Applied Probability.
Le troisième chapitre traite de l'application de cette méthode à l'approximation de densité par des méthodes de noyaux.
Ce chapitre est basé sur un travail en collaboration avec Arturo Kohatsu-Higa.
La deuxième partie de la thèse est composée de deux chapitres: le premier chapitre présente la littérature récente concernant le théorème de la limite centrale presque sûre et ses extensions. Le deuxième chapitre, basé sur un travail en collaboration avec Faouzi Chaâbane, étend divers résultats de type TLCPS à des martingales quasi-continues à gauche.
Dong, Jia. "Estimation du taux d'erreurs binaires pour n'importe quel système de communication numérique." Phd thesis, 2013. http://tel.archives-ouvertes.fr/tel-00978950.