
Dissertations / Theses on the topic 'Monte Carlo sampling and estimation'

Consult the top 50 dissertations / theses for your research on the topic 'Monte Carlo sampling and estimation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Wen-shiang. "Bayesian estimation by sequential Monte Carlo sampling for nonlinear dynamic systems." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1086146309.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xiv, 117 p. : ill. (some col.). Advisors: Bhavik R. Bakshi and Prem K. Goel, Department of Chemical Engineering. Includes bibliographical references (p. 114-117).
2

Gilabert, Navarro Joan Francesc. "Estimation of binding free energies with Monte Carlo atomistic simulations and enhanced sampling." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669282.

Abstract:
The advances in computing power have motivated the hope that computational methods can accelerate the pace of drug discovery pipelines. For this, fast, reliable and user-friendly tools are required. One of the fields that has received the most attention is the prediction of binding affinities. Two main problems have been identified for such methods: insufficient sampling and inaccurate models. This thesis is focused on tackling the first problem. To this end, we present the development of efficient methods for the estimation of protein-ligand binding free energies. We have developed a protocol that combines enhanced sampling with more standard simulation methods to achieve higher efficiency. First, we run an exploratory enhanced sampling simulation, starting from the bound conformation and partially biased towards unbound poses. We then leverage the information gained from this short simulation to run longer, unbiased simulations to collect statistics. Thanks to the modularity and automation that the protocol offers, we were able to test three different methods for the long simulations: PELE, molecular dynamics and AdaptivePELE. PELE and molecular dynamics showed similar results, although PELE used fewer computational resources. Both seemed to work well with small protein-fragment systems or proteins whose binding sites are not very flexible. Both failed to accurately reproduce the binding of a kinase, the Mitogen-activated protein kinase 1 (ERK2). On the other hand, AdaptivePELE did not show a great improvement over PELE, with positive results for the Urokinase-type plasminogen activator (URO) and a clear lack of sampling for the Progesterone receptor (PR). We demonstrated the importance of a well-designed suite of test systems for the development of new methods. Through the use of a diverse benchmark of protein systems we have established the cases in which the protocol is expected to give accurate results, and which areas require further development. This benchmark consisted of four proteins and over 30 ligands, much larger than the test systems typically used in the development of pathway-based free energy methods. In summary, the methodology developed in this work can contribute to the drug discovery process for a limited range of protein systems. For many others, we have observed that regular unbiased simulations are not efficient enough and more sophisticated enhanced sampling methods are required.
3

Xu, Nuo. "A Monte Carlo Study of Single Imputation in Survey Sampling." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-30541.

Abstract:
Missing values in sample surveys can lead to biased estimation if not treated. Imputation is a popular way to deal with missing values. In this paper, based on Särndal (1994, 2005)'s research, a Monte Carlo simulation is conducted to study how the estimators perform in different situations and how different imputation methods behave under different response distributions.
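The kind of simulation study described above can be sketched as follows; this is a minimal illustration (not the thesis code), where the population model, the MCAR response mechanism, the response rate and the choice of mean versus ratio imputation are all assumptions made for the example:

```python
# Minimal Monte Carlo study of single imputation (illustrative assumptions:
# gamma auxiliary variable, linear superpopulation model, MCAR nonresponse).
import numpy as np

rng = np.random.default_rng(0)
N, n, reps, resp_rate = 10_000, 200, 1_000, 0.7
x = rng.gamma(2.0, 2.0, N)                     # auxiliary variable, known for all units
y = 3.0 + 1.5 * x + rng.normal(0.0, 2.0, N)    # study variable
true_mean = y.mean()

est = {"mean imputation": [], "ratio imputation": []}
for _ in range(reps):
    s = rng.choice(N, n, replace=False)        # simple random sample
    respond = rng.random(n) < resp_rate        # MCAR response indicator
    ys, xs = y[s], x[s]

    y_fill = ys.copy()                         # mean imputation: respondent mean
    y_fill[~respond] = ys[respond].mean()
    est["mean imputation"].append(y_fill.mean())

    B = ys[respond].sum() / xs[respond].sum()  # ratio imputation: missing y := B * x
    y_fill = ys.copy()
    y_fill[~respond] = B * xs[~respond]
    est["ratio imputation"].append(y_fill.mean())

for name, vals in est.items():
    vals = np.asarray(vals)
    print(f"{name}: bias={vals.mean() - true_mean:+.4f}, MC s.e.={vals.std(ddof=1):.4f}")
```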
4

Ptáček, Martin. "Spatial Function Estimation with Uncertain Sensor Locations." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-449288.

Abstract:
This thesis addresses the task of spatial function estimation by Gaussian process regression (GPR) under uncertainty in the training positions (sensor locations). First, the theory behind the GPR method with known training positions is described. This theory is then applied to derive expressions for the GPR predictive distribution at a test position that account for the uncertainty of the training positions. Because these expressions admit no analytical solution, they were approximated by the Monte Carlo method. The derived method is shown to improve the quality of the spatial function estimate compared with the standard use of GPR and also with a simplified solution reported in the literature. The thesis then examines the possibility of using GPR with uncertain training positions in combination with expressions that do have an analytical solution. It turns out that substantial assumptions must be introduced to reach such expressions, which makes the predictive distribution inaccurate from the outset. It also turns out that the resulting method amounts to the standard GPR expressions combined with a modified covariance function. Simulations show that this method produces estimates very similar to those of the baseline GPR method that assumes known training positions. On the other hand, its predictive variance (estimation uncertainty) is increased, which is the desired effect of accounting for the uncertainty of the training positions.
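The Monte Carlo approximation described in the abstract can be sketched as follows (a minimal illustration, not the thesis code; the one-dimensional field, RBF kernel, noise levels and Gaussian position uncertainty are assumptions): the GPR predictive density is averaged over draws of the uncertain sensor positions, and the mixture moments follow from the law of total variance.

```python
# GPR prediction with uncertain sensor positions via Monte Carlo averaging.
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ell=0.5, sf=1.0):
    # squared-exponential kernel for 1-D inputs
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gpr_predict(xtr, ytr, xte, noise=0.05):
    K = rbf(xtr, xtr) + noise**2 * np.eye(len(xtr))
    ks = rbf(xtr, xte)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    v = np.linalg.solve(L, ks)
    mean = ks.T @ alpha
    var = rbf(xte, xte).diagonal() - (v**2).sum(axis=0)
    return mean, var

f = lambda x: np.sin(3 * x)                 # unknown spatial function (toy)
x_nom = np.linspace(0, 2, 8)                # nominal sensor positions
y_obs = f(x_nom) + 0.05 * rng.normal(size=8)
x_test = np.array([0.9])
sigma_pos = 0.1                             # sensor-position uncertainty

means, variances = [], []
for _ in range(500):                        # Monte Carlo over sensor positions
    x_draw = x_nom + sigma_pos * rng.normal(size=x_nom.size)
    m, v = gpr_predict(x_draw, y_obs, x_test)
    means.append(m[0]); variances.append(v[0])

means, variances = np.array(means), np.array(variances)
mc_mean = means.mean()
mc_var = variances.mean() + means.var()     # law of total variance
print(f"predictive mean {mc_mean:.3f}, predictive var {mc_var:.3f}")
```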
5

Diop, Khadim. "Estimation de la fiabilité d'un palier fluide." Thesis, Angers, 2015. http://www.theses.fr/2015ANGE0029/document.

Abstract:
This research is a contribution to the development of reliability theory in fluid mechanics. In the design of machines and complex mechatronic systems, many fluid components, which are difficult to size, are used. These components have sensitive static and dynamic intrinsic characteristics and thus have a great influence on the reliability and lifetime of most machines and systems. The work focuses specifically on the reliability evaluation of a fluid bearing using a "fluid mechanics - reliability" coupling approach. This coupling requires a proper definition of the limit state function for estimating the failure probability of a fluid bearing. Modeling with the modified Reynolds equation makes it possible to determine the load capacity of a fluid bearing as a function of the operating conditions. Several simple fluid bearing geometries were modeled analytically and their failure probabilities were estimated using the FORM/SORM approximation methods (First Order Reliability Method, Second Order Reliability Method) and Monte Carlo simulation.
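The Monte Carlo part of such a reliability analysis can be sketched in a few lines (the capacity and load distributions below are invented for illustration and are not the thesis's bearing model): the failure probability is the fraction of samples for which the limit state g = capacity − load is negative.

```python
# Crude Monte Carlo estimate of a failure probability for a limit state
# g = capacity - load (both distributions are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
capacity = rng.lognormal(mean=np.log(10.0), sigma=0.15, size=n)  # kN (assumed)
load = rng.normal(loc=6.0, scale=1.0, size=n)                    # kN (assumed)

g = capacity - load                  # failure when g < 0
pf = np.mean(g < 0.0)                # Monte Carlo failure probability
se = np.sqrt(pf * (1.0 - pf) / n)    # binomial standard error
print(f"P_f ≈ {pf:.2e} ± {se:.1e}")
```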
6

Greer, Brandi A. Young Dean M. "Bayesian and pseudo-likelihood interval estimation for comparing two Poisson rate parameters using under-reported data." Waco, Tex. : Baylor University, 2008. http://hdl.handle.net/2104/5283.

7

Thajeel, Jawad. "Kriging-based Approaches for the Probabilistic Analysis of Strip Footings Resting on Spatially Varying Soils." Thesis, Nantes, 2017. http://www.theses.fr/2017NANT4111/document.

Abstract:
The probabilistic analysis of geotechnical structures involving spatially varying soil properties is generally performed using the Monte Carlo Simulation methodology. This method is not suitable for the computation of the small failure probabilities encountered in practice because it becomes very time-expensive in such cases due to the large number of simulations required to calculate accurate values of the failure probability. Three probabilistic approaches (named AK-MCS, AK-IS and AK-SS), based on active learning and combining kriging with one of three simulation techniques (i.e. Monte Carlo Simulation MCS, Importance Sampling IS or Subset Simulation SS), were developed. Within AK-MCS, a Monte Carlo simulation is performed without evaluating the whole population: the population is predicted using a kriging meta-model which is defined using only a few points of the population, thus significantly reducing the computation time with respect to crude MCS. In AK-IS, a more efficient sampling technique, IS, is used instead of MCS. In the framework of this approach, the small failure probability is estimated with an accuracy similar to that of AK-MCS but using a much smaller initial population, thus significantly reducing the computation time. Finally, in AK-SS, a more efficient sampling technique, SS, is proposed. This technique avoids the search for design points and can therefore deal with arbitrary shapes of the limit state surfaces. All three methods were applied to the case of a vertically loaded strip footing resting on a spatially varying soil. The obtained results are presented and discussed.
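The AK-MCS idea can be sketched as follows; this is an assumption-level toy reimplementation (not the thesis code), with an invented two-dimensional limit state g, an RBF kernel, and the common U ≥ 2 stopping rule:

```python
# AK-MCS sketch: a kriging surrogate is trained on a few limit-state
# evaluations and enriched at the Monte Carlo point with the lowest
# learning-function value U until classification of the population is safe.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):                                   # illustrative limit state (g < 0: failure)
    return 3.0 - x[:, 0] - 0.5 * x[:, 1] ** 2

X_mc = rng.normal(size=(20_000, 2))         # Monte Carlo population
idx = rng.choice(len(X_mc), 12, replace=False)
X_doe, y_doe = X_mc[idx], g(X_mc[idx])      # initial design of experiments

for it in range(50):
    gp = GaussianProcessRegressor(kernel=1.0 * RBF(1.0), normalize_y=True)
    gp.fit(X_doe, y_doe)
    mu, sd = gp.predict(X_mc, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)  # learning function
    if U.min() >= 2.0:                      # convergence criterion
        break
    best = U.argmin()                       # most ambiguous point
    X_doe = np.vstack([X_doe, X_mc[best]])
    y_doe = np.append(y_doe, g(X_mc[best:best + 1]))

pf = np.mean(mu < 0.0)
print(f"AK-MCS P_f ≈ {pf:.4f} after {len(y_doe)} calls to g")
```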
8

Wu, Yi-Fang. "Accuracy and variability of item parameter estimates from marginal maximum a posteriori estimation and Bayesian inference via Gibbs samplers." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/5879.

Abstract:
Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and variability of the item parameter estimates from the marginal maximum a posteriori estimation via an expectation-maximization algorithm (MMAP/EM) and the Markov chain Monte Carlo Gibbs sampling (MCMC/GS) approach. In the study, the various factors which have an impact on the accuracy and variability of the item parameter estimates are discussed, and then further evaluated through a large-scale simulation. The factors of interest include the composition and length of tests, the distribution of underlying latent traits, the size of samples, and the prior distributions of discrimination, difficulty, and pseudo-guessing parameters. The results of the two estimation methods are compared to determine the lower limit--in terms of test length, sample size, test characteristics, and prior distributions of item parameters--at which the methods can satisfactorily recover item parameters and function efficiently in practice. For practitioners, the results help to define limits on the appropriate use of BILOG-MG (which implements MMAP/EM) and also assist in deciding the utility of OpenBUGS (which carries out MCMC/GS) for item parameter estimation in practice.
9

Argouarc'h, Elouan. "Contributions to posterior learning for likelihood-free Bayesian inference." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAS021.

Abstract:
Bayesian posterior inference is used in many scientific applications and is a prevalent methodology for decision-making under uncertainty. It enables practitioners to confront real-world observations with relevant observation models, and in turn, infer the distribution over an explanatory variable. In many fields and practical applications, we consider ever more intricate observation models for their otherwise scientific relevance, but at the cost of intractable probability density functions. As a result, both the likelihood and the posterior are unavailable, making posterior inference using the usual Monte Carlo methods unfeasible. In this thesis, we suppose that the observation model provides a recorded dataset, and our aim is to bring together Bayesian inference and statistical learning methods to perform posterior inference in a likelihood-free setting. This problem, formulated as learning an approximation of a posterior distribution, includes the usual statistical learning tasks of regression and classification modeling, but it can also be an alternative to Approximate Bayesian Computation methods in the context of simulation-based inference, where the observation model is instead a simulation model with implicit density. The aim of this thesis is to propose methodological contributions for Bayesian posterior learning. More precisely, our main goal is to compare different learning methods under the scope of Monte Carlo sampling and uncertainty quantification. We first consider the posterior approximation based on the likelihood-to-evidence ratio, which has the main advantage that it turns a problem of conditional density learning into a problem of binary classification. In the context of Monte Carlo sampling, we propose a methodology for sampling from such a posterior approximation. We leverage the structure of the underlying model, which is conveniently compatible with the usual ratio-based sampling algorithms, to obtain straightforward, parameter-free, and density-free sampling procedures. We then turn to the problem of uncertainty quantification. On the one hand, normalized models such as the discriminative construction are easy to apply in the context of Bayesian uncertainty quantification. On the other hand, while unnormalized models, such as the likelihood-to-evidence-ratio, are not easily applied in uncertainty-aware learning tasks, a specific unnormalized construction, which we refer to as generative, is indeed compatible with Bayesian uncertainty quantification via the posterior predictive distribution. In this context, we explain how to carry out uncertainty quantification in both modeling techniques, and we then propose a comparison of the two constructions under the scope of Bayesian learning. We finally turn to the problem of parametric modeling with tractable density, which is indeed a requirement for epistemic uncertainty quantification in generative and discriminative modeling methods. We propose a new construction of a parametric model, which is an extension of both mixture models and normalizing flows. This model can be applied to many different types of statistical problems, such as variational inference, density estimation, and conditional density estimation, as it benefits from rapid and exact density evaluation, a straightforward sampling scheme, and a gradient reparameterization approach.
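The likelihood-to-evidence-ratio construction mentioned above can be sketched on a toy conjugate Gaussian model (our illustration, not the thesis experiments): a classifier is trained to separate joint pairs (θ, x) from pairs with the pairing broken, and its odds then estimate p(x|θ)/p(x), so posterior learning reduces to binary classification.

```python
# Likelihood-to-evidence ratio via binary classification (toy Gaussian model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000
theta = rng.normal(0, 1, n)              # prior draws
x = theta + rng.normal(0, 1, n)          # simulator: x | theta ~ N(theta, 1)
theta_marg = rng.permutation(theta)      # break the pairing -> marginal class

def feats(t, x):                         # quadratic features make the toy
    return np.column_stack([t, x, t * x, t**2, x**2])  # log-ratio exact in form

X = np.vstack([feats(theta, x), feats(theta_marg, x)])
y = np.r_[np.ones(n), np.zeros(n)]       # 1 = joint, 0 = product of marginals
clf = LogisticRegression(max_iter=1000).fit(X, y)

# posterior density at observed x_obs, up to a constant:
# p(theta | x_obs) ∝ prior(theta) * ratio(theta, x_obs)
x_obs = 1.5
grid = np.linspace(-3, 4, 201)
logit = clf.decision_function(feats(grid, np.full_like(grid, x_obs)))
ratio = np.exp(logit)                    # likelihood-to-evidence ratio
post = np.exp(-0.5 * grid**2) * ratio
post /= post.sum()
print("posterior mean ≈", float(grid @ post), "(exact for this toy: x_obs/2 = 0.75)")
```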
10

Kastner, Gregor, and Sylvia Frühwirth-Schnatter. "Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Estimation of Stochastic Volatility Models." WU Vienna University of Economics and Business, 2013. http://epub.wu.ac.at/3771/1/paper.pdf.

Abstract:
Bayesian inference for stochastic volatility models using MCMC methods highly depends on actual parameter values in terms of sampling efficiency. While draws from the posterior utilizing the standard centered parameterization break down when the volatility of volatility parameter in the latent state equation is small, non-centered versions of the model show deficiencies for highly persistent latent variable series. The novel approach of ancillarity-sufficiency interweaving has recently been shown to aid in overcoming these issues for a broad class of multilevel models. In this paper, we demonstrate how such an interweaving strategy can be applied to stochastic volatility models in order to greatly improve sampling efficiency for all parameters and throughout the entire parameter range. Moreover, this method of "combining best of different worlds" allows for inference for parameter constellations that have previously been infeasible to estimate without the need to select a particular parameterization beforehand.
Series: Research Report Series / Department of Statistics and Mathematics
11

Chéhab, L'Émir Omar. "Advances in Self-Supervised Learning : applications to neuroscience and sample-efficiency." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG079.

Abstract:
Self-supervised learning has gained popularity as a method for learning from unlabeled data. Essentially, it involves creating and then solving a prediction task using the data, such as reordering shuffled data. In recent years, this approach has been successful in training neural networks to learn useful representations from data, without any labels. However, our understanding of what is actually being learned and how well it is learned is still somewhat limited. This document contributes to our understanding of self-supervised learning in these two key aspects. Empirically, we address the question of what is learned. We design prediction tasks specifically tailored to learning from brain recordings with magnetoencephalography (MEG) or electroencephalography (EEG). These prediction tasks share a common objective: recognizing temporal structure within the brain data. Our results show that representations learnt by solving these tasks contain interpretable cognitive and clinical neurophysiological features. Theoretically, we explore the quality of the learning procedure. Our focus is on a specific category of prediction tasks: binary classification. We extend prior research that has highlighted the utility of binary classification for statistical inference, though it may involve trading off some measure of statistical efficiency for another measure of computational efficiency. Our contributions aim to improve statistical efficiency. We theoretically analyze the statistical estimation error and find situations when it can be provably reduced. Specifically, we characterize optimal hyperparameters of the binary classification task and also prove that the popular heuristic of "annealing" can lead to more efficient estimation, even in high dimensions.
12

Vo, Brenda. "Novel likelihood-free Bayesian parameter estimation methods for stochastic models of collective cell spreading." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/99588/1/Brenda_Vo_Thesis.pdf.

Abstract:
Biological processes underlying skin cancer growth and wound healing are governed by various collective cell spreading mechanisms. This thesis develops new statistical methods to provide key insights into the mechanisms driving the spread of cell populations such as motility, proliferation and cell-to-cell adhesion, using experimental data. The new methods allow us to precisely estimate the parameters of such mechanisms, quantify the associated uncertainty and investigate how these mechanisms are influenced by various factors. The thesis provides a useful tool to measure the efficacy of medical treatments that aim to influence the spread of cell populations.
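A minimal sketch of one likelihood-free method of this kind, ABC rejection, with a toy stochastic growth process standing in for the thesis's cell-spreading simulators (the simulator, uniform prior, summary statistics and tolerance are all illustrative assumptions):

```python
# ABC rejection: keep parameter draws whose simulated summaries fall close
# to the observed summaries (toy logistic birth process, not the thesis model).
import numpy as np

rng = np.random.default_rng(7)

def simulate(r, n0=10, K=1000, T=24):
    """Stochastic birth process with logistic crowding (toy simulator)."""
    n, counts = n0, []
    for _ in range(T):
        p = min(1.0, max(0.0, r * (1 - n / K)))  # per-capita birth probability
        n += rng.binomial(n, p)
        counts.append(n)
    return np.array(counts)

obs = simulate(r=0.35)                              # pretend experimental data
summary = lambda c: np.array([c[-1], c[len(c) // 2]])  # final & midpoint counts
s_obs = summary(obs)

draws, eps, accepted = 20_000, 50.0, []
for _ in range(draws):
    r = rng.uniform(0.0, 1.0)                       # prior on proliferation rate
    if np.linalg.norm(summary(simulate(r)) - s_obs) < eps:
        accepted.append(r)

accepted = np.array(accepted)
print(f"ABC posterior: mean {accepted.mean():.3f}, "
      f"95% interval ({np.quantile(accepted, .025):.3f}, "
      f"{np.quantile(accepted, .975):.3f}), n={accepted.size}")
```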
13

Sebastian, Shalin. "Empirical evaluation of Monte Carlo sampling /." Available to subscribers only, 2005. http://proquest.umi.com/pqdweb?did=1075709431&sid=9&Fmt=2&clientId=1509&RQT=309&VName=PQD.

14

Ekwegh, Ijeoma W. "Newsvendor Models With Monte Carlo Sampling." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3125.

Abstract:
The newsvendor model is used in solving inventory problems in which demand is random. In this thesis, we focus on a method that uses Monte Carlo sampling to estimate the order quantity that either maximizes revenue or minimizes cost given that demand is uncertain. Given data, the Monte Carlo approach is used to sample demand over scenarios and to estimate the probability density function. A bootstrapping process yields an empirical distribution for the order quantity that maximizes the expected profit. Finally, the method is applied to a newsvendor example to show that it works in maximizing profit.
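A minimal sketch of the approach just described (the demand distribution and the price/cost/salvage figures are illustrative assumptions, and the bootstrap step is omitted): Monte Carlo demand scenarios give an empirical expected-profit estimate for each candidate order quantity, and the classical critical-ratio quantile serves as a sanity check.

```python
# Monte Carlo newsvendor: pick the order quantity with highest sampled profit.
import numpy as np

rng = np.random.default_rng(3)
price, cost, salvage = 10.0, 6.0, 2.0                     # illustrative economics
demand = rng.gamma(shape=4.0, scale=25.0, size=100_000)   # sampled demand scenarios

def expected_profit(q):
    sold = np.minimum(demand, q)
    return np.mean(price * sold + salvage * (q - sold) - cost * q)

qs = np.arange(1, 301)
profits = np.array([expected_profit(q) for q in qs])
q_star = qs[profits.argmax()]

# classical check: the optimal q is the critical-ratio quantile of demand
crit = (price - cost) / (price - salvage)
print("MC optimum:", q_star, " critical-ratio quantile:", np.quantile(demand, crit))
```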
15

Karagulyan, Avetik. "Sampling with the Langevin Monte-Carlo." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAG002.

Abstract:
Sampling from probability distributions is a problem of significant importance in Statistics and Machine Learning. The approaches for the latter can be roughly classified into two main categories, the frequentist and the Bayesian. The first is MLE or ERM, which boils down to optimization, while the other requires the integration of the posterior distribution. Approximate sampling methods are hence applied to estimate the integral. In this manuscript, we focus mainly on Langevin sampling, which is based on discretizations of the Langevin SDE. The first half of the introductory part presents the general mathematical framework of statistics and optimization, while the rest covers the historical background and mathematical development of sampling algorithms. The first main contribution provides non-asymptotic bounds on the convergence of LMC in Wasserstein distance. We first prove the bounds for LMC with a time-varying step size. Then we establish bounds for the case when the gradient is available only with noise. Finally, we study the convergence of two versions of the discretization when the Hessian of the potential is regular. In the second main contribution, we study sampling from log-concave (non-strongly) distributions using LMC, KLMC, and KLMC with higher-order discretization. We propose a constant quadratic penalty for the potential function. We then prove non-asymptotic bounds in Wasserstein distances and provide the optimal choice of the penalization parameter. We also highlight the importance of scaling the error for different error measures. The third main contribution focuses on the convergence properties of convex Langevin diffusions. We propose to penalize the drift with a linear term that vanishes over time. Explicit bounds on the convergence error in Wasserstein distance are proved for the Penalized Langevin Dynamics and the Penalized Kinetic Langevin Dynamics. Similar bounds are also proved for the gradient flow of convex functions.
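For reference, the basic (unadjusted) Langevin Monte Carlo iteration studied here, x_{k+1} = x_k − γ∇f(x_k) + √(2γ) ξ_k, can be sketched on a strongly convex quadratic potential (a toy Gaussian target, not the thesis's setting):

```python
# Unadjusted Langevin Monte Carlo on a quadratic potential (target N(mu, A^{-1})).
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
A = np.array([[2.0, 0.3], [0.3, 1.0]])     # potential f(x) = 0.5 (x-mu)' A (x-mu)
grad_f = lambda x: A @ (x - mu)

gamma, n_steps, burn = 0.05, 50_000, 5_000
x = np.zeros(2)
samples = np.empty((n_steps, 2))
for k in range(n_steps):
    x = x - gamma * grad_f(x) + np.sqrt(2 * gamma) * rng.normal(size=2)
    samples[k] = x

print("LMC mean:", samples[burn:].mean(axis=0))  # ≈ mu, up to O(gamma) bias
```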
16

Hörmann, Wolfgang, and Josef Leydold. "Monte Carlo Integration Using Importance Sampling and Gibbs Sampling." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/1642/1/document.pdf.

Abstract:
To evaluate the expectation of a simple function with respect to a complicated multivariate density, Monte Carlo integration has become the main technique. Gibbs sampling and importance sampling are the most popular methods for this task. In this contribution we propose a new simple general purpose importance sampling procedure. In a simulation study we compare the performance of this method with the performance of Gibbs sampling and of importance sampling using a vector of independent variates. It turns out that the new procedure is much better than independent importance sampling; up to dimension five it is also better than Gibbs sampling. The simulation results indicate that for higher dimensions Gibbs sampling is superior. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
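A minimal self-normalized importance sampling sketch in the spirit of the abstract (the unnormalized target and the Gaussian proposal are our toy choices, not the paper's proposed procedure):

```python
# Self-normalized importance sampling for E_pi[h(X)] with an unnormalized target.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# unnormalized target: log pi(x) = -|x1| - x2^2 - x1^2 x2^2
log_pi = lambda x: -np.abs(x[:, 0]) - x[:, 1]**2 - (x[:, 0]**2) * (x[:, 1]**2)

q = stats.multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2))  # proposal
x = q.rvs(size=200_000, random_state=rng)

logw = log_pi(x) - q.logpdf(x)
w = np.exp(logw - logw.max())          # stabilized importance weights
w /= w.sum()                           # self-normalization

h = x[:, 0]**2 + x[:, 1]**2            # function whose expectation we want
print("E[h] ≈", np.sum(w * h))
print("effective sample size:", 1.0 / np.sum(w**2))
```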
17

Han, Xiao-liang. "Markov Chain Monte Carlo and sampling efficiency." Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333974.

18

Nilmeier, Jerome P. "Monte Carlo methods for sampling protein configurations." Diss., Search in ProQuest Dissertations & Theses. UC Only, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3324625.

19

Ahmed, Ilyas. "Importance Sampling for Least-Square Monte Carlo Methods." Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-193080.

Abstract:
Pricing American-style options is challenging due to early exercise opportunities. The conditional expectation in the Snell envelope, known as the continuation value, is approximated by basis functions in the Least-Square Monte Carlo algorithm, giving a robust estimate of the option's price. By a change of measure in the underlying geometric Brownian motion using Importance Sampling, the variance of the option price can be reduced by up to 9 times. Finding the optimal estimator that gives the minimal variance requires careful consideration of the reference price so as not to bias the estimator. A stochastic algorithm is used to find the optimal drift that minimizes the second moment in the expression for the variance after the change of measure. The use of Importance Sampling shows significant variance reduction in comparison with standard Least-Square Monte Carlo, and the method may be a particularly attractive alternative for more complex instruments with early exercise opportunities.
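The measure-change idea can be sketched on a simpler European payoff (early exercise and the Least-Square Monte Carlo regression are omitted; all market parameters and the drift shift θ are illustrative): the Gaussian driver is sampled under a shifted mean and corrected by the Radon-Nikodym weight exp(−θZ + θ²/2), which concentrates samples where a deep out-of-the-money payoff is nonzero.

```python
# Importance sampling by drift shift for a deep out-of-the-money European call.
import numpy as np

rng = np.random.default_rng(11)
S0, K, r, sigma, T = 100.0, 150.0, 0.02, 0.2, 1.0   # illustrative market data
n = 200_000

def price(theta):
    Z = rng.normal(theta, 1.0, n)                   # driver under shifted measure
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    w = np.exp(-theta * Z + 0.5 * theta**2)         # Radon-Nikodym weight
    payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0) * w
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n)

for theta in (0.0, 2.0):                            # theta = 0.0 is crude Monte Carlo
    p, se = price(theta)
    print(f"theta={theta}: price {p:.4f} ± {se:.4f}")
```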
20

Boualem, Abdelbassit. "Estimation de distribution de tailles de particules par techniques d'inférence bayésienne." Thesis, Orléans, 2016. http://www.theses.fr/2016ORLE2030/document.

Abstract:
This research work treats the inverse problem of particle size distribution (PSD) estimation from dynamic light scattering (DLS) data. Current DLS data analysis methods suffer from poor repeatability of the estimation results and a limited ability to separate the components of a multimodal particle sample (resolution). This thesis aims to develop new, more efficient estimation methods based on Bayesian inference techniques that take advantage of the angular diversity of the DLS data. First, we propose a non-parametric method based on a free-form model, which has the disadvantage of requiring a priori knowledge of the PSD support. To avoid this problem, we then propose a parametric method based on modelling the PSD with a Gaussian mixture model. The two proposed Bayesian methods use Markov chain Monte Carlo simulation algorithms. The results obtained on simulated and real DLS data show the ability of the proposed methods to estimate multimodal PSDs with high resolution and very good repeatability. We also compute the Cramér-Rao bounds of the Gaussian mixture model. The results show that there are preferred angle values ensuring minimum error on the PSD estimation.
21

Schäfer, Christian. "Monte Carlo methods for sampling high-dimensional binary vectors." Phd thesis, Université Paris Dauphine - Paris IX, 2012. http://tel.archives-ouvertes.fr/tel-00767163.

Abstract:
This thesis is concerned with Monte Carlo methods for sampling high-dimensional binary vectors from complex distributions of interest. If the state space is too large for exhaustive enumeration, these methods provide a means of estimating the expected value with respect to some function of interest. Standard approaches are mostly based on random walk type Markov chain Monte Carlo, where the equilibrium distribution of the chain is the distribution of interest and its ergodic mean converges to the expected value. We propose a novel sampling algorithm based on sequential Monte Carlo methodology which copes well with multi-modal problems by virtue of an annealing schedule. The performance of the proposed sequential Monte Carlo sampler depends on the ability to sample proposals from auxiliary distributions which are, in a certain sense, close to the current distribution of interest. The core work of this thesis discusses strategies to construct parametric families for sampling binary vectors with dependencies. The usefulness of this approach is demonstrated in the context of Bayesian variable selection and combinatorial optimization of pseudo-Boolean objective functions.
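A minimal sequential Monte Carlo sketch with an annealing schedule for binary vectors (our toy: a quadratic pseudo-Boolean objective, independent bit-flip Metropolis moves and multinomial resampling; the thesis's parametric proposal families with dependencies are beyond this sketch):

```python
# SMC sampler with annealing for binary vectors, target pi_b(x) ∝ exp(b * f(x)).
import numpy as np

rng = np.random.default_rng(2)
d, n_part = 20, 2_000
A = rng.normal(size=(d, d))
A = (A + A.T) / 2                                     # quadratic objective matrix
f = lambda X: np.einsum('ij,ni,nj->n', A, X, X)       # f(x) = x' A x

X = (rng.random((n_part, d)) < 0.5).astype(float)     # particles ~ uniform
logw = np.zeros(n_part)
betas = np.linspace(0.0, 1.0, 51)                     # annealing schedule

for b_prev, b in zip(betas[:-1], betas[1:]):
    logw += (b - b_prev) * f(X)                       # reweight to the next level
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w**2) < n_part / 2:               # resample when ESS drops
        idx = rng.choice(n_part, n_part, p=w)
        X, logw = X[idx], np.zeros(n_part)
    for _ in range(2):                                # bit-flip Metropolis moves
        j = rng.integers(d, size=n_part)
        Y = X.copy()
        Y[np.arange(n_part), j] = 1.0 - Y[np.arange(n_part), j]
        accept = np.log(rng.random(n_part)) < b * (f(Y) - f(X))
        X[accept] = Y[accept]

print("best f found:", f(X).max())
```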
22

Giacomuzzi, Genny <1984&gt. "Local Earthquake Tomography by trans-dimensional Monte Carlo Sampling." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5160/1/giacomuzzi_genny_tesi.pdf.

Abstract:
In this study a new, fully non-linear approach to Local Earthquake Tomography is presented. Local Earthquake Tomography (LET) is a non-linear inversion problem that allows the joint determination of earthquake parameters and velocity structure from arrival times of waves generated by local sources. Since the early developments of seismic tomography, several inversion methods have been developed to solve this problem in a linearized way. In the framework of Monte Carlo sampling, we developed a new code based on the Reversible Jump Markov Chain Monte Carlo sampling method (Rj-McMc). It is a trans-dimensional approach in which the number of unknowns, and thus the model parameterization, is treated as one of the unknowns. I show that our new code overcomes major limitations of linearized tomography, opening a new perspective in seismic imaging. Synthetic tests demonstrate that our algorithm is able to produce a robust and reliable tomography without the need to make subjective a-priori assumptions about starting models and parameterization. Moreover, it provides a more accurate estimate of the uncertainties about the model parameters. Therefore, it is very suitable for investigating the velocity structure in regions that lack accurate a-priori information. Synthetic tests also reveal that the absence of regularization constraints allows extracting more information from the observed data and that the velocity structure can be detected also in regions where the density of rays is low and standard linearized codes fail. I also present high-resolution Vp and Vp/Vs models of two widely investigated regions: the Parkfield segment of the San Andreas Fault (California, USA) and the area around the Alto Tiberina fault (Umbria-Marche, Italy). In both cases, the models obtained with our code show a substantial improvement in the data fit compared with the models obtained from the same data set with linearized inversion codes.
24

Vadrevu, Aditya M. "Random sampling estimates of Fourier transforms: antithetical stratified Monte Carlo." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p1450161.

Abstract:
Thesis (M.S.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed Mar. 27, 2008). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 33-34).
25

Hörmann, Wolfgang, and Josef Leydold. "Importance Sampling to Accelerate the Convergence of Quasi-Monte Carlo." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/284/1/document.pdf.

Abstract:
Importance sampling is a well known variance reduction technique for Monte Carlo simulation. For quasi-Monte Carlo integration with low discrepancy sequences it was neglected in the literature although it is easy to see that it can reduce the variation of the integrand for many important integration problems. For lattice rules importance sampling is of highest importance as it can be used to obtain a smooth periodic integrand. Thus the convergence of the integration procedure is accelerated. This can clearly speed up QMC algorithms for integration problems up to dimensions 10 to 12. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
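The point can be illustrated on a toy singular integral (our example, not the paper's): importance sampling via an inverse-CDF transform removes the singularity, so the quasi-Monte Carlo points see a smooth, bounded integrand.

```python
# Importance sampling smooths the integrand for quasi-Monte Carlo integration.
import numpy as np
from scipy.stats import qmc

sobol = qmc.Sobol(d=1, scramble=True, seed=9)
u = sobol.random_base2(m=14).ravel()           # 2^14 scrambled Sobol points in (0,1)

# I = \int_0^1 cos(x)/sqrt(x) dx  (integrand singular at 0)
plain = np.mean(np.cos(u) / np.sqrt(u))        # QMC on the raw singular integrand
# importance density q(x) = 1/(2 sqrt(x)); inverse-CDF transform x = u^2
x = u**2
smooth = np.mean(2.0 * np.cos(x))              # weighted integrand is smooth, bounded
exact = 1.8090484758                           # reference value of I (series expansion)
print(f"plain QMC error  {abs(plain - exact):.2e}")
print(f"IS + QMC error   {abs(smooth - exact):.2e}")
```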
26

Anderson, Luke (Luke James). "An embedded domain specific sampling language for Monte Carlo rendering." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111909.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 95-96).
Implementing Monte Carlo integration requires significant domain expertise. While simple algorithms, such as unidirectional path tracing, are relatively forgiving, more complex algorithms, such as bidirectional path tracing or Metropolis methods, are notoriously difficult to implement correctly. We propose a domain specific language for Monte Carlo rendering that offers primitives and data structures for writing concise and correct-by-construction sampling code. The compiler then automatically generates the necessary code for evaluating PDFs and combining multiple samples. Our language focuses on ease of implementation for rapid exploration and research, at the cost of run time performance. We demonstrate the effectiveness of the language by implementing several challenging rendering algorithms, as well as a new algorithm, which would otherwise be prohibitively difficult to implement.
by Luke Anderson.
S.M.
27

Guha, Subharup. "Benchmark estimation for Markov Chain Monte Carlo samplers." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1085594208.

28

Arnold, Andrea. "Sequential Monte Carlo Parameter Estimation for Differential Equations." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1396617699.

29

Motula, Paulo Fernando Nericke. "Estimation of DSGE Models: A Monte Carlo Analysis." Repositório Institucional do FGV, 2013. http://hdl.handle.net/10438/10961.

Abstract:
We investigate the small sample properties and robustness of the parameter estimates of DSGE models. Our test ground is the Smets and Wouters (2007) model, and the estimation procedures we evaluate are the Simulated Method of Moments (SMM) and Maximum Likelihood (ML). We look at the empirical distributions of the parameter estimates and their implications for impulse-response and variance decomposition in the cases of correct specification and two types of misspecification. Our results indicate an overall poor performance of SMM and some patterns of bias in impulse-response and variance decomposition for ML under the types of misspecification studied.
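A minimal sketch of the Simulated Method of Moments on a toy AR(1) instead of a DSGE model (the moment choice, identity weighting matrix and sample sizes are illustrative; fixed simulation draws keep the objective smooth in the parameter):

```python
# Simulated Method of Moments: match simulated moments to data moments.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

def ar1(eps, rho):
    z, z_prev = np.empty(len(eps)), 0.0
    for t, e in enumerate(eps):
        z_prev = z[t] = rho * z_prev + e
    return z

def moments(z):                         # variance and lag-1 autocovariance
    zc = z - z.mean()
    return np.array([zc @ zc / len(z), zc[1:] @ zc[:-1] / (len(z) - 1)])

m_data = moments(ar1(rng.normal(size=400), 0.8))  # "observed" data, true rho = 0.8
shocks = rng.normal(size=(20, 400))               # fixed simulation draws (CRN)

def smm_loss(rho):
    m_sim = np.mean([moments(ar1(e, rho)) for e in shocks], axis=0)
    d = m_sim - m_data
    return d @ d                                  # identity weighting matrix

res = minimize_scalar(smm_loss, bounds=(0.0, 0.99), method='bounded')
print("SMM estimate of rho:", round(res.x, 3))
```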
30

Hörmann, Wolfgang, and Josef Leydold. "Quasi Importance Sampling." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/1394/1/document.pdf.

Abstract:
There arise two problems when the expectation of some function with respect to a nonuniform multivariate distribution has to be computed by (quasi-) Monte Carlo integration: the integrand can have singularities when the domain of the distribution is unbounded and it can be very expensive or even impossible to sample points from a general multivariate distribution. We show that importance sampling is a simple method to overcome both problems. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
APA, Harvard, Vancouver, ISO, and other styles
31

Vieira, Wilson J. "A general study of undersampling problems in Monte Carlo calculations." Repositório Institucional do IPEN, 1989. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10258.

Full text
Abstract:
Thesis (Doctorate)
IPEN/T
University of Tennessee, Knoxville, USA
APA, Harvard, Vancouver, ISO, and other styles
32

Karawatzki, Roman, and Josef Leydold. "Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/294/1/document.pdf.

Full text
Abstract:
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited for an algorithm to generate points from quite arbitrary distributions, which include all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with dimension. (author's abstract)
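For context, a minimal hit-and-run chain is sketched below; to keep the chord endpoints in closed form it samples uniformly from a convex polytope rather than from the ratio-of-uniforms region the article works with, so the polytope and all constants are illustrative assumptions.

```python
import numpy as np

# Hit-and-run sketch: uniform sampling from a convex polytope {x : Ax <= b}.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0, 1.0, 1.5])   # a box cut by one extra half-plane

x = np.zeros(2)                            # interior starting point
samples = []
for _ in range(10000):
    d = rng.normal(size=2)
    d /= np.linalg.norm(d)                 # uniform random direction
    # feasible step sizes t must satisfy A(x + t d) <= b
    ad, slack = A @ d, b - A @ x
    t_max = np.min(slack[ad > 0] / ad[ad > 0])
    t_min = np.max(slack[ad < 0] / ad[ad < 0])
    x = x + rng.uniform(t_min, t_max) * d  # uniform point on the chord
    samples.append(x.copy())
print(np.mean(samples, axis=0))            # approaches the polytope centroid
```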
Series: Preprint Series / Department of Applied Statistics and Data Processing
APA, Harvard, Vancouver, ISO, and other styles
33

Karawatzki, Roman, Josef Leydold, and Klaus Pötzelberger. "Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/1400/1/document.pdf.

Full text
Abstract:
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited for an algorithm to generate points from quite arbitrary distributions, which include all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with dimension. An implementation of these algorithms in C is available from the authors. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
34

Pierre-Louis, Péguy. "Algorithmic Developments in Monte Carlo Sampling-Based Methods for Stochastic Programming." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/228433.

Full text
Abstract:
Monte Carlo sampling-based methods are frequently used in stochastic programming when an exact solution is not possible. In this dissertation, we develop two sets of Monte Carlo sampling-based algorithms to solve classes of two-stage stochastic programs. These algorithms follow a sequential framework such that a candidate solution is generated and evaluated at each step. If the solution is of desired quality, then the algorithm stops and outputs the candidate solution along with an approximate (1 - α) confidence interval on its optimality gap. The first set of algorithms proposed, which we refer to as the fixed-width sequential sampling methods, generate a candidate solution by solving a sampling approximation of the original problem. Using an independent sample, a confidence interval is built on the optimality gap of the candidate solution. The procedures stop when the confidence interval width plus an inflation factor falls below a pre-specified tolerance epsilon. We present two variants. The fully sequential procedures use deterministic, non-decreasing sample size schedules, whereas in another variant, the sample size at the next iteration is determined using current statistical estimates. We establish desired asymptotic properties and present computational results. In another set of sequential algorithms, we combine deterministically valid and sampling-based bounds. These algorithms, labeled sampling-based sequential approximation methods, take advantage of certain characteristics of the models such as convexity to generate candidate solutions and deterministic lower bounds through Jensen's inequality. A point estimate on the optimality gap is calculated by generating an upper bound through sampling. The procedure stops when the point estimate on the optimality gap falls below a fraction of its sample standard deviation. We show asymptotically that this algorithm finds a solution with a desired quality tolerance. We present variance reduction techniques and show their effectiveness through an empirical study.
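A skeleton of the fixed-width stopping logic described above might look as follows; solve_saa and gap_point_estimates are hypothetical problem-specific hooks, and the inflation schedule and sample-size growth rule are placeholder assumptions rather than the dissertation's actual choices.

```python
import numpy as np
from scipy.stats import norm

def sequential_sampling(solve_saa, gap_point_estimates, eps=0.05,
                        alpha=0.05, n0=100, growth=1.5, max_iter=50):
    # Fixed-width sequential loop: grow the sample size until the CI width
    # on the optimality gap, plus an inflation factor, drops below eps.
    z = norm.ppf(1.0 - alpha)
    n = n0
    for _ in range(max_iter):
        x_hat = solve_saa(n)                  # candidate from an SAA of size n
        gaps = np.asarray(gap_point_estimates(x_hat, n))  # fresh, independent sample
        width = z * gaps.std(ddof=1) / np.sqrt(len(gaps))
        inflation = 1.0 / np.sqrt(n)          # placeholder inflation schedule
        if gaps.mean() + width + inflation <= eps:
            break
        n = int(np.ceil(growth * n))          # deterministic, non-decreasing schedule
    return x_hat, gaps.mean() + width         # solution and CI bound on its gap
```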
APA, Harvard, Vancouver, ISO, and other styles
35

Singh, Gurprit. "Sampling and Variance Analysis for Monte Carlo Integration in Spherical Domain." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10121/document.

Full text
Abstract:
This dissertation introduces a theoretical framework to study different sampling patterns in the spherical domain and their effects on the evaluation of global illumination integrals. Evaluating illumination (light transport) is one of the most essential aspects of realistic image synthesis and involves solving multi-dimensional integrals. Monte Carlo-based numerical integration schemes are heavily employed to solve these high-dimensional integrals. One of the most important aspects of any numerical integration method is sampling: the way samples are distributed on an integration domain can greatly affect the final result. For example, in images, the effects of various sampling patterns appear in the form of either structural artifacts or completely unstructured noise. In many cases, we may get completely false (biased) results due to the sampling pattern used in integration. The distribution of a sampling pattern can be characterized using its Fourier power spectrum. It is also possible to use the Fourier power spectrum as input to generate the corresponding sample distribution, which allows spectral control over sample distributions. Since this spectral control allows tailoring new sampling patterns directly from an input Fourier power spectrum, it can be used to reduce integration error. However, a direct relation between the error in Monte Carlo integration and the sampling power spectrum has been missing. In this work, we propose a variance formulation that establishes a direct link between the variance in Monte Carlo integration and the power spectra of both the sampling pattern and the integrand involved. To derive our closed-form variance formulation, we use the notion of homogeneous sample distributions, which allows expressing the error in Monte Carlo integration purely as variance. Based on our variance formulation, we develop an analysis tool that can be used to derive theoretical variance convergence rates of various state-of-the-art sampling patterns. Our analysis gives insights into design principles that can be used to tailor new sampling patterns based on the integrand.
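As a small numerical illustration of why such convergence-rate analysis matters, the sketch below compares the empirical variance decay of pure random sampling against jittered (stratified) sampling on a smooth toy integrand; it demonstrates the phenomenon only, not the dissertation's spectral machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x) ** 2      # smooth toy integrand, mean 0.5

def mc_estimate(n):
    return f(rng.random(n)).mean()            # pure random sampling

def jittered_estimate(n):
    u = (np.arange(n) + rng.random(n)) / n    # one sample per stratum
    return f(u).mean()

for n in [16, 64, 256, 1024]:
    var_mc = np.var([mc_estimate(n) for _ in range(2000)])
    var_jit = np.var([jittered_estimate(n) for _ in range(2000)])
    # random decays like n^-1; jittered like n^-3 for smooth 1-d integrands
    print(n, var_mc, var_jit)
```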
APA, Harvard, Vancouver, ISO, and other styles
36

Janati El Idrissi, Yazid. "Monte Carlo Methods for Machine Learning: Practical and Theoretical Contributions for Importance Sampling and Sequential Methods." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS008.

Full text
Abstract:
This thesis contributes to the vast domain of Monte Carlo methods with novel algorithms that aim at addressing high-dimensional inference and uncertainty quantification. In the first part, we develop two novel methods for importance sampling. The first is a lightweight, optimization-based proposal for computing normalizing constants, which extends into a novel MCMC algorithm. The second is a new scheme for learning sharp importance proposals. In the second part, we focus on sequential Monte Carlo methods. We develop new estimators for the asymptotic variance of the particle filter and provide the first estimator of the asymptotic variance of a particle smoother. Next, we derive a procedure for parameter learning within hidden Markov models using a particle smoother with provably reduced bias. Finally, we devise a sequential Monte Carlo algorithm for solving Bayesian linear inverse problems with generative model priors.
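For background, a minimal bootstrap particle filter on a linear-Gaussian state-space model is sketched below; it shows the object whose asymptotic variance the thesis studies, not the variance estimators themselves, and the model parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 1000
phi, q, r = 0.9, 1.0, 1.0                       # AR(1) state, Gaussian noise

# simulate data: x_t = phi * x_{t-1} + N(0, q), y_t = x_t + N(0, r)
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = phi * x_true[t - 1] + rng.normal(scale=np.sqrt(q))
y = x_true + rng.normal(scale=np.sqrt(r), size=T)

particles = rng.normal(size=N)                  # rough initial draw
loglik = 0.0
for t in range(T):
    particles = phi * particles + rng.normal(scale=np.sqrt(q), size=N)
    logw = -0.5 * np.log(2 * np.pi * r) - 0.5 * (y[t] - particles) ** 2 / r
    m = logw.max()
    loglik += m + np.log(np.exp(logw - m).mean())  # log p(y_t | y_{1:t-1}) est.
    w = np.exp(logw - m)
    particles = particles[rng.choice(N, size=N, p=w / w.sum())]  # resample
print("log-likelihood estimate:", loglik)
```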
APA, Harvard, Vancouver, ISO, and other styles
37

Quartetti, Douglas A. "A Monte Carlo assessment of estimation in utility analysis." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/29371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Song, Chenxiao. "Monte Carlo Variance Reduction Methods with Applications in Structural Reliability Analysis." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29801.

Full text
Abstract:
Monte Carlo variance reduction methods have attracted significant interest due to the continuous demand for reducing computational costs in various fields of application. This thesis is based on the content of a collection of six papers contributing to the theory and application of Monte Carlo methods and variance reduction techniques. On the theoretical side, we establish a novel framework for Monte Carlo integration over simplices, covering everything from sampling to variance reduction. We also investigate the effect of batching for adaptive variance reduction, which runs the Monte Carlo simulation simultaneously with the parameter search algorithm using a common sequence of random realizations. Such adaptive variance reduction is moreover employed by strata in a newly proposed stratified sampling framework with dynamic budget allocation. For application to estimating the probability of failure in structural reliability analysis, we formulate adaptive frameworks of stratified sampling with variance reduction by strata as well as stratified directional importance sampling, and survey a variety of numerical approaches employing Monte Carlo methods.
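One of the ingredients mentioned, stratified sampling with data-driven budget allocation, can be illustrated with a simple Neyman-style pilot scheme; the integrand, stratum count, and allocation rule below are toy assumptions, not the thesis's dynamic algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(x)                    # toy integrand; true integral e - 1
K, pilot, budget = 8, 30, 4000
edges = np.linspace(0.0, 1.0, K + 1)       # K equal-width strata on [0, 1]

def draw(k, n):                            # uniform draws from stratum k
    return rng.uniform(edges[k], edges[k + 1], n)

# pilot pass: estimate per-stratum standard deviations
sds = np.array([f(draw(k, pilot)).std(ddof=1) for k in range(K)])
# Neyman-style allocation: spend budget proportionally to the estimated sds
alloc = np.maximum((budget * sds / sds.sum()).astype(int), 2)
means = np.array([f(draw(k, n)).mean() for k, n in enumerate(alloc)])
print(means.mean())                        # equal strata weights 1/K; ~1.718
```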
APA, Harvard, Vancouver, ISO, and other styles
39

Hörmann, Wolfgang. "New Importance Sampling Densities." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/1066/1/document.pdf.

Full text
Abstract:
To compute the expectation of a function with respect to a multivariate distribution, naive Monte Carlo is often not feasible. In such cases importance sampling leads to better estimates than the rejection method. A new importance sampling distribution, the product of one-dimensional table mountain distributions with exponential tails, turns out to be flexible and useful for Bayesian integration problems. To obtain a heavy-tailed importance sampling distribution, a new radius transform for the above distribution is suggested. Together with a linear transform, the new importance sampling distributions lead to simple and fast integration algorithms with reliable error bounds. (author's abstract)
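A one-dimensional caricature of a table-mountain proposal (flat center, exponential tails) is sketched below for self-normalized importance sampling; the exact construction, radius transform, and multivariate product form of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_table_mountain(n):
    # flat on [-1, 1] (mass 1/2), exponential tail on each side (mass 1/4)
    u = rng.random(n)
    x = np.empty(n)
    flat = u < 0.5
    x[flat] = rng.uniform(-1.0, 1.0, flat.sum())
    tail = ~flat
    e = rng.exponential(1.0, tail.sum())            # tail beyond |x| = 1
    signs = np.where(rng.random(tail.sum()) < 0.5, -1.0, 1.0)
    x[tail] = signs * (1.0 + e)
    return x

def table_mountain_pdf(x):
    c = 0.25                                         # normalizing constant
    return np.where(np.abs(x) <= 1.0, c, c * np.exp(-(np.abs(x) - 1.0)))

x = sample_table_mountain(100_000)
w = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi) / table_mountain_pdf(x)
print(np.average(x**2, weights=w))                   # E[X^2] under N(0,1), ~1
```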
Series: Preprint Series / Department of Applied Statistics and Data Processing
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, Dali. "Stochastic equilibrium problems with equilibrium constraints, Monte Carlo sampling method and applications." Thesis, University of Southampton, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.509541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Ustun, Berk (Tevfik Berk). "The Markov chain Monte Carlo approach to importance sampling in stochastic programming." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/85220.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2012.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 85-87).
Stochastic programming models are large-scale optimization problems that are used to facilitate decision-making under uncertainty. Optimization algorithms for such problems need to evaluate the expected future costs of current decisions, often referred to as the recourse function. In practice, this calculation is computationally difficult as it involves the evaluation of a multidimensional integral whose integrand is an optimization problem. Accordingly, the recourse function is estimated using quadrature rules or Monte Carlo methods. Although Monte Carlo methods present numerous computational benefits over quadrature rules, they require a large number of samples to produce accurate results when they are embedded in an optimization algorithm. We present an importance sampling framework for multistage stochastic programming that can produce accurate estimates of the recourse function using a fixed number of samples. Our framework uses Markov Chain Monte Carlo and Kernel Density Estimation algorithms to create a non-parametric importance sampling distribution that can form lower variance estimates of the recourse function. We demonstrate the increased accuracy and efficiency of our approach using numerical experiments in which we solve variants of the Newsvendor problem. Our results show that even a simple implementation of our framework produces highly accurate estimates of the optimal solution and optimal cost for stochastic programming models, especially those with increased variance, multimodal or rare-event distributions.
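A compact sketch of the idea (a Metropolis chain targeting a cost-weighted density, a kernel density estimate fitted to the chain, then importance sampling from the KDE) is given below; the toy cost function and all tuning constants are assumptions, not the thesis's implementation for recourse functions.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
cost = lambda xi: np.maximum(xi - 1.0, 0.0)       # toy recourse-like cost
p = lambda xi: norm.pdf(xi)                        # scenario density N(0, 1)
g = lambda xi: np.abs(cost(xi)) * p(xi) + 1e-12    # unnormalized MCMC target

# random-walk Metropolis chain targeting g
x, chain = 1.5, []
for _ in range(20000):
    y = x + rng.normal(scale=0.5)
    if rng.random() < g(y) / g(x):
        x = y
    chain.append(x)
chain = np.array(chain[5000:])                     # drop burn-in

q = gaussian_kde(chain)                            # nonparametric IS proposal
xi = q.resample(5000, seed=1).ravel()
w = p(xi) / q(xi)                                  # importance weights
print(np.mean(w * cost(xi)))                       # estimate of E_p[cost]
```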
APA, Harvard, Vancouver, ISO, and other styles
42

Hazelton, Martin Luke. "Method of density estimation with application to Monte Carlo methods." Thesis, University of Oxford, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Wong, Mei Ning. "Quasi-Monte Carlo sampling for computing the trace of a function of a matrix." HKBU Institutional Repository, 2002. http://repository.hkbu.edu.hk/etd_ra/436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Bentley, Jason Phillip. "Exact Markov chain Monte Carlo and Bayesian linear regression." Thesis, University of Canterbury. Mathematics and Statistics, 2009. http://hdl.handle.net/10092/2534.

Full text
Abstract:
In this work we investigate the use of perfect sampling methods within the context of Bayesian linear regression. We focus on inference problems related to the marginal posterior model probabilities. Model averaged inference for the response and Bayesian variable selection are considered. Perfect sampling is an alternate form of Markov chain Monte Carlo that generates exact sample points from the posterior of interest. This approach removes the need for burn-in assessment faced by traditional MCMC methods. For model averaged inference, we find the monotone Gibbs coupling from the past (CFTP) algorithm is the preferred choice. This requires the predictor matrix be orthogonal, preventing variable selection, but allowing model averaging for prediction of the response. Exploring choices of priors for the parameters in the Bayesian linear model, we investigate sufficiency for monotonicity assuming Gaussian errors. We discover that a number of other sufficient conditions exist, besides an orthogonal predictor matrix, for the construction of a monotone Gibbs Markov chain. Requiring an orthogonal predictor matrix, we investigate new methods of orthogonalizing the original predictor matrix. We find that a new method using the modified Gram-Schmidt orthogonalization procedure performs comparably with existing transformation methods, such as generalized principal components. Accounting for the effect of using an orthogonal predictor matrix, we discover that inference using model averaging for in-sample prediction of the response is comparable between the original and orthogonal predictor matrix. The Gibbs sampler is then investigated for sampling when using the original predictor matrix and the orthogonal predictor matrix. We find that a hybrid method, using a standard Gibbs sampler on the orthogonal space in conjunction with the monotone CFTP Gibbs sampler, provides the fastest computation and convergence to the posterior distribution. We conclude the hybrid approach should be used when the monotone Gibbs CFTP sampler becomes impractical, due to large backwards coupling times. We demonstrate large backwards coupling times occur when the sample size is close to the number of predictors, or when hyper-parameter choices increase model competition. The monotone Gibbs CFTP sampler should be taken advantage of when the backwards coupling time is small. For the problem of variable selection we turn to the exact version of the independent Metropolis-Hastings (IMH) algorithm. We reiterate the notion that the exact IMH sampler is redundant, being a needlessly complicated rejection sampler. We then determine a rejection sampler is feasible for variable selection when the sample size is close to the number of predictors and using Zellner’s prior with a small value for the hyper-parameter c. Finally, we use the example of simulating from the posterior of c conditional on a model to demonstrate how the use of an exact IMH view-point clarifies how the rejection sampler can be adapted to improve efficiency.
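To fix ideas, here is a minimal monotone coupling-from-the-past routine on a small ordered state space; it illustrates the coalescence mechanism only, since the Bayesian linear-model target of the thesis requires the monotone Gibbs construction discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10                                     # states 0..K, ordered

def update(s, u):
    # monotone random-walk update driven by a common uniform u
    if u < 0.4:
        return max(s - 1, 0)
    elif u < 0.8:
        return min(s + 1, K)
    return s

def cftp():
    us = []
    while True:
        # extend the randomness further into the past, reusing old values
        us = list(rng.random(max(len(us), 1))) + us
        lo, hi = 0, K                      # bottom and top chains
        for u in us:
            lo, hi = update(lo, u), update(hi, u)
        if lo == hi:
            return lo                      # coalesced: an exact stationary draw

draws = [cftp() for _ in range(2000)]
print(np.bincount(draws, minlength=K + 1) / 2000.0)  # ~ uniform on 0..K
```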
APA, Harvard, Vancouver, ISO, and other styles
45

Cline, David. "Sampling Methods in Ray-Based Global Illumination." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2056.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hudson-Curtis, Buffy L. "Generalizations of the Multivariate Logistic Distribution with Applications to Monte Carlo Importance Sampling." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20011101-224634.

Full text
Abstract:

Monte Carlo importance sampling is a useful numerical integration technique, particularly in Bayesian analysis. A successful importance sampler will mimic the behavior of the posterior distribution, not only in the center, where most of the mass lies, but also in the tails (Geweke, 1989). Typically, the Hessian of the importance sampler is set equal to the Hessian of the posterior distribution evaluated at the mode. Since the importance sampling estimates are weighted averages, their accuracy is assessed by assuming a normal limiting distribution. However, if this scaling of the Hessian leads to a poor match in the tails of the posterior, this assumption may be false (Geweke, 1989). Additionally, in practice, two commonly used importance samplers, the Multivariate Normal Distribution and the Multivariate Student-t Distribution, do not perform well for a number of posterior distributions (Monahan, 2000). A generalization of the Multivariate Logistic Distribution (the Elliptical Multivariate Logistic Distribution) is described and its properties explored. This distribution outperforms the Multivariate Normal distribution and the Multivariate Student-t distribution as an importance sampler for several posterior distributions chosen from the literature. A modification of the scaling by Hessians of the importance sampler and the posterior distribution is explained. Employing this alternate relationship increases the number of posterior distributions for which the Multivariate Normal, the Multivariate Student-t, and the Elliptical Multivariate Logistic Distribution can serve as importance samplers.

APA, Harvard, Vancouver, ISO, and other styles
47

Hu, Xinran. "On Grouped Observation Level Interaction and a Big Data Monte Carlo Sampling Algorithm." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/51224.

Full text
Abstract:
Big Data is transforming the way we live. From medical care to social networks, data is playing a central role in various applications. As the volume and dimensionality of datasets keeps growing, designing effective data analytics algorithms emerges as an important research topic in statistics. In this dissertation, I will summarize our research on two data analytics algorithms: a visual analytics algorithm named Grouped Observation Level Interaction with Multidimensional Scaling and a big data Monte Carlo sampling algorithm named Batched Permutation Sampler. These two algorithms are designed to enhance the capability of generating meaningful insights and utilizing massive datasets, respectively.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
48

Gilquin, Laurent. "Échantillonnages Monte Carlo et quasi-Monte Carlo pour l'estimation des indices de Sobol' : application à un modèle transport-urbanisme." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM042/document.

Full text
Abstract:
Land Use and Transportation Integrated (LUTI) models have become a norm for representing the interactions between land use and the transportation of goods and people in a territory. These models are mainly used to evaluate alternative planning scenarios, simulating their impact on land cover and travel demand. LUTI models and other mathematical models used in various fields are most of the time based on complex computer codes. These codes often involve poorly-known inputs whose uncertainty can have significant effects on the model outputs. Global sensitivity analysis methods are useful tools to study the influence of the model inputs on its outputs. Among the large number of available approaches, the variance-based method introduced by Sobol' allows one to calculate sensitivity indices called Sobol' indices. These indices quantify the influence of each model input on the outputs and can detect existing interactions between inputs. In this framework, we favor a particular method based on replicated designs of experiments, called the replication method. This method appears to be the most suitable for our application and is advantageous as it requires a relatively small number of model evaluations to estimate first-order or second-order Sobol' indices. This thesis focuses on extensions of the replication method to face constraints arising in our application on the LUTI model Tranus, such as the presence of dependency among the model inputs, as well as multivariate outputs. Aside from that, we propose a recursive approach to sequentially estimate Sobol' indices. The recursive approach is based on the iterative construction of stratified designs, Latin hypercubes and orthogonal arrays, and on the definition of a new stopping criterion. With this approach, more accurate Sobol' estimates are obtained while recycling previous sets of model evaluations. We also propose to combine such an approach with quasi-Monte Carlo sampling. An application of our contributions to the calibration of the LUTI model Tranus is presented.
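For readers new to the quantities involved, the sketch below estimates first-order Sobol' indices with the standard pick-freeze estimator on an Ishigami-type toy function; the replication method favored in the thesis computes the same indices from far fewer model runs, and nothing here reproduces it.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Ishigami-type test function on [-pi, pi]^3
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 2**14, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = model(A), model(B)
var = yA.var()
for i in range(d):
    Ci = B.copy()
    Ci[:, i] = A[:, i]                     # freeze coordinate i from A
    Si = np.mean(yA * (model(Ci) - yB)) / var
    print(f"S_{i + 1} ~= {Si:.3f}")        # known values ~0.31, ~0.44, ~0.00
```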
APA, Harvard, Vancouver, ISO, and other styles
49

Kuhlenschmidt, Bernd. "On the stability of sequential Monte Carlo methods for parameter estimation." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709098.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Barata, Teresa Cordeiro Ferreira Nunes. "Two examples of curve estimation using Markov Chain Monte Carlo methods." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
