
Dissertations / Theses on the topic 'Gaussian; Markov chain Monte Carlo methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Gaussian; Markov chain Monte Carlo methods.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Manrique Garcia, Aurora. "Econometric analysis of limited dependent time series." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vaičiulytė, Ingrida. "Study and application of Markov chain Monte Carlo method." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20141209_112440-55390.

Full text
Abstract:
This dissertation analyses adaptive Markov chain Monte Carlo (MCMC) methods, with the aim of creating computationally efficient algorithms for data-analysis decision-making with a prescribed accuracy. Parameter-estimation tasks are formulated and solved for multivariate distributions constructed hierarchically (the skew t distribution, the Poisson-Gaussian model, and the stable symmetric vector law). To create the adaptive MCMC procedure, a sequential Monte Carlo sample-generation method is applied, introducing a statistical termination rule and regulation of the Markov chain sample size. The statistical tasks solved by this method reveal characteristics of the computational problems inherent in MCMC. The effectiveness of the MCMC algorithms is analysed with a statistical modelling method constructed in the dissertation. Experiments with athletes' data and with financial data of enterprises in the health-care industry confirmed that the numerical properties of the method correspond to the theoretical model. The methods and algorithms created are also applied to build a model for sociological data analysis. Tests showed that the adaptive MCMC algorithm yields estimates of the examined distribution parameters with fewer chains, reducing the volume of computation roughly by half. The algorithms constructed in this dissertation can be used to study systems of a stochastic nature and to solve other statistical problems by the MCMC method.
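The sequential stopping idea described in the abstract, growing a Monte Carlo sample until a prescribed accuracy is reached, can be sketched generically (this is an illustration of the principle only, not the dissertation's procedure; the target, tolerance, batch size, and function names are ours):

```python
import math
import random

def adaptive_mc_mean(draw, tol, batch, max_n, rng):
    """Sequential Monte Carlo sampling with a statistical termination rule:
    grow the sample in batches until the approximate 95% confidence
    half-width of the mean estimate falls below tol."""
    xs = []
    while len(xs) < max_n:
        xs.extend(draw(rng) for _ in range(batch))
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / (n - 1)
        if 1.96 * math.sqrt(var / n) < tol:
            break   # prescribed accuracy reached
    return mean, n

rng = random.Random(8)
est, n_used = adaptive_mc_mean(lambda r: r.gauss(0.5, 1.0), tol=0.02,
                               batch=500, max_n=200_000, rng=rng)
```

The sample size is regulated by the data rather than fixed in advance, which is the core of the adaptive approach the abstract describes.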
APA, Harvard, Vancouver, ISO, and other styles
3

Lopez Lopera, Andres Felipe. "Gaussian Process Modelling under Inequality Constraints." Thesis, Lyon, 2019. https://tel.archives-ouvertes.fr/tel-02863891.

Full text
Abstract:
Conditioning Gaussian processes (GPs) by inequality constraints gives more realistic models. This thesis focuses on the finite-dimensional approximation of GP models proposed by Maatouk (2015), which satisfies the constraints everywhere in the input space. Several contributions are provided. First, we study the use of Markov chain Monte Carlo methods for truncated multinormal distributions; they result in efficient sampling for linear inequality constraints. Second, we explore the extension of the model, previously limited to three-dimensional spaces, to higher dimensions. The introduction of a noise effect allows us to go up to dimension five. We propose a sequential algorithm based on knot insertion, which concentrates the computational budget on the most active dimensions, and we explore the Delaunay triangulation as an alternative to tensorisation. Finally, we study the case of additive models in this context, theoretically and on problems involving hundreds of input variables. Third, we give theoretical results on inference under inequality constraints: the asymptotic consistency and normality of maximum likelihood estimators are established. The main methods throughout this manuscript are implemented in R. They are applied to risk assessment problems in nuclear safety and coastal flooding, accounting for positivity and monotonicity constraints. As a by-product, we also show that the proposed GP approach provides an original framework for modelling Poisson processes with stochastic intensities.
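The truncated-multinormal sampling mentioned in the abstract can be illustrated with a minimal Gibbs sketch for a bivariate normal restricted to the positive orthant (a toy under our own assumptions, not the thesis's samplers; the correlation, bounds, and helper names are illustrative). Each coordinate is resampled from its univariate normal full conditional, truncated to the constraint region via the inverse CDF:

```python
import random
from statistics import NormalDist

def truncated_normal(mu, sigma, lo, hi, rng):
    """Inverse-CDF draw from N(mu, sigma^2) truncated to [lo, hi]."""
    nd = NormalDist(mu, sigma)
    a, b = nd.cdf(lo), nd.cdf(hi)
    return nd.inv_cdf(a + rng.random() * (b - a))

def gibbs_truncated_bivariate(rho, n_steps, rng):
    """Gibbs sampler for a bivariate N(0, [[1, rho], [rho, 1]])
    truncated to the positive orthant x1 >= 0, x2 >= 0."""
    x1, x2 = 1.0, 1.0
    cond_sd = (1.0 - rho * rho) ** 0.5
    samples = []
    for _ in range(n_steps):
        # Each full conditional is univariate normal, truncated at 0.
        x1 = truncated_normal(rho * x2, cond_sd, 0.0, 10.0, rng)
        x2 = truncated_normal(rho * x1, cond_sd, 0.0, 10.0, rng)
        samples.append((x1, x2))
    return samples

rng = random.Random(0)
draws = gibbs_truncated_bivariate(rho=0.5, n_steps=2000, rng=rng)
```

Every draw satisfies the linear inequality constraints by construction, which is the property the thesis exploits for constrained GP sampling.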
APA, Harvard, Vancouver, ISO, and other styles
4

Dahlin, Johan. "Accelerating Monte Carlo methods for Bayesian inference in dynamical models." Doctoral thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-125992.

Full text
Abstract:
Making decisions and predictions from noisy observations are two important and challenging problems in many areas of society. Some examples of applications are recommendation systems for online shopping and streaming services, connecting genes with certain diseases and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computationally prohibitive to employ. The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC), that is, strategies for reducing the computational effort while keeping or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem and (ii) introducing a positive correlation between the point-wise estimates of the target. Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior but with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes. This can potentially be a practical strategy for inference in the big data era.
Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal.
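The particle Metropolis-Hastings algorithm discussed above can be sketched on a toy linear Gaussian state-space model (a minimal illustration under our own model and tuning choices, not the thesis's implementation): an unbiased particle-filter estimate of the likelihood stands in for the intractable exact likelihood inside a Metropolis-Hastings ratio.

```python
import math
import random

def simulate_ssm(phi, T, rng):
    """Simulate x_t = phi*x_{t-1} + v_t, y_t = x_t + e_t (unit noises)."""
    x, ys = 0.0, []
    for _ in range(T):
        x = phi * x + rng.gauss(0.0, 1.0)
        ys.append(x + rng.gauss(0.0, 1.0))
    return ys

def pf_loglik(phi, ys, n_part, rng):
    """Bootstrap particle filter estimate of log p(y_{1:T} | phi)."""
    parts = [0.0] * n_part
    ll = 0.0
    norm = 1.0 / math.sqrt(2.0 * math.pi)
    for y in ys:
        parts = [phi * xp + rng.gauss(0.0, 1.0) for xp in parts]
        ws = [norm * math.exp(-0.5 * (y - xp) ** 2) for xp in parts]
        ll += math.log(sum(ws) / n_part)
        parts = rng.choices(parts, weights=ws, k=n_part)  # resample
    return ll

def pmh(ys, n_iter, rng):
    """Particle Metropolis-Hastings for phi with a flat prior on (-1, 1)."""
    phi = 0.0
    ll = pf_loglik(phi, ys, 50, rng)
    chain, accepts = [], 0
    for _ in range(n_iter):
        prop = phi + rng.gauss(0.0, 0.1)
        if -1.0 < prop < 1.0:
            ll_prop = pf_loglik(prop, ys, 50, rng)
            if math.log(rng.random()) < ll_prop - ll:
                phi, ll = prop, ll_prop
                accepts += 1
        chain.append(phi)
    return chain, accepts / n_iter

rng = random.Random(1)
ys = simulate_ssm(0.7, T=50, rng=rng)
chain, acc_rate = pmh(ys, n_iter=100, rng=rng)
```

The thesis's acceleration strategies attack exactly this loop: gradient and Hessian information shapes the proposal, and correlating successive likelihood estimates reduces the variance of the acceptance ratio.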
Should the Riksbank raise or lower the repo rate at its next meeting in order to reach the inflation target? Which genes are associated with a certain disease? How can Netflix and Spotify know which films and which music I will want to watch or listen to next? These three problems are examples of questions where statistical models can be useful for providing support and a basis for decisions. Statistical models combine theoretical knowledge about, for example, the Swedish economic system with historical data to produce forecasts of future events. These forecasts can then be used to evaluate, for example, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications such as these, and many others, make statistical models important for many parts of society. One way of developing statistical models is to continuously update a model as more information is collected. This approach is called Bayesian statistics and is particularly useful when one has good prior insight into the model, or access to only a small amount of historical data with which to build it. A drawback of Bayesian statistics is that the computations required to update the model with the new information are often very complicated. In such situations one can instead simulate the outcome of millions of variants of the model and compare these against the historical observations at hand. One can then average over the variants that gave the best results to obtain a final model. It can therefore sometimes take days or weeks to develop a model. The problem becomes especially severe when using more advanced models that could give better forecasts but take too long to build. In this thesis we use a number of different strategies to facilitate or improve these simulations.
For example, we propose taking more insight about the system into account, thereby reducing the number of model variants that need to be examined: certain models can be ruled out in advance, since we have a good idea of roughly what a good model should look like. We can also modify the simulation so that it moves more easily between different types of models, so that the space of all possible models is explored more efficiently. We propose a number of combinations and modifications of existing methods to speed up the fitting of the model to the observations, and we show that in some cases the computation time can be reduced from several days to about an hour. Hopefully this will in the future make it practical to use more advanced models, which in turn will result in better forecasts and decisions.
APA, Harvard, Vancouver, ISO, and other styles
5

Vaičiulytė, Ingrida. "Markovo grandinės Monte-Karlo metodo tyrimas ir taikymas." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20141209_112429-75205.

Full text
Abstract:
This dissertation analyses adaptive Markov chain Monte Carlo (MCMC) methods, with the aim of creating computationally efficient algorithms for data-analysis decision-making with a prescribed accuracy. Parameter-estimation tasks are formulated and solved for multivariate distributions constructed hierarchically (the skew t distribution, the Poisson-Gaussian model, and the stable symmetric vector law). To create the adaptive MCMC procedure, a sequential Monte Carlo sample-generation method is applied, introducing a statistical termination rule and regulation of the Markov chain sample size. The statistical tasks solved by this method reveal characteristics of the computational problems inherent in MCMC. The effectiveness of the MCMC algorithms is analysed with a statistical modelling method constructed in the dissertation. Experiments with athletes' data and with financial data of enterprises in the health-care industry confirmed that the numerical properties of the method correspond to the theoretical model. The methods and algorithms created are also applied to build a model for sociological data analysis. Tests showed that the adaptive MCMC algorithm yields estimates of the examined distribution parameters with fewer chains, reducing the volume of computation roughly by half. The algorithms constructed in this dissertation can be used to study systems of a stochastic nature and to solve other statistical problems by the MCMC method.
APA, Harvard, Vancouver, ISO, and other styles
6

Puengnim, Anchalee. "Classification de modulations linéaires et non-linéaires à l'aide de méthodes bayésiennes." Toulouse, INPT, 2008. http://ethesis.inp-toulouse.fr/archive/00000676/.

Full text
Abstract:
This thesis studies the classification of digital linear and non-linear modulations using Bayesian methods. Modulation recognition consists of identifying, at the receiver of a transmission chain, the alphabet to which the symbols of the transmitted message belong. This recognition is necessary in many communication scenarios, for example to secure transmissions by detecting unauthorized users, or to determine which terminal interferes with the others. The received signal is generally affected by a number of impairments, due to imperfect synchronisation of the transmitter and receiver, imperfect demodulation, and imperfect equalisation of the transmission channel. We propose several classification methods that mitigate the effects of these impairments: the received symbols are corrected and then compared with the dictionary of transmitted symbols. More specifically, we study three techniques for estimating the posterior probabilities of the received signals conditionally on each modulation.
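The Bayesian decision rule underlying such classifiers can be sketched as follows (a generic average-likelihood classifier under ideal AWGN assumptions, not one of the thesis's three techniques; the constellation set, noise level, and names are illustrative). The likelihood of each candidate modulation is a mixture over its constellation points, and the class with the highest posterior wins:

```python
import cmath
import math
import random

# Candidate constellations (unit average power); the set is illustrative.
CONSTELLATIONS = {
    "BPSK": [1 + 0j, -1 + 0j],
    "QPSK": [cmath.exp(1j * math.pi * (0.25 + 0.5 * k)) for k in range(4)],
}

def log_likelihood(ys, points, noise_var):
    """log p(y | constellation) for i.i.d. symbols in complex AWGN,
    with each constellation point equally likely a priori."""
    ll = 0.0
    for y in ys:
        # Mixture over the constellation points.
        probs = [math.exp(-abs(y - p) ** 2 / noise_var) / len(points)
                 for p in points]
        ll += math.log(sum(probs) / (math.pi * noise_var))
    return ll

def classify(ys, noise_var):
    """Maximum-posterior classification with a uniform prior over classes."""
    return max(CONSTELLATIONS, key=lambda name:
               log_likelihood(ys, CONSTELLATIONS[name], noise_var))

rng = random.Random(2)
noise_var = 0.1
ys = [rng.choice(CONSTELLATIONS["QPSK"])
      + complex(rng.gauss(0, math.sqrt(noise_var / 2)),
                rng.gauss(0, math.sqrt(noise_var / 2)))
      for _ in range(200)]
label = classify(ys, noise_var)   # QPSK data should be recognised as QPSK
```

The thesis goes further precisely because this ideal rule breaks down under the synchronisation, demodulation, and equalisation impairments listed above.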
APA, Harvard, Vancouver, ISO, and other styles
7

Fang, Youhan. "Efficient Markov Chain Monte Carlo Methods." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10809188.

Full text
Abstract:

Generating random samples from a prescribed distribution is one of the most important and challenging problems in machine learning, Bayesian statistics, and the simulation of materials. Markov chain Monte Carlo (MCMC) methods are usually the required tool for this task when the desired distribution is known only up to a multiplicative constant. Samples produced by an MCMC method are real values in N-dimensional space, called the configuration space, and their distribution converges to the target distribution in the limit. However, existing MCMC methods still face many challenges that are not well resolved. Difficulties in sampling with MCMC methods include, but are not limited to, dealing with high-dimensional and multimodal problems, high computational cost due to extremely large datasets in Bayesian machine learning models, and the lack of reliable indicators for detecting convergence and measuring the accuracy of sampling. This dissertation focuses on new theory and methodology for efficient MCMC methods that aim to overcome these difficulties.

One contribution of this dissertation is a set of generalizations of hybrid Monte Carlo (HMC). An HMC method combines a discretized dynamical system in an extended space, called the state space, with an acceptance test based on the Metropolis criterion. The discretized dynamical system used in HMC is volume preserving, meaning that in the state space the absolute Jacobian of a map from one point on the trajectory to another is 1. Volume preservation is, however, not necessary for the general purpose of sampling. A general theory allowing the use of non-volume-preserving dynamics for proposing MCMC moves is proposed. Examples, including isokinetic dynamics and variable-mass Hamiltonian dynamics with an explicit integrator, are designed with fewer restrictions based on the general theory. Experiments show improved efficiency for sampling high-dimensional multimodal problems. A second contribution is stochastic gradient samplers with reduced bias. An in-depth analysis of the noise introduced by the stochastic gradient is provided, and two methods to reduce the bias in the distribution of samples are proposed: one corrects the dynamics using an estimate of the noise based on subsampled data, and the other introduces additional variables, with corresponding dynamics, to adaptively reduce the bias. Extensive experiments show that both methods outperform existing methods. A third contribution is quasi-reliable estimates of effective sample size. We propose a more reliable indicator, the longest integrated autocorrelation time over all functions in the state space, for detecting the convergence and measuring the accuracy of MCMC methods. The superiority of the new indicator is supported by experiments on both synthetic and real problems.

Minor contributions include a general framework for changing variables and a numerical integrator for Hamiltonian dynamics with fourth-order accuracy. The idea of changing variables is to transform the potential energy from a function of the original variable into a function of the new variable, so that undesired properties can be removed. Two examples are provided, and preliminary experimental results support this idea. The fourth-order integrator is constructed by combining the idea of the simplified Takahashi-Imada method with a two-stage Hessian-based integrator. The proposed method, called the two-stage simplified Takahashi-Imada method, shows outstanding performance over existing methods in high-dimensional sampling problems.
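For context, the baseline HMC scheme that the generalisations above relax can be sketched in a few lines (a textbook illustration with our own target and tuning, not the dissertation's integrators). The leapfrog map is volume preserving and reversible, so the Metropolis test needs only the energy difference:

```python
import math
import random

def hmc_sample(log_prob_grad, x0, step, n_leap, n_samples, rng):
    """Hybrid/Hamiltonian Monte Carlo for a 1-D target.
    log_prob_grad(x) returns (log pi(x), d/dx log pi(x))."""
    x = x0
    lp, g = log_prob_grad(x)
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)
        h0 = -lp + 0.5 * p * p          # Hamiltonian = -log joint
        xn, lpn, gn = x, lp, g
        # Leapfrog: volume-preserving, reversible discretisation.
        p += 0.5 * step * gn
        for _ in range(n_leap):
            xn += step * p
            lpn, gn = log_prob_grad(xn)
            p += step * gn
        p -= 0.5 * step * gn            # undo the extra half momentum step
        h1 = -lpn + 0.5 * p * p
        if math.log(rng.random()) < h0 - h1:   # Metropolis criterion
            x, lp, g = xn, lpn, gn
        samples.append(x)
    return samples

# Illustrative target: standard normal, log pi(x) = -x^2/2 + const.
target = lambda x: (-0.5 * x * x, -x)
rng = random.Random(3)
draws = hmc_sample(target, x0=0.0, step=0.3, n_leap=10, n_samples=2000, rng=rng)
```

Because the leapfrog Jacobian is exactly 1, no Jacobian term appears in the acceptance test; the dissertation's point is that this requirement can be dropped if the Jacobian is accounted for instead.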

APA, Harvard, Vancouver, ISO, and other styles
8

Murray, Iain Andrew. "Advances in Markov chain Monte Carlo methods." Thesis, University College London (University of London), 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487199.

Full text
Abstract:
Probability distributions over many variables occur frequently in Bayesian inference, statistical physics and simulation studies. Samples from distributions give insight into their typical behavior and can allow approximation of any quantity of interest, such as expectations or normalizing constants. Markov chain Monte Carlo (MCMC), introduced by Metropolis et al. (1953), allows sampling from distributions with intractable normalization, and remains one of the most important tools for approximate computation with probability distributions. While not needed by MCMC, normalizers are key quantities: in Bayesian statistics marginal likelihoods are needed for model comparison; in statistical physics many physical quantities relate to the partition function. In this thesis we propose and investigate several new Monte Carlo algorithms, both for evaluating normalizing constants and for improved sampling of distributions. Many MCMC correctness proofs rely on using reversible transition operators; this can lead to chains exploring by slow random walks. After reviewing existing MCMC algorithms, we develop a new framework for constructing non-reversible transition operators from existing reversible ones. Next we explore and extend MCMC-based algorithms for computing normalizing constants. In particular we develop a new MCMC operator and a Nested Sampling approach for the Potts model. Our results demonstrate that these approaches can be superior to finding normalizing constants by annealing methods and can obtain better posterior samples. Finally we consider 'doubly-intractable' distributions with extra unknown normalizer terms that do not cancel in standard MCMC algorithms. We propose using several deterministic approximations for the unknown terms, and investigate their interaction with sampling algorithms. We then develop novel exact-sampling-based MCMC methods, the Exchange Algorithm and Latent Histories.
For the first time these algorithms do not require separate approximation before sampling begins. Moreover, the Exchange Algorithm outperforms the only alternative sampling algorithm for doubly intractable distributions.
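The Exchange Algorithm mentioned above can be sketched on a toy model where exact simulation is easy (our own example, not taken from the thesis): an auxiliary data set drawn exactly from the proposed parameter's distribution makes the unknown normaliser cancel in the acceptance ratio, so only the unnormalised likelihood is ever evaluated.

```python
import math
import random

def laplace_sample(theta, n, rng):
    """Exact draws from the density (theta/2) * exp(-theta * |y|)."""
    return [rng.expovariate(theta) * rng.choice((-1.0, 1.0)) for _ in range(n)]

def exchange_mcmc(ys, n_iter, rng):
    """Exchange algorithm for p(theta | ys) with an Exponential(1) prior.
    Only the unnormalised likelihood exp(-theta * sum|y_i|) appears;
    the normaliser (theta/2)^n is treated as intractable and cancels
    thanks to the exactly simulated auxiliary data set."""
    n, s_y = len(ys), sum(abs(y) for y in ys)
    theta, chain = 1.0, []
    for _ in range(n_iter):
        prop = abs(theta + rng.gauss(0.0, 0.5))   # symmetric reflected walk
        s_w = sum(abs(w) for w in laplace_sample(prop, n, rng))
        # log [prior(prop) f*(y|prop) f*(w|theta)] - log [prior(theta) ...]
        log_ratio = ((-prop - prop * s_y - theta * s_w)
                     - (-theta - theta * s_y - prop * s_w))
        if math.log(rng.random()) < log_ratio:
            theta = prop
        chain.append(theta)
    return chain

rng = random.Random(4)
ys = laplace_sample(2.0, 20, rng)
chain = exchange_mcmc(ys, n_iter=2000, rng=rng)
```

Here the normaliser is actually known, which is what lets us pick a toy where exact auxiliary simulation is trivial; in genuinely doubly-intractable models such as the Potts model, exact simulation is the hard ingredient.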
APA, Harvard, Vancouver, ISO, and other styles
9

Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis-Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show 'sticking' behaviour, where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune, and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state, it is possible to use new transition operators, such as those based on slice-sampling algorithms, within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process, with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables, by making the approximation that generated observed variables are 'close' rather than exactly equal to observed data.
Although it makes the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify, and standard algorithms tend to perform poorly when conditioning on high-dimensional observations. This often requires further approximation by reducing the observations to lower-dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in a Markov chain state can allow the use of more efficient and robust MCMC methods, such as slice sampling and Hamiltonian Monte Carlo (HMC), within an ABC framework. In some cases this allows inference conditioned on the full set of observed values where standard ABC methods require reduction to lower-dimensional summaries for tractability. Further, we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models, which allows the generated observed variables to be conditioned arbitrarily close to observed data while maintaining computational tractability. As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and to allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values. The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within an HMC method.
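The basic ABC approximation described above can be sketched as a rejection sampler (a minimal illustration with an invented model, summary, and tolerance; the thesis's contribution is precisely to move beyond such schemes): prior draws are kept only if the data they generate lands within a tolerance of the observed data.

```python
import random

def abc_rejection(observed, simulate, prior_draw, distance, eps, n_draws, rng):
    """ABC rejection: keep prior draws whose simulated data lands
    within eps of the observed data."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return accepted

# Illustrative model: theta ~ U(-5, 5), summary = mean of 50 N(theta, 1) draws.
def simulate(theta, rng):
    return sum(rng.gauss(theta, 1.0) for _ in range(50)) / 50

rng = random.Random(5)
y_obs = 1.3   # pretend observed summary statistic
post = abc_rejection(
    observed=y_obs,
    simulate=simulate,
    prior_draw=lambda r: r.uniform(-5.0, 5.0),
    distance=lambda a, b: abs(a - b),
    eps=0.2,
    n_draws=5000,
    rng=rng,
)
```

The acceptance rate collapses as the observation dimension grows, which is why the auxiliary-variable MCMC and constrained HMC methods of the thesis are needed for high-dimensional conditioning.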
APA, Harvard, Vancouver, ISO, and other styles
10

Xu, Jason Qian. "Markov Chain Monte Carlo and Non-Reversible Methods." Thesis, The University of Arizona, 2012. http://hdl.handle.net/10150/244823.

Full text
Abstract:
The bulk of Markov chain Monte Carlo applications make use of reversible chains, relying on the Metropolis-Hastings algorithm or similar methods. While reversible chains have the advantage of being relatively easy to analyze, it has been shown that non-reversible chains may outperform them in various scenarios. Neal proposes an algorithm that transforms a general reversible chain into a non-reversible chain, with a construction that does not increase the asymptotic variance. These modified chains avoid the diffusive backtracking behavior that causes Markov chains to remain trapped in one region for too long. In this paper, we provide an introduction to MCMC, and discuss the Metropolis algorithm and Neal's algorithm. We introduce a decaying-memory algorithm inspired by Neal's idea, and then analyze and compare the performance of these chains on several examples.
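A classic instance of the non-reversible idea is Gustafson's guided walk, which, like the constructions discussed above, suppresses diffusive backtracking by persisting in one direction until a rejection occurs (the discrete Gaussian target and tuning here are our own illustrative choices):

```python
import math
import random

def guided_walk(log_pi, x0, n_steps, rng):
    """Gustafson-style non-reversible 'guided walk' on the integers:
    keep stepping in the current direction while moves are accepted,
    and flip direction on rejection. The stationary distribution is
    pi(x) times a uniform direction variable."""
    x, d = x0, 1
    samples = []
    for _ in range(n_steps):
        y = x + d
        if math.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y            # accepted: persist in the same direction
        else:
            d = -d           # rejected: reverse direction
        samples.append(x)
    return samples

# Illustrative discrete Gaussian target, sigma = 5.
log_pi = lambda i: -i * i / 50.0
rng = random.Random(6)
draws = guided_walk(log_pi, x0=0, n_steps=20000, rng=rng)
```

Compared with a symmetric random walk, the lifted direction variable makes the chain sweep across the support in long runs instead of dithering, which is exactly the backtracking behavior the abstract describes.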
APA, Harvard, Vancouver, ISO, and other styles
11

Pereira, Fernanda Chaves. "Bayesian Markov chain Monte Carlo methods in general insurance." Thesis, City University London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Cheal, Ryan. "Markov Chain Monte Carlo methods for simulation in pedigrees." Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Durmus, Alain. "High dimensional Markov chain Monte Carlo methods : theory, methods and applications." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLT001/document.

Full text
Abstract:
The subject of this thesis is the analysis of Markov chain Monte Carlo (MCMC) methods and the development of new methodologies to sample from high-dimensional distributions. Our work is divided into three main topics. The first problem addressed in this manuscript is the convergence of Markov chains in Wasserstein distance. Geometric and sub-geometric convergence bounds with explicit constants are derived under appropriate conditions. These results are then applied to the study of MCMC algorithms. The first algorithm analysed is an alternative scheme to the Metropolis Adjusted Langevin Algorithm (MALA), for which explicit geometric convergence bounds are established. The second method is the pre-conditioned Crank-Nicolson algorithm; it is shown that, under mild assumptions, the Markov chain associated with this algorithm is sub-geometrically ergodic in an appropriate Wasserstein distance. The second topic of this thesis is the study of the Unadjusted Langevin Algorithm (ULA). We are first interested in explicit convergence bounds in total variation under different kinds of assumptions on the potential associated with the target distribution. In particular, we pay attention to the dependence of the algorithm on the dimension of the state space. The case of fixed step sizes as well as the case of non-increasing sequences of step sizes are dealt with. When the target density is strongly log-concave, explicit bounds in Wasserstein distance are established. These results are then used to derive new bounds in total variation distance which improve on those previously derived under weaker conditions on the target density. The last part tackles new optimal scaling results for Metropolis-Hastings type algorithms. First, we extend the pioneering result on the optimal scaling of the random walk Metropolis algorithm to target densities which are differentiable in Lp mean for p ≥ 2. 
Then, we derive new Metropolis-Hastings type algorithms which have a better optimal scaling than the MALA algorithm. Finally, the stability and the convergence in total variation of these new algorithms are studied.
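As a point of reference for the Unadjusted Langevin Algorithm analysed in this thesis, a minimal sketch might look as follows (the step size and the standard Gaussian target are illustrative choices, not taken from the thesis):

```python
import numpy as np

def ula(grad_U, x0, gamma, n_steps, rng=None):
    """Unadjusted Langevin Algorithm: Euler discretisation of the
    Langevin diffusion dX_t = -grad U(X_t) dt + sqrt(2) dB_t,
    with no Metropolis accept/reject correction."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - gamma * grad_U(x) + np.sqrt(2 * gamma) * noise
        samples.append(x.copy())
    return np.array(samples)

# Illustrative target: standard Gaussian, U(x) = ||x||^2 / 2, so grad U(x) = x.
chain = ula(lambda x: x, x0=np.zeros(2), gamma=0.05, n_steps=20000)
```

Because there is no accept/reject step, the chain targets a biased approximation of the stationary law, with the bias controlled by the step size — the trade-off studied in the thesis.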
APA, Harvard, Vancouver, ISO, and other styles
14

Wu, Miaodan. "Markov chain Monte Carlo methods applied to Bayesian data analysis." Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.625087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Paul, Rajib. "Theoretical And Algorithmic Developments In Markov Chain Monte Carlo." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218184168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Pitt, Michael K. "Bayesian inference for non-Gaussian state space model using simulation." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Khedri, Shiler. "Markov chain Monte Carlo methods for exact tests in contingency tables." Thesis, Durham University, 2012. http://etheses.dur.ac.uk/5579/.

Full text
Abstract:
This thesis is mainly concerned with conditional inference for contingency tables, where the MCMC method is used to take a sample from the conditional distribution. One of the most common models to be investigated in contingency tables is the independence model. The classic test statistics for testing the independence hypothesis, the Pearson and likelihood-ratio chi-square statistics, rely on large-sample distributions, and the large-sample distribution does not provide a good approximation when the sample size is small. The Fisher exact test is an alternative method which enables us to compute the exact p-value for testing the independence hypothesis. For contingency tables of large dimension, the Fisher exact test is not practical as it requires counting all tables in the sample space. We review some enumeration methods which do not require us to count all tables in the sample space; however, these methods also fail to compute the exact p-value for contingency tables of large dimensions. Diaconis and Sturmfels (1998) introduced a method based on the Gröbner basis. It is quite complicated to compute the Gröbner basis for contingency tables as it is different for each individual table, not only for different sizes of table. We also review the method introduced by Aoki and Takemura (2003) using the minimal Markov basis for some particular tables. Bunea and Besag (2000) provided an algorithm using the most fundamental move to make the Markov chain irreducible over the sample space, defining an extra space. The algorithm was only introduced for 2 × J × K tables using the Rasch model. We give a direct proof of the irreducibility of the Markov chain achieved by the Bunea and Besag algorithm. This is then used to prove that the Bunea and Besag (2000) approach can be applied to some tables of higher dimensions, such as 3 × 3 × K and 3 × 4 × 4. 
The efficiency of the Bunea and Besag approach is extensively investigated for many different settings, such as tables of low/moderate/large dimensions, tables with special zero patterns, etc. The efficiency of the algorithms is measured by the effective sample size of the MCMC sample. We use two different metrics to penalise the effective sample size: the running time of the algorithm and the total number of bits used. These measures are also used to compute the efficiency of an adjustment of the Bunea and Besag algorithm, which shows that it outperforms the original algorithm in some settings.
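The kind of Markov chain over tables with fixed margins discussed above can be illustrated with a small sketch of the Diaconis-Sturmfels basic ±1 moves for a two-way table, with the hypergeometric conditional distribution as target (the example table and chain length are invented for illustration):

```python
import numpy as np

def mcmc_exact_test(table, n_steps=20000, rng=None):
    """Estimate the exact conditional p-value of the independence test
    by a Metropolis chain over tables with the observed margins.
    Proposals are basic +1/-1 moves on a random 2x2 subtable; the
    target is the conditional distribution proportional to
    1 / prod(n_ij!), so the acceptance ratio reduces to a simple
    product of cell counts."""
    rng = rng or np.random.default_rng(1)
    T = np.array(table, dtype=np.int64)
    R, C = T.shape

    def chi2(M):
        E = M.sum(1, keepdims=True) * M.sum(0, keepdims=True) / M.sum()
        return ((M - E) ** 2 / E).sum()

    obs = chi2(T)
    exceed = 0
    for _ in range(n_steps):
        i1, i2 = rng.choice(R, 2, replace=False)
        j1, j2 = rng.choice(C, 2, replace=False)
        # Propose +1 on the (i1,j1),(i2,j2) diagonal, -1 on the other;
        # moves that would create a negative cell are rejected outright.
        if T[i1, j2] > 0 and T[i2, j1] > 0:
            ratio = (T[i1, j2] * T[i2, j1]) / ((T[i1, j1] + 1) * (T[i2, j2] + 1))
            if rng.random() < min(1.0, ratio):
                T[i1, j1] += 1; T[i2, j2] += 1
                T[i1, j2] -= 1; T[i2, j1] -= 1
        exceed += chi2(T) >= obs
    return exceed / n_steps

p = mcmc_exact_test([[10, 2, 3], [1, 8, 4], [2, 3, 9]])
```

The ordered draw of (i1, i2) and (j1, j2) makes the proposal symmetric, so the Metropolis ratio involves only the target probabilities of the two tables.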
APA, Harvard, Vancouver, ISO, and other styles
18

Ibrahim, Adriana Irawati Nur. "New methods for mode jumping in Markov chain Monte Carlo algorithms." Thesis, University of Bath, 2009. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.500720.

Full text
Abstract:
Standard Markov chain Monte Carlo (MCMC) sampling methods can have problems sampling from multi-modal distributions. A variety of sampling methods have been introduced to overcome this problem. The mode jumping method of Tjelmeland & Hegstad (2001) tries to find a mode and propose a value from that mode in each mode jumping attempt. This approach is inefficient in that the work needed to find each mode and model the distribution in a neighbourhood of the mode is carried out repeatedly during the sampling process. We propose a new mode jumping approach which retains features of the Tjelmeland & Hegstad (2001) method but differs in that it finds the modes in an initial search and then uses this information to jump between modes effectively in the sampling run. Although this approach does not allow a second chance to find modes in the sampling run, we can show that the overall probability of missing a mode in our approach is still low. We apply our methods to sample from distributions which have continuous variables, discrete variables, a mixture of discrete and continuous variables, and variable dimension. We show that our methods work well in each case and, in general, are better than the MCMC sampling methods commonly used in these cases, and also better than the Tjelmeland & Hegstad (2001) method in particular.
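The general idea of locating modes first and then jumping between them can be illustrated with a generic sampler: an independence proposal built from a Gaussian mixture centred at the pre-found modes, mixed with a local random walk, both using the exact Metropolis-Hastings acceptance. This is a hypothetical illustration of the concept, not the thesis's algorithm; the target, modes and tuning constants are invented:

```python
import numpy as np

def mode_jumping_mh(log_pi, modes, scale, n_steps, p_jump=0.2, rng=None):
    """MH sampler for a multi-modal 1-D target: with probability p_jump,
    propose independently from a Gaussian mixture centred at pre-located
    modes; otherwise perform a local random-walk move."""
    rng = rng or np.random.default_rng(2)
    modes = np.asarray(modes, float)

    def log_q(x):  # mixture proposal density (independent of current state)
        return (np.log(np.mean(np.exp(-0.5 * ((x - modes) / scale) ** 2)))
                - np.log(scale * np.sqrt(2 * np.pi)))

    x = modes[0]
    out = np.empty(n_steps)
    for t in range(n_steps):
        if rng.random() < p_jump:                      # mode jump
            y = rng.choice(modes) + scale * rng.standard_normal()
            log_a = log_pi(y) + log_q(x) - log_pi(x) - log_q(y)
        else:                                          # local move
            y = x + 0.5 * rng.standard_normal()
            log_a = log_pi(y) - log_pi(x)
        if np.log(rng.random()) < log_a:
            x = y
        out[t] = x
    return out

# Hypothetical target: equal mixture of N(-10, 1) and N(10, 1).
log_pi = lambda x: np.logaddexp(-0.5 * (x + 10) ** 2, -0.5 * (x - 10) ** 2)
draws = mode_jumping_mh(log_pi, modes=[-10.0, 10.0], scale=1.0, n_steps=20000)
```

A plain random-walk chain with step 0.5 would essentially never cross the gap between these two modes, which is exactly the failure mode the thesis addresses.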
APA, Harvard, Vancouver, ISO, and other styles
19

Barata, Teresa Cordeiro Ferreira Nunes. "Two examples of curve estimation using Markov Chain Monte Carlo methods." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Nemirovsky, Danil. "Monte Carlo methods and Markov chain based approaches for PageRank computation." Nice, 2010. http://www.theses.fr/2010NICE4018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Niederberger, Theresa. "Markov chain Monte Carlo methods for parameter identification in systems biology models." Diss., Ludwig-Maximilians-Universität München, 2012. http://nbn-resolving.de/urn:nbn:de:bvb:19-157798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Gausland, Eivind Blomholm. "Parameter Estimation in Extreme Value Models with Markov Chain Monte Carlo Methods." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10032.

Full text
Abstract:

In this thesis I have studied how to estimate the parameters of an extreme value model with Markov Chain Monte Carlo (MCMC), given a data set. This is done with synthetic Gaussian time series generated by spectral densities ("spectrums") with a box shape. Three different spectrums have been used. In the acceptance probability of the MCMC algorithm, the likelihood is built up by dividing the time series into blocks consisting of a constant number of points. In each block, only the maximum value, i.e. the extreme value, is used, and each extreme value is then treated as independent. Since the time series analysed are generated this way, theoretical values exist for the parameters in the extreme value model, so when the MCMC algorithm is used to fit a model to the generated data, the true parameter values are already known. For the first and widest spectrum, the method is unable to find estimates matching the true values of the parameters in the extreme value model. For the two other spectrums, I obtained good estimates for some block lengths, while other block lengths gave poor estimates compared to the true values. Finally, it looked as if an increasing block length gave more accurate estimates as the spectrum became more narrow-banded; however, a final simulation on a time series generated by a narrow-banded spectrum disproved this hypothesis.
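The block-maxima likelihood construction described above can be sketched roughly as follows, here with a Gumbel model for the block maxima and a plain random-walk Metropolis sampler (the block length, step sizes and flat prior on (mu, log sigma) are illustrative assumptions, not the thesis's choices):

```python
import numpy as np

def block_maxima(series, block_len):
    """Split a series into non-overlapping blocks; keep each block maximum."""
    n = len(series) // block_len
    return series[: n * block_len].reshape(n, block_len).max(axis=1)

def gumbel_loglik(maxima, mu, sigma):
    z = (maxima - mu) / sigma
    return -len(maxima) * np.log(sigma) - np.sum(z + np.exp(-z))

def mh_gumbel(maxima, n_steps=20000, rng=None):
    """Random-walk Metropolis on (mu, log sigma); a flat prior on
    log sigma corresponds to the usual 1/sigma prior on sigma."""
    rng = rng or np.random.default_rng(3)
    mu, log_s = maxima.mean(), np.log(maxima.std())
    ll = gumbel_loglik(maxima, mu, np.exp(log_s))
    chain = np.empty((n_steps, 2))
    for t in range(n_steps):
        mu_p = mu + 0.05 * rng.standard_normal()
        ls_p = log_s + 0.05 * rng.standard_normal()
        ll_p = gumbel_loglik(maxima, mu_p, np.exp(ls_p))
        if np.log(rng.random()) < ll_p - ll:
            mu, log_s, ll = mu_p, ls_p, ll_p
        chain[t] = mu, np.exp(log_s)
    return chain

# Synthetic Gaussian series; block maxima are approximately Gumbel.
rng = np.random.default_rng(0)
x = rng.standard_normal(50000)
chain = mh_gumbel(block_maxima(x, 100))
```

Treating the block maxima as independent, as in the thesis, is what allows the likelihood to factorise over blocks.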

APA, Harvard, Vancouver, ISO, and other styles
23

Xu, Xiaojin. "Methods in Hypothesis Testing, Markov Chain Monte Carlo and Neuroimaging Data Analysis." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10927.

Full text
Abstract:
This thesis presents three distinct topics: a modified K-S test for autocorrelated data, improving MCMC convergence rates with residual augmentations, and resting-state fMRI data analysis. In Chapter 1, we present a modified K-S test to adjust for sample autocorrelation. We first demonstrate that the original K-S test does not have the nominal type one error rate when applied to autocorrelated samples. Then the notion of mixing conditions and Billingsley's theorem are reviewed. Based on these results, we suggest an effective sample size formula to adjust for sample autocorrelation. Extensive simulation studies are presented to demonstrate that this modified K-S test has the nominal type one error rate as well as reasonable power for various autocorrelated samples. An application to an fMRI data set is presented at the end. In Chapter 2 of this thesis, we present our work on MCMC sampling. Inspired by a toy example of a random effect model, we find there are two ways to boost the efficiency of MCMC algorithms: direct and indirect residual augmentations. We first report theoretical investigations under a class of normal/independence models, where we find an intriguing phase-transition type of phenomenon. Then we present an application of the direct residual augmentations to probit regression, where we also include a numerical comparison with other existing algorithms.
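One crude way to operationalise the effective-sample-size idea, using the AR(1) formula n_eff = n(1 − ρ)/(1 + ρ) and then thinning before applying the K-S test, might look like this — a loose sketch of the general concept, not necessarily the thesis's construction:

```python
import numpy as np
from scipy.stats import kstest

def effective_sample_size_ar1(x):
    """Effective sample size for an AR(1)-like series,
    n_eff = n * (1 - rho) / (1 + rho), with rho the lag-1 autocorrelation."""
    x = np.asarray(x, float)
    rho = np.corrcoef(x[:-1], x[1:])[0, 1]
    rho = max(rho, 0.0)          # guard against small negative estimates
    return len(x) * (1 - rho) / (1 + rho)

def modified_ks_pvalue(x, cdf="norm"):
    """K-S test using the effective rather than the nominal sample size,
    here implemented crudely by thinning to n_eff equally spaced points."""
    n_eff = int(effective_sample_size_ar1(x))
    idx = np.linspace(0, len(x) - 1, max(n_eff, 2)).astype(int)
    return kstest(np.asarray(x)[idx], cdf).pvalue
```

With positively autocorrelated data the nominal sample size overstates the information content, which is why the unadjusted K-S test rejects too often.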
Statistics
APA, Harvard, Vancouver, ISO, and other styles
24

Demiris, Nikolaos. "Bayesian inference for stochastic epidemic models using Markov chain Monte Carlo methods." Thesis, University of Nottingham, 2004. http://eprints.nottingham.ac.uk/10078/.

Full text
Abstract:
This thesis is concerned with statistical methodology for the analysis of stochastic SIR (Susceptible → Infective → Removed) epidemic models. We adopt the Bayesian paradigm and develop suitably tailored Markov chain Monte Carlo (MCMC) algorithms. The focus is on methods that are easy to generalise in order to accommodate epidemic models with complex population structures. Additionally, the models are general enough to be applicable to a wide range of infectious diseases. We introduce the stochastic epidemic models of interest and the MCMC methods we shall use, and we review existing methods of statistical inference for epidemic models. We develop algorithms that utilise multiple-precision arithmetic to overcome the well-known numerical problems in the calculation of the final size distribution for the generalised stochastic epidemic. Consequently, we use these exact results to evaluate the precision of asymptotic theorems previously derived in the literature. We also use the exact final size probabilities to obtain the posterior distribution of the threshold parameter R0. We proceed to develop methods of statistical inference for an epidemic model with two levels of mixing. This model assumes that the population is partitioned into subpopulations and permits infection on both local (within-group) and global (population-wide) scales. We adopt two different data augmentation algorithms. The first method introduces an appropriate latent variable, the final severity, for which we have asymptotic information in the event of an outbreak among a population with a large number of groups. Hence, approximate inference can be performed conditional on a "major" outbreak, a common assumption for stochastic processes with threshold behaviour such as epidemics and branching processes. In the last part of this thesis we use a random graph representation of the epidemic process and we impute more detailed information about the infection spread. 
The augmented state space contains aspects of the infection spread that have been impossible to obtain before. Additionally, the method is exact in the sense that it works for any (finite) population and group sizes and does not assume that the epidemic is above threshold. Potential uses of the extra information include the design and testing of appropriate prophylactic measures such as different vaccination strategies. An attractive feature is that the two algorithms complement each other, in the sense that when the number of groups is large the approximate method (which is faster) is almost as accurate as the exact one and can be used instead. Finally, it is straightforward to extend our methods to more complex population structures such as overlapping groups, small-world and scale-free networks.
APA, Harvard, Vancouver, ISO, and other styles
25

Li, Yuanzhi. "Bayesian Models for Repeated Measures Data Using Markov Chain Monte Carlo Methods." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/6997.

Full text
Abstract:
Bayesian models for repeated measures data are fitted in three different data analysis projects. Markov Chain Monte Carlo (MCMC) methodology is applied in each case, with Gibbs sampling and/or an adaptive Metropolis-Hastings (MH) algorithm used to simulate the posterior distribution of the parameters. We implement a Bayesian model with different variance-covariance structures on an audit fee data set. Block structures and linear models for variances are used to examine the linear trend and the different behaviours before and after the regulatory change during 2004-2005. We propose a Bayesian hierarchical model with latent teacher effects to determine whether teacher professional development (PD) utilising cyber-enabled resources leads to meaningful student learning outcomes, measured by 8th-grade student end-of-year scores (CRT scores) for students whose teachers underwent PD. Bayesian variable selection methods are applied to select teacher learning instrument variables to predict teacher effects. We fit a Bayesian two-part model, with the first part a multivariate probit model and the second part a log-normal regression, to a repeated measures health care data set to analyse the relationship between Body Mass Index (BMI) and health care expenditures, and the correlation between the probability of expenditures and the dollar amount spent given expenditures. Models were fitted to a training set and predictions were made on both the training set and the test set.
APA, Harvard, Vancouver, ISO, and other styles
26

Spade, David Allen. "Investigating Convergence of Markov Chain Monte Carlo Methods for Bayesian Phylogenetic Inference." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1372173121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Veitch, John D. "Applications of Markov Chain Monte Carlo methods to continuous gravitational wave data analysis." Thesis, Connect to e-thesis to view abstract. Move to record for print version, 2007. http://theses.gla.ac.uk/35/.

Full text
Abstract:
Thesis (Ph.D.) - University of Glasgow, 2007.
Ph.D. thesis submitted to Information and Mathematical Sciences Faculty, Department of Mathematics, University of Glasgow, 2007. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
28

Browne, William J. "Applying MCMC methods to multi-level models." Thesis, University of Bath, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Walker, Neil Rawlinson. "A Bayesian approach to the job search model and its application to unemployment durations using MCMC methods." Thesis, University of Newcastle Upon Tyne, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Olvera, Astivia Oscar Lorenzo. "On the estimation of the polychoric correlation coefficient via Markov Chain Monte Carlo methods." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44349.

Full text
Abstract:
Bayesian statistics is an alternative approach to traditional frequentist statistics that is rapidly gaining adherents across different scientific fields. Although initially only accessible to statisticians or mathematically sophisticated data analysts, advances in modern computational power are helping to make this new paradigm approachable to the everyday researcher, and this dissemination is helping open doors to problems that have remained unsolvable, or whose solution was extremely complicated, through the use of classical statistics. In spite of this, many researchers in the behavioural or educational sciences are either unaware of this new approach or only vaguely familiar with some of its basic tenets. The primary purpose of this thesis is to take a well-known problem in psychometrics, the estimation of the polychoric correlation coefficient, and solve it using Bayesian statistics through the method developed by Albert (1992). Through the use of computer simulations this method is compared to traditional maximum likelihood estimation across various sample sizes, skewness levels and numbers of discretisation points for the latent variable, highlighting the cases where the Bayesian approach is superior, inferior or equally effective to the maximum likelihood approach. Another issue investigated is a sensitivity analysis of the prior probability distributions, where a skewed (bivariate log-normal) and a symmetric (bivariate normal) prior are used to calculate the polychoric correlation coefficient when fed data with varying degrees of skewness, helping demonstrate to the reader how changing the prior distribution for certain kinds of data helps or hinders the estimation process. The most important results of these studies are discussed, as well as future implications for the use of Bayesian statistics in psychometrics.
APA, Harvard, Vancouver, ISO, and other styles
31

Witte, Hugh Douglas. "Markov chain Monte Carlo and data augmentation methods for continuous-time stochastic volatility models." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/283976.

Full text
Abstract:
In this paper we exploit some recent computational advances in Bayesian inference, coupled with data augmentation methods, to estimate and test continuous-time stochastic volatility models. We augment the observable data with a latent volatility process which governs the evolution of the data's volatility. The level of the latent process is estimated at finer increments than the data are observed in order to derive a consistent estimator of the variance over each time period the data are measured. The latent process follows a law of motion which has either a known transition density or an approximation to the transition density that is an explicit function of the parameters characterizing the stochastic differential equation. We analyze several models which differ with respect to both their drift and diffusion components. Our results suggest that for two size-based portfolios of U.S. common stocks, a model in which the volatility process is characterized by nonstationarity and constant elasticity of instantaneous variance (with respect to the level of the process) greater than 1 best describes the data. We show how to estimate the various models, undertake the model selection exercise, update posterior distributions of parameters and functions of interest in real time, and calculate smoothed estimates of within sample volatility and prediction of out-of-sample returns and volatility. One nice aspect of our approach is that no transformations of the data or the latent processes, such as subtracting out the mean return prior to estimation, or formulating the model in terms of the natural logarithm of volatility, are required.
APA, Harvard, Vancouver, ISO, and other styles
32

Bray, Isabelle Cella. "Modelling the prevalence of Down syndrome with applications of Markov chain Monte Carlo methods." Thesis, University of Plymouth, 1998. http://hdl.handle.net/10026.1/2408.

Full text
Abstract:
This thesis was motivated by applications in the epidemiology of Down syndrome and prenatal screening for Down syndrome. Methodological problems arising in these applications include under-ascertainment of cases in livebirth studies, double-sampled data with missing observations and coarsening of data. These issues are considered from a classical perspective using maximum likelihood and from a Bayesian viewpoint employing Markov chain Monte Carlo (MCMC) techniques. Livebirth prevalence studies published in the literature used a variety of data collection methods and many are of uncertain completeness. In two of the nine studies an estimate of the level of under-reporting is available. We present a meta-analysis of these studies in which maternal age-related risks and the levels of under-ascertainment in individual studies are estimated simultaneously. A modified logistic model is used to describe the relationship between Down syndrome prevalence and maternal age. The model is then extended to include data from several studies of prevalence rates observed at the times of chorionic villus sampling (CVS) and amniocentesis. New estimates for spontaneous loss rates between the times of CVS, amniocentesis and live birth are presented. The classical analysis of livebirth prevalence data is then compared with an MCMC analysis which allows prior information concerning ascertainment to be incorporated. This approach is particularly attractive since the double-sampled data structure includes missing observations. The MCMC algorithm, which uses single-component Metropolis-Hastings steps to simulate model parameters and missing data, is run under three alternative prior specifications. Several convergence diagnostics are also considered and compared. Finally, MCMC techniques are used to model the distribution of fetal nuchal translucency (NT), an ultrasound marker for Down syndrome. 
The data are a mixture of measurements rounded to whole millimetres and measurements more accurately recorded to one decimal place. An MCMC algorithm is applied to simulate the proportion of measurements rounded to whole millimetres and parameters to describe the distribution of NT in unaffected and Down syndrome pregnancies. Predictive probabilities of Down syndrome given NT and maternal age are then calculated.
APA, Harvard, Vancouver, ISO, and other styles
33

Olsen, Andrew Nolan. "When Infinity is Too Long to Wait: On the Convergence of Markov Chain Monte Carlo Methods." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1433770406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Niederberger, Theresa [Verfasser], and Patrick [Akademischer Betreuer] Cramer. "Markov chain Monte Carlo methods for parameter identification in systems biology models / Theresa Niederberger. Betreuer: Patrick Cramer." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2012. http://d-nb.info/1036101029/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Yang, Chao. "ON PARTICLE METHODS FOR UNCERTAINTY QUANTIFICATION IN COMPLEX SYSTEMS." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511967797285962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Lewis, John Robert. "Bayesian Restricted Likelihood Methods." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1407505392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Walker, Matthew James. "Methods for Bayesian inversion of seismic data." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10504.

Full text
Abstract:
The purpose of Bayesian seismic inversion is to combine information derived from seismic data and prior geological knowledge to determine a posterior probability distribution over parameters describing the elastic and geological properties of the subsurface. Typically the subsurface is modelled by a cellular grid model containing thousands or millions of cells within which these parameters are to be determined. Thus such inversions are computationally expensive due to the size of the parameter space (being proportional to the number of grid cells) over which the posterior is to be determined. Therefore, in practice approximations to Bayesian seismic inversion must be considered. A particular, existing approximate workflow is described in this thesis: the so-called two-stage inversion method explicitly splits the inversion problem into elastic and geological inversion stages. These two stages sequentially estimate the elastic parameters given the seismic data, and then the geological parameters given the elastic parameter estimates, respectively. In this thesis a number of methodologies are developed which enhance the accuracy of this approximate workflow. To reduce computational cost, existing elastic inversion methods often incorporate only simplified prior information about the elastic parameters. Thus a method is introduced which transforms such results, obtained using prior information specified using only two-point geostatistics, into new estimates containing sophisticated multi-point geostatistical prior information. The method uses a so-called deep neural network, trained using only synthetic instances (or `examples') of these two estimates, to apply this transformation. The method is shown to improve the resolution and accuracy (by comparison to well measurements) of elastic parameter estimates determined for a real hydrocarbon reservoir. 
It has been shown previously that so-called mixture density network (MDN) inversion can be used to solve geological inversion analytically (and thus very rapidly and efficiently), but only under certain assumptions about the geological prior distribution. A so-called prior replacement operation is developed here, which can be used to relax these requirements. It permits the efficient MDN method to be incorporated into general stochastic geological inversion methods which are free from the restrictive assumptions. Such methods rely on the use of Markov chain Monte Carlo (MCMC) sampling, which estimates the posterior (over the geological parameters) by producing a correlated chain of samples from it. It is shown that this approach can yield biased estimates of the posterior. Thus an alternative method which obtains a set of non-correlated samples from the posterior is developed, avoiding the possibility of bias in the estimate. The new method was tested on a synthetic geological inversion problem; its results compared favourably to those of Gibbs sampling (an MCMC method) on the same problem, which exhibited very significant bias. The geological prior information used in seismic inversion can be derived from real images which bear similarity to the geology anticipated within the target region of the subsurface. Such so-called training images are not always available from which this information (in the form of geostatistics) may be extracted. In this case appropriate training images may be generated by geological experts. However, this process can be costly and difficult. Thus an elicitation method (based on a genetic algorithm) is developed here which obtains the appropriate geostatistics reliably and directly from a geological expert, without the need for training images. Twelve experts were asked to use the algorithm (individually) to determine the appropriate geostatistics for a physical (target) geological image. 
The majority of the experts were able to obtain a set of geostatistics which were consistent with the true (measured) statistics of the target image.
APA, Harvard, Vancouver, ISO, and other styles
38

Allchin, Lorraine Doreen May. "Statistical methods for mapping complex traits." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:65f392ba-1b64-4b00-8871-7cee98809ce1.

Full text
Abstract:
The first section of this thesis addresses the problem of simultaneously identifying multiple loci that are associated with a trait, using a Bayesian Markov Chain Monte Carlo method. It is applicable to both case/control and quantitative data. I present simulations comparing the method to standard frequentist methods in human case/control and mouse QTL datasets, and show that in the case/control simulations the standard frequentist method outperforms my model for all but the highest-effect simulations, and that for the mouse QTL simulations my method performs as well as the frequentist method in some cases and worse in others. I also present analyses of real data and simulations applying my method to a simulated epistasis data set. The next section was inspired by the challenges involved in applying a Markov Chain Monte Carlo method to genetic data. It is an investigation into the performance and benefits of the Matlab parallel computing toolbox, specifically its implementation of the CUDA programming language within Matlab's higher-level language. CUDA is a language which allows computational calculations to be carried out on the computer's graphics processing unit (GPU) rather than its central processing unit (CPU). The appeal of this toolbox is its ease of use, as few code adaptations are needed. The final project of this thesis was to develop an HMM for reconstructing the founders of sparsely sequenced inbred populations. The motivation here is that, whilst sequencing costs are rapidly decreasing, it is still prohibitively expensive to fully sequence a large number of individuals. It was proposed that, for populations descended from a known number of founders, it would be possible to sequence these individuals with very low coverage, use a hidden Markov model (HMM) to represent the chromosomes as mosaics of the founders, then use these states to impute the missing data. 
For this I developed a Viterbi algorithm with a transition probability matrix, based on the recombination rate, which changes for each observed state.
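A Viterbi recursion with position-dependent transition probabilities, as described in the final project, can be sketched as follows (the array shapes and toy example are assumptions for illustration, not the thesis's implementation):

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most probable state path for an HMM.
    log_emit:  (T, K) log-emission probabilities per observation and state
    log_trans: (T-1, K, K) log-transition matrices, indexed [from, to],
               which may vary with position (e.g. with local recombination rate)
    log_init:  (K,) log initial-state probabilities"""
    T, K = log_emit.shape
    delta = log_init + log_emit[0]          # best log-score ending in each state
    back = np.zeros((T, K), dtype=int)      # backpointers for path recovery
    for t in range(1, T):
        scores = delta[:, None] + log_trans[t - 1]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):          # trace backpointers
        path[t] = back[t + 1, path[t + 1]]
    return path
```

For a toy 2-state chain whose emissions favour state 0 for the first two observations and state 1 for the last two, the recovered path is the expected mosaic boundary.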
APA, Harvard, Vancouver, ISO, and other styles
39

Dhulipala, Lakshmi Narasimha Somayajulu. "Bayesian Methods for Intensity Measure and Ground Motion Selection in Performance-Based Earthquake Engineering." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/88493.

Full text
Abstract:
The objective of quantitative Performance-Based Earthquake Engineering (PBEE) is designing buildings that meet the specified performance objectives when subjected to an earthquake. One challenge to completely relying upon a PBEE approach in design practice is the open-ended nature of characterizing the earthquake ground motion by selecting appropriate ground motions and Intensity Measures (IM) for seismic analysis. This open-ended nature changes the quantified building performance depending upon the ground motions and IMs selected. So, improper ground motion and IM selection can lead to errors in structural performance prediction and thus to poor designs. Hence, the goal of this dissertation is to propose methods and tools that enable an informed selection of earthquake IMs and ground motions, with the broader goal of contributing toward a robust PBEE analysis. In doing so, the change of perspective and the mechanism to incorporate additional information provided by Bayesian methods will be utilized. Evaluation of the ability of IMs towards predicting the response of a building with precision and accuracy for a future, unknown earthquake is a fundamental problem in PBEE analysis. Whereas current methods for IM quality assessment are subjective and have multiple criteria (hence making IM selection challenging), a unified method is proposed that enables rating the numerous IMs. This is done by proposing the first quantitative metric for assessing IM accuracy in predicting the building response to a future earthquake, and then by investigating the relationship between precision and accuracy. This unified metric is further expected to provide a pathway toward improving PBEE analysis by allowing the consideration of multiple IMs. Similar to IM selection, ground motion selection is important for PBEE analysis. Consensus on the "right" input motions for conducting seismic response analyses is often varied and dependent on the analyst. 
Hence, a general and flexible tool is proposed to aid ground motion selection. General here means the tool encompasses several structural types by considering their sensitivities to different ground motion characteristics. Flexible here means the tool can consider additional information about the earthquake process when it is available to the analyst. Additionally, in support of this ground motion selection tool, a simplified method for seismic hazard analysis for a vector of IMs is developed. This dissertation addresses four critical issues in IM and ground motion selection for PBEE by proposing: (1) a simplified method for performing vector hazard analysis given multiple IMs; (2) a Bayesian framework to aid ground motion selection which is flexible and general enough to incorporate the preferences of the analyst; (3) a unified metric to aid IM quality assessment for seismic fragility and demand hazard assessment; and (4) Bayesian models for capturing heteroscedasticity (non-constant standard deviation) in seismic response analyses, which may further influence IM selection.
Doctor of Philosophy
Earthquake ground shaking is a complex phenomenon since there is no unique way to assess its strength. Yet, the strength of ground motion (shaking) becomes an integral part of predicting the future earthquake performance of buildings using the Performance-Based Earthquake Engineering (PBEE) framework. The PBEE framework predicts building performance in terms of expected financial losses, possible downtime, and the potential of the building to collapse under a future earthquake. Much prior research has shown that the predictions made by the PBEE framework are heavily dependent upon how the strength of a future earthquake ground motion is characterized. This dependency leads to uncertainty in the predicted building performance and hence its seismic design. The goal of this dissertation therefore is to employ Bayesian reasoning, which takes into account the alternative explanations or perspectives of a research problem, and to propose robust quantitative methods that aid IM selection and ground motion selection in PBEE. The fact that the local intensity of an earthquake can be characterized in multiple ways using Intensity Measures (IM; e.g., peak ground acceleration) is problematic for PBEE because it leads to different PBEE results for different choices of the IM. While formal procedures for selecting an optimal IM exist, they may be considered subjective and involve multiple criteria, making their use difficult and inconclusive. Bayes' rule provides a mechanism called change of perspective, by which a problem that is difficult to solve from one perspective can be tackled from a different one. This change of perspective mechanism is used to propose a quantitative, unified metric for rating alternative IMs. The immediate application of this metric is aiding the selection of the best IM, one that would predict the building's earthquake performance with least bias.
Structural analysis for performance assessment in PBEE is conducted by selecting ground motions which match a target response spectrum (a representation of future ground motions). The definition of a target response spectrum lacks general consensus and is dependent on the analysts' preferences. To encompass all these preferences and requirements of analysts, a Bayesian target response spectrum which is general and flexible is proposed. While the generality of this Bayesian target response spectrum allows analysts to select those ground motions to which their structures are the most sensitive, its flexibility permits the incorporation of additional information (preferences) into the target response spectrum development. This dissertation addresses four critical questions in PBEE: (1) how can we best define ground motion at a site?; (2) if ground motion can only be defined by multiple metrics, how can we easily derive the probability of such shaking at a site?; (3) how do we use these multiple metrics to select a set of ground motion records that best capture the site's unique seismicity?; and (4) when those records are used to analyze the response of a structure, how can we be sure that a standard linear regression technique accurately captures the uncertainty in structural response at low and high levels of shaking?
APA, Harvard, Vancouver, ISO, and other styles
40

Matsumoto, Nobuyuki. "Geometry of configuration space in Markov chain Monte Carlo methods and the worldvolume approach to the tempered Lefschetz thimble method." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Bărbos, Andrei-Cristian. "Efficient high-dimension Gaussian sampling based on matrix splitting : application to Bayesian inversion." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0002/document.

Full text
Abstract:
The thesis deals with the problem of high-dimensional Gaussian sampling. Such a problem arises, for example, in Bayesian inverse problems in imaging, where the number of variables easily reaches an order of 10^6 to 10^9. The complexity of the sampling problem is inherently linked to the structure of the covariance matrix. Different solutions to tackle this problem have already been proposed, among which we emphasize the Hogwild algorithm, which runs local Gibbs sampling updates in parallel with periodic global synchronisation. Our algorithm makes use of the connection between a class of iterative samplers and iterative solvers for systems of linear equations. It does not target the required Gaussian distribution; instead, it targets an approximate distribution. However, we are able to control how far off the approximate distribution is with respect to the required one by means of a single tuning parameter. We first compare the proposed sampling algorithm with the Gibbs and Hogwild algorithms on moderately sized problems for different target distributions. Our algorithm manages to outperform the Gibbs and Hogwild algorithms in most of the cases. Let us note that the performance of our algorithm depends on the tuning parameter. We then compare the proposed algorithm with the Hogwild algorithm on a large-scale real application, namely image deconvolution-interpolation. The proposed algorithm enables us to obtain good results, whereas the Hogwild algorithm fails to converge. Let us note that for small values of the tuning parameter our algorithm fails to converge as well. Notwithstanding, a suitably chosen value for the tuning parameter enables the proposed sampler to converge and to deliver good results.
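To make the connection between iterative linear solvers and Gaussian samplers concrete, here is a minimal textbook example: a component-wise Gibbs sampler for N(mu, inv(A)), which corresponds to a Gauss-Seidel splitting of the precision matrix A. This is an exact member of the family of iterative samplers the abstract alludes to, not the approximate, tunable algorithm proposed in the thesis; all names and values are illustrative:

```python
import numpy as np

def gibbs_gaussian(A, mu, n_iter=2000, rng=None):
    """Component-wise Gibbs sampler for N(mu, inv(A)), where A is the
    precision matrix. Sweeping the coordinates in order is equivalent
    to a Gauss-Seidel splitting of A, the textbook instance of the
    solver/sampler connection."""
    rng = np.random.default_rng(rng)
    d = len(mu)
    x = np.zeros(d)
    samples = np.empty((n_iter, d))
    for k in range(n_iter):
        for i in range(d):
            # Full conditional of x_i: mean mu_i - (sum_{j!=i} A_ij (x_j - mu_j)) / A_ii,
            # variance 1 / A_ii
            s = A[i] @ (x - mu) - A[i, i] * (x[i] - mu[i])
            x[i] = mu[i] - s / A[i, i] + rng.standard_normal() / np.sqrt(A[i, i])
        samples[k] = x
    return samples

# Toy 2-D example with correlated components.
A = np.array([[2.0, 0.8], [0.8, 1.5]])    # precision matrix (SPD)
mu = np.array([1.0, -1.0])
samples = gibbs_gaussian(A, mu, n_iter=5000, rng=0)
est_mean = samples[1000:].mean(axis=0)     # discard burn-in
```

After burn-in, the sample mean and covariance approach mu and inv(A); the convergence rate of this sampler is governed by the same spectral radius as the Gauss-Seidel solver's.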
APA, Harvard, Vancouver, ISO, and other styles
42

Haber, René. "Transition Matrix Monte Carlo Methods for Density of States Prediction." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-146873.

Full text
Abstract:
The first aim of this work is to develop a basis of comparison against which algorithms for computing the density of states can be evaluated. Building on this, an existing transition-matrix-based method for the grand canonical ensemble is extended by a new evaluation procedure. To this end, numerical investigations of various Monte Carlo algorithms for computing the density of states are carried out. The main focus is on methods based on transition matrices, as well as on the method of Wang and Landau. The first part of the research work gives a comprehensive overview of Monte Carlo methods and evaluation procedures for determining the density of states, as well as of related techniques. In addition, various methods for computing the density of states from transition matrices are presented and discussed. In the second part of the work, a new basis of comparison for algorithms that determine the density of states is developed. For this purpose, a new model system is constructed in which various parameters can be chosen freely and for which the exact density of states as well as the exact transition matrix are known. Two further systems for which at least the exact density of states is known are then discussed: the Ising model and the Lennard-Jones system. The third part of the work deals with numerical investigations of a selection of the presented methods. On the basis of the developed benchmark, the influence of various parameters on the quality of the computed density of states is determined quantitatively. It is shown that transition matrices collected during Wang-Landau simulations yield a considerably better density of states than the Wang-Landau method itself.
These findings are then used to develop a new method with which the density of states can be obtained from large, sparse transition matrices by minimising the violations of detailed balance. Subsequently, a Lennard-Jones system in the grand canonical ensemble is investigated. It is shown that the new method yields a density of states and a vapour pressure curve that agree qualitatively with reference data.
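The detailed-balance relation exploited when extracting a density of states from a transition matrix can be checked on a toy system: for N non-interacting spins at infinite temperature, with macrostates indexed by the number of up spins, the single-spin-flip transition matrix satisfies g[i]·T[i,j] = g[j]·T[j,i], so the relative density of states follows from ratios of off-diagonal elements. The sketch below chains these ratios along a one-dimensional ladder of macrostates; the thesis's method instead minimises detailed-balance violations over large sparse matrices:

```python
import numpy as np

def dos_from_tm(T):
    """Recover the relative density of states g from an
    infinite-temperature transition matrix T using detailed balance:
        g[i] * T[i, j] = g[j] * T[j, i].
    Assumes the macrostates form an ordered ladder with nonzero
    nearest-neighbour transition probabilities."""
    n = T.shape[0]
    ln_g = np.zeros(n)
    for i in range(n - 1):
        ln_g[i + 1] = ln_g[i] + np.log(T[i, i + 1] / T[i + 1, i])
    return np.exp(ln_g)              # normalised so that g[0] = 1

# Toy system: N independent spins, macrostate = number of up spins.
# Flipping a uniformly chosen spin gives the exact transition matrix.
N = 4
T = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    if k < N:
        T[k, k + 1] = (N - k) / N    # flip one of the N-k down spins
    if k > 0:
        T[k, k - 1] = k / N          # flip one of the k up spins
g = dos_from_tm(T)
```

For this system the recovered g equals the binomial coefficients C(4, k) = 1, 4, 6, 4, 1, the exact degeneracies of the magnetisation sectors.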
APA, Harvard, Vancouver, ISO, and other styles
43

Heng, Jeremy. "On the use of transport and optimal control methods for Monte Carlo simulation." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:6cbc7690-ac54-4a6a-b235-57fa62e5b2fc.

Full text
Abstract:
This thesis explores ideas from transport theory and optimal control to develop novel Monte Carlo methods to perform efficient statistical computation. The first project considers the problem of constructing a transport map between two given probability measures. In the Bayesian formalism, this approach is natural when one introduces a curve of probability measures connecting the prior to the posterior by tempering the likelihood function. The main idea is to move samples from the prior using an ordinary differential equation (ODE), constructed by solving the Liouville partial differential equation (PDE) which governs the time evolution of measures along the curve. In this work, we first study the regularity that solutions of the Liouville equation should satisfy to guarantee the validity of this construction. We place an emphasis on understanding these issues as it explains the difficulties associated with solutions that have been previously reported. After ensuring that the flow transport problem is well-defined, we give a constructive solution. However, this result is only formal, as the representation is given in terms of integrals which are intractable. For computational tractability, we propose a novel approximation of the PDE which yields an ODE whose drift depends on the full conditional distributions of the intermediate distributions. Even when the ODE is time-discretized and the full conditional distributions are approximated numerically, the resulting distribution of mapped samples can be evaluated and used as a proposal within Markov chain Monte Carlo and sequential Monte Carlo (SMC) schemes. We then illustrate experimentally that the resulting algorithm can outperform state-of-the-art SMC methods at a fixed computational complexity. The second project aims to exploit ideas from optimal control to design more efficient SMC methods.
The key idea is to control the proposal distribution induced by a time-discretized Langevin dynamics so as to minimize the Kullback-Leibler divergence of the extended target distribution from the proposal. The optimal value functions of the resulting optimal control problem can then be approximated using algorithms developed in the approximate dynamic programming (ADP) literature. We introduce a novel iterative scheme to perform ADP, provide a theoretical analysis of the proposed algorithm and demonstrate that the latter can provide significant gains over state-of-the-art methods at a fixed computational complexity.
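The tempering construction underlying both projects can be sketched with a basic sequential Monte Carlo sampler: particles start at the prior and are moved through pi_t ∝ prior × likelihood^beta_t by incremental reweighting, resampling, and one random-walk Metropolis move per temperature. The toy model (standard Gaussian prior, a single Gaussian observation, so the exact posterior is N(0.5, 0.5)), the schedule, and the hard-coded N(0,1) prior in the Metropolis ratio are all illustrative assumptions; this is not the ODE-based flow transport of the thesis:

```python
import numpy as np

def tempered_smc(prior_sample, log_lik, n=2000, betas=None, rng=None):
    """Likelihood-tempering SMC: particles move from the prior to the
    posterior along pi_t(x) proportional to prior(x) * lik(x)**beta_t.
    Per temperature: incremental reweighting, multinomial resampling,
    and one random-walk Metropolis move. The acceptance ratio below
    hard-codes a standard Gaussian prior."""
    rng = np.random.default_rng(rng)
    if betas is None:
        betas = np.linspace(0.0, 1.0, 11)
    x = prior_sample(rng, n)                       # particles from the prior
    for b0, b1 in zip(betas[:-1], betas[1:]):
        logw = (b1 - b0) * log_lik(x)              # incremental weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = x[rng.choice(n, size=n, p=w)]          # multinomial resampling
        prop = x + 0.5 * rng.standard_normal(n)    # random-walk proposal
        log_acc = (-0.5 * prop**2 + b1 * log_lik(prop)) \
                  - (-0.5 * x**2 + b1 * log_lik(x))
        accept = np.log(rng.uniform(size=n)) < log_acc
        x = np.where(accept, prop, x)
    return x

# Conjugate toy model: prior N(0,1), one observation y=1 with N(x,1)
# likelihood, so the exact posterior is N(0.5, 0.5).
y = 1.0
particles = tempered_smc(lambda rng, n: rng.standard_normal(n),
                         lambda x: -0.5 * (y - x) ** 2,
                         n=4000, rng=0)
post_mean = float(particles.mean())
```

With 4000 particles the estimated posterior mean and variance land close to the exact values 0.5 and 0.5.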
APA, Harvard, Vancouver, ISO, and other styles
44

Papež, Milan. "Monte Carlo identifikační strategie pro stavové modely." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400416.

Full text
Abstract:
Stavové modely jsou neobyčejně užitečné v mnoha inženýrských a vědeckých oblastech. Jejich atraktivita vychází především z toho faktu, že poskytují obecný nástroj pro popis široké škály dynamických systémů reálného světa. Nicméně, z důvodu jejich obecnosti, přidružené úlohy inference parametrů a stavů jsou ve většině praktických situacích nepoddajné. Tato dizertační práce uvažuje dvě zvláště důležité třídy nelineárních a ne-Gaussovských stavových modelů: podmíněně konjugované stavové modely a Markovsky přepínající nelineární modely. Hlavní rys těchto modelů spočívá v tom, že---navzdory jejich nepoddajnosti---obsahují poddajnou podstrukturu. Nepoddajná část požaduje abychom využily aproximační techniky. Monte Carlo výpočetní metody představují teoreticky a prakticky dobře etablovaný nástroj pro řešení tohoto problému. Výhoda těchto modelů spočívá v tom, že poddajná část může být využita pro zvýšení efektivity Monte Carlo metod tím, že se uchýlíme k Rao-Blackwellizaci. Konkrétně, tato doktorská práce navrhuje dva Rao-Blackwellizované částicové filtry pro identifikaci buďto statických anebo časově proměnných parametrů v podmíněně konjugovaných stavových modelech. Kromě toho, tato práce adoptuje nedávnou particle Markov chain Monte Carlo metodologii pro návrh Rao-Blackwellizovaných částicových Gibbsových jader pro vyhlazování stavů v Markovsky přepínajících nelineárních modelech. Tyto jádra jsou posléze použity pro inferenci parametrů metodou maximální věrohodnosti v uvažovaných modelech. Výsledné experimenty demonstrují, že navržené algoritmy překonávají příbuzné techniky ve smyslu přesnosti odhadu a výpočetního času.
APA, Harvard, Vancouver, ISO, and other styles
45

Schmidl, Daniel [Verfasser], Fabian J. [Akademischer Betreuer] Theis, Achim [Akademischer Betreuer] Tresch, and Claudia [Akademischer Betreuer] Czado. "Bayesian model inference in dynamic biological systems using Markov Chain Monte Carlo methods / Daniel Schmidl. Gutachter: Achim Tresch ; Claudia Czado. Betreuer: Fabian J. Theis." München : Universitätsbibliothek der TU München, 2012. http://d-nb.info/1030099774/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Frühwirth-Schnatter, Sylvia. "MCMC Estimation of Classical and Dynamic Switching and Mixture Models." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/698/1/document.pdf.

Full text
Abstract:
In the present paper we discuss Bayesian estimation of a very general model class where the distribution of the observations is assumed to depend on a latent mixture or switching variable taking values in a discrete state space. This model class covers e.g. finite mixture modelling, Markov switching autoregressive modelling and dynamic linear models with switching. Joint Bayesian estimation of all latent variables, model parameters and parameters determining the probability law of the switching variable is carried out by a new Markov Chain Monte Carlo method called permutation sampling. Estimation of switching and mixture models is known to be faced with identifiability problems, as switching and mixture models are identifiable only up to permutations of the indices of the states. For a Bayesian analysis the posterior has to be constrained in such a way that identifiability constraints are fulfilled. The permutation sampler is designed to sample efficiently from the constrained posterior, by first sampling from the unconstrained posterior - which often can be done in a convenient multimove manner - and then by applying a suitable permutation if the identifiability constraint is violated. We present simple conditions on the prior which ensure that this method is a valid Markov Chain Monte Carlo method (that is, invariance, irreducibility and aperiodicity hold). Three case studies are presented, including finite mixture modelling of fetal lamb data, Markov switching autoregressive modelling of the U.S. quarterly real GDP data, and modelling the U.S./U.K. real exchange rate by a dynamic linear model with Markov switching heteroscedasticity. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
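The permutation step described in the abstract can be sketched on the simplest case, a two-component Gaussian mixture with known unit variances and equal weights: sample from the unconstrained posterior, then relabel whenever the identifiability constraint mu[0] < mu[1] is violated. This is a minimal illustration of the idea under those simplifying assumptions, not the paper's general multimove sampler:

```python
import numpy as np

def permutation_gibbs(y, n_iter=3000, rng=None):
    """Gibbs sampler for a two-component Gaussian mixture with known
    unit variances and equal weights, plus a permutation step: after
    each unconstrained sweep, relabel so that mu[0] < mu[1] holds
    (the identifiability constraint)."""
    rng = np.random.default_rng(rng)
    mu = np.array([y.min(), y.max()])          # crude but ordered start
    draws = np.empty((n_iter, 2))
    for it in range(n_iter):
        # Allocations z | mu (equal weights, unit variances)
        logp = -0.5 * (y[:, None] - mu[None, :]) ** 2
        p1 = 1.0 / (1.0 + np.exp(logp[:, 0] - logp[:, 1]))
        z = (rng.uniform(size=len(y)) < p1).astype(int)
        # Means mu | z under a flat prior -> Gaussian full conditional
        for k in (0, 1):
            yk = y[z == k]
            n_k = max(len(yk), 1)
            m = yk.mean() if len(yk) else 0.0
            mu[k] = m + rng.standard_normal() / np.sqrt(n_k)
        # Permutation step: enforce mu[0] < mu[1]
        if mu[0] > mu[1]:
            mu = mu[::-1].copy()
        draws[it] = mu
    return draws

# Simulated data: two well-separated components at -2 and +2.
data_rng = np.random.default_rng(1)
y = np.concatenate([data_rng.normal(-2, 1, 150), data_rng.normal(2, 1, 150)])
draws = permutation_gibbs(y, rng=0)
post = draws[500:].mean(axis=0)                # posterior means after burn-in
```

Without the relabeling step, label switching would make the two marginal posteriors of mu identical bimodal mixtures; with it, the posterior means settle near the true component means.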
APA, Harvard, Vancouver, ISO, and other styles
47

Al-Saadony, Muhannad. "Bayesian stochastic differential equation modelling with application to finance." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1530.

Full text
Abstract:
In this thesis, we consider some popular stochastic differential equation models used in finance, such as the Vasicek interest rate model, the Heston model and a new fractional Heston model. We discuss how to perform inference about unknown quantities associated with these models in the Bayesian framework. We describe sequential importance sampling, the particle filter and the auxiliary particle filter. We apply these inference methods to the Vasicek interest rate model and the standard stochastic volatility model, both to sample from the posterior distribution of the underlying processes and to update the posterior distribution of the parameters sequentially, as data arrive over time. We discuss the sensitivity of our results to prior assumptions. We then consider the use of Markov chain Monte Carlo (MCMC) methodology to sample from the posterior distribution of the underlying volatility process and of the unknown model parameters in the Heston model. The particle filter and the auxiliary particle filter are also employed to perform sequential inference. Next we extend the Heston model to the fractional Heston model by replacing the Brownian motions that drive the underlying stochastic differential equations with fractional Brownian motions, so allowing a richer dependence structure across time. Again, we use a variety of methods to perform inference. We apply our methodology to simulated and real financial data with success. We then discuss how to make forecasts using both the Heston and the fractional Heston model. We make comparisons between the models and show that using our new fractional Heston model can lead to improved forecasts for real financial data.
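To make the sequential-inference machinery concrete, below is a bootstrap particle filter for an Euler-discretised Vasicek model observed with additive Gaussian noise. The parameter values, the observation model, and all names are illustrative assumptions, not those used in the thesis:

```python
import numpy as np

def bootstrap_pf(obs, kappa, theta, sigma, obs_sd, dt=1.0, n=1000, rng=None):
    """Bootstrap particle filter for the Euler-discretised Vasicek model
        r_{t+1} = r_t + kappa*(theta - r_t)*dt + sigma*sqrt(dt)*eps,
    observed with Gaussian noise of standard deviation obs_sd.
    Returns the filtered means."""
    rng = np.random.default_rng(rng)
    x = theta + sigma * rng.standard_normal(n)   # particles near the mean level
    means = []
    for y in obs:
        # Propagate through the transition (the "bootstrap" proposal)
        x = x + kappa * (theta - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        # Weight by the observation density, record the mean, resample
        logw = -0.5 * ((y - x) / obs_sd) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(w @ x))
        x = x[rng.choice(n, size=n, p=w)]
    return np.array(means)

# Simulate a short rate path and noisy observations of it.
sim_rng = np.random.default_rng(2)
kappa, theta, sigma, obs_sd, dt = 0.5, 0.05, 0.01, 0.005, 1.0
r, true_path, obs = 0.05, [], []
for _ in range(50):
    r = r + kappa * (theta - r) * dt + sigma * np.sqrt(dt) * sim_rng.standard_normal()
    true_path.append(r)
    obs.append(r + obs_sd * sim_rng.standard_normal())
filt = bootstrap_pf(np.array(obs), kappa, theta, sigma, obs_sd, dt, n=2000, rng=0)
rmse = float(np.sqrt(np.mean((filt - np.array(true_path)) ** 2)))
```

Because the filter pools the mean-reverting dynamics with each noisy observation, its tracking error stays below the raw observation noise level.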
APA, Harvard, Vancouver, ISO, and other styles
48

Crespo, Cuaresma Jesus, and Philipp Piribauer. "Bayesian Variable Selection in Spatial Autoregressive Models." WU Vienna University of Economics and Business, 2015. http://epub.wu.ac.at/4584/1/wp199.pdf.

Full text
Abstract:
This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. We present two alternative approaches which can be implemented using Gibbs sampling methods in a straightforward way and allow us to deal with the problem of model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. In a simulation study we show that the variable selection approaches tend to outperform existing Bayesian model averaging techniques both in terms of in-sample predictive performance and computational efficiency. (authors' abstract)
Series: Department of Economics Working Paper Series
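The selection mechanism underlying such Gibbs-based approaches can be sketched with George-McCulloch stochastic search variable selection (SSVS) for a plain linear model. The spatial lag of the SAR model and the model-averaging comparison are omitted here, and the prior scales `tau0`, `tau1` and all names are illustrative assumptions:

```python
import numpy as np

def ssvs(y, X, n_iter=3000, tau0=0.1, tau1=10.0, sigma2=1.0, rng=None):
    """George-McCulloch stochastic search variable selection for a
    linear model y = X beta + eps with known noise variance sigma2.
    Coefficient j gets a slab prior N(0, tau1^2) when included
    (gamma_j = 1) and a narrow spike N(0, tau0^2) when excluded.
    Returns posterior inclusion probabilities."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    gamma = np.ones(p, dtype=int)
    XtX, Xty = X.T @ X, X.T @ y
    incl = np.zeros(p)
    burn = 500
    for it in range(n_iter):
        # beta | gamma: Gaussian, prior variances switched by gamma
        D_inv = np.diag(np.where(gamma == 1, 1 / tau1**2, 1 / tau0**2))
        V = np.linalg.inv(XtX / sigma2 + D_inv)
        V = (V + V.T) / 2                       # symmetrise for sampling
        beta = rng.multivariate_normal(V @ Xty / sigma2, V)
        # gamma_j | beta: Bernoulli with odds = slab density / spike density
        for j in range(p):
            l1 = np.exp(-0.5 * beta[j]**2 / tau1**2) / tau1
            l0 = np.exp(-0.5 * beta[j]**2 / tau0**2) / tau0
            gamma[j] = rng.uniform() < l1 / (l0 + l1)
        if it >= burn:
            incl += gamma
    return incl / (n_iter - burn)

# Simulated data: only the first two of five regressors matter.
data_rng = np.random.default_rng(3)
X = data_rng.standard_normal((200, 5))
y = X @ np.array([2.0, -1.5, 0.0, 0.0, 0.0]) + data_rng.standard_normal(200)
pip = ssvs(y, X, rng=0)                         # posterior inclusion probs
```

On these data the inclusion probabilities for the two active regressors should be near one, while those for the three null regressors should stay small; note that `tau0` must be chosen wider than the sampling noise of the coefficient estimates, or excluded coefficients never register as such.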
APA, Harvard, Vancouver, ISO, and other styles
49

Thouvenin, Pierre-Antoine. "Modeling spatial and temporal variabilities in hyperspectral image unmixing." Phd thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/19258/1/THOUVENIN_PierreAntoine.pdf.

Full text
Abstract:
Acquired in hundreds of contiguous spectral bands, hyperspectral (HS) images have received an increasing interest due to the significant spectral information they convey about the materials present in a given scene. However, the limited spatial resolution of hyperspectral sensors implies that the observations are mixtures of multiple signatures corresponding to distinct materials. Hyperspectral unmixing is aimed at identifying the reference spectral signatures composing the data -- referred to as endmembers -- and their relative proportion in each pixel according to a predefined mixture model. In this context, a given material is commonly assumed to be represented by a single spectral signature. This assumption shows a first limitation, since endmembers may vary locally within a single image, or from an image to another due to varying acquisition conditions, such as declivity and possibly complex interactions between the incident light and the observed materials. Unless properly accounted for, spectral variability can have a significant impact on the shape and the amplitude of the acquired signatures, thus inducing possibly significant estimation errors during the unmixing process. A second limitation results from the significant size of HS data, which may preclude the use of batch estimation procedures commonly used in the literature, i.e., techniques exploiting all the available data at once. Such computational considerations notably become prominent to characterize endmember variability in multi-temporal HS (MTHS) images, i.e., sequences of HS images acquired over the same area at different time instants. The main objective of this thesis consists in introducing new models and unmixing procedures to account for spatial and temporal endmember variability. Endmember variability is addressed by considering an explicit variability model reminiscent of the total least squares problem, and later extended to account for time-varying signatures. 
The variability is first estimated using an unsupervised deterministic optimization procedure based on the Alternating Direction Method of Multipliers (ADMM). Given the sensitivity of this approach to abrupt spectral variations, a robust model formulated within a Bayesian framework is introduced. This formulation enables smooth spectral variations to be described in terms of spectral variability, and abrupt changes in terms of outliers. Finally, the computational restrictions induced by the size of the data are tackled by an online estimation algorithm. This work further investigates an asynchronous distributed estimation procedure to estimate the parameters of the proposed models.
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Qianqiu. "Bayesian inference on dynamics of individual and population hepatotoxicity via state space models." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1124297874.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xiv, 155 p.; also includes graphics (some col.). Includes bibliographical references (p. 147-155). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
