Academic literature on the topic 'Unadjusted Langevin algorithm'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Unadjusted Langevin algorithm.'


Journal articles on the topic "Unadjusted Langevin algorithm"

1. Brosse, Nicolas, Alain Durmus, Éric Moulines, and Sotirios Sabanis. "The tamed unadjusted Langevin algorithm." Stochastic Processes and their Applications 129, no. 10 (October 2019): 3638–63. http://dx.doi.org/10.1016/j.spa.2018.10.002.

2. Durmus, Alain, and Éric Moulines. "Nonasymptotic convergence analysis for the unadjusted Langevin algorithm." Annals of Applied Probability 27, no. 3 (June 2017): 1551–87. http://dx.doi.org/10.1214/16-aap1238.

3. Durmus, Alain, and Éric Moulines. "High-dimensional Bayesian inference via the unadjusted Langevin algorithm." Bernoulli 25, no. 4A (November 2019): 2854–82. http://dx.doi.org/10.3150/18-bej1073.

4. De Bortoli, Valentin, Alain Durmus, Marcelo Pereyra, and Ana F. Vidal. "Efficient stochastic optimisation by unadjusted Langevin Monte Carlo." Statistics and Computing 31, no. 3 (March 19, 2021). http://dx.doi.org/10.1007/s11222-020-09986-y.

Abstract:
Stochastic approximation methods play a central role in maximum likelihood estimation problems involving intractable likelihood functions, such as marginal likelihoods arising in problems with missing or incomplete data, and in parametric empirical Bayesian estimation. Combined with Markov chain Monte Carlo algorithms, these stochastic optimisation methods have been successfully applied to a wide range of problems in science and industry. However, this strategy scales poorly to large problems because of methodological and theoretical difficulties related to using high-dimensional Markov chain Monte Carlo algorithms within a stochastic approximation scheme. This paper proposes to address these difficulties by using unadjusted Langevin algorithms to construct the stochastic approximation. This leads to a highly efficient stochastic optimisation methodology with favourable convergence properties that can be quantified explicitly and easily checked. The proposed methodology is demonstrated with three experiments, including a challenging application to statistical audio analysis and a sparse Bayesian logistic regression with random effects problem.
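The unadjusted Langevin algorithm underlying the works above iterates an Euler–Maruyama discretization of the overdamped Langevin diffusion without a Metropolis correction step. A minimal self-contained sketch (the function names, toy target, and step size are illustrative choices, not taken from any of the cited papers):

```python
import numpy as np

def ula(grad_log_pi, x0, step, n_iters, rng):
    """Unadjusted Langevin algorithm (ULA):
        x_{k+1} = x_k + step * grad log pi(x_k) + sqrt(2 * step) * N(0, I).
    There is no accept/reject step, so the chain targets pi only
    approximately, with a bias controlled by the step size."""
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Toy target: standard Gaussian, for which grad log pi(x) = -x.
rng = np.random.default_rng(0)
samples = ula(lambda x: -x, x0=np.zeros(1), step=0.01, n_iters=50_000, rng=rng)
burned = samples[10_000:]          # discard burn-in
print(burned.mean(), burned.std())  # close to 0 and 1
```

For this Gaussian target the discretized chain is an AR(1) process whose stationary variance is 1/(1 - step/2), slightly above 1; this small discrepancy is exactly the discretization bias that the nonasymptotic analyses cited above quantify.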

Dissertations / Theses on the topic "Unadjusted Langevin algorithm"

1. Enfroy, Aurélien. "Contributions à la conception, l'étude et la mise en œuvre de méthodes de Monte Carlo par chaîne de Markov appliquées à l'inférence bayésienne." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS012.

Abstract:
This thesis focuses on the analysis and design of Markov chain Monte Carlo (MCMC) methods used in high-dimensional sampling. It consists of three parts. The first part introduces a new class of Markov chains and MCMC methods. These methods improve existing MCMC samplers by using samples that target a restriction of the original target distribution to a domain chosen by the user. This procedure gives rise to a new chain that takes advantage of the convergence properties of the two underlying processes. In addition to showing that this chain still targets the original target measure, we establish ergodicity properties under weak assumptions on the Markov kernels involved. The second part of the thesis focuses on discretizations of the underdamped Langevin diffusion. As this diffusion cannot in general be computed explicitly, it is standard to consider discretizations. The thesis establishes, for a large class of discretizations, a minorization condition that is uniform in the step size. Under additional assumptions on the potential, this shows that these discretizations converge geometrically, in V-norm, to their unique invariant probability measure. The last part studies the unadjusted Langevin algorithm in the case where the gradient of the potential is known only up to a uniformly bounded error. It provides bounds, in V-norm and in Wasserstein distance, between the iterates of the algorithm run with the exact gradient and those of the algorithm run with the approximate gradient. To this end, an auxiliary Markov chain bounding the difference is introduced, and it is shown to converge in distribution to a so-called sticky process, previously studied in the literature for the continuous-time version of this problem.
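The inexact-gradient setting studied in the last part can be illustrated with a small synchronous-coupling experiment: run two ULA chains driven by common random numbers, one with the exact gradient and one with a gradient perturbed by a uniformly bounded error. The target, perturbation, and step size below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def ula_chain(grad, x0, step, n_iters, rng):
    # One-dimensional unadjusted Langevin algorithm.
    x = float(x0)
    out = np.empty(n_iters)
    for k in range(n_iters):
        x = x + step * grad(x) + np.sqrt(2.0 * step) * rng.standard_normal()
        out[k] = x
    return out

delta = 0.05                                    # uniform bound on the gradient error
grad_exact = lambda x: -x                       # grad log pi for a standard Gaussian
grad_approx = lambda x: -x + delta * np.sin(x)  # perturbation bounded by delta

# Identical seeds give the two chains common random numbers (a synchronous coupling).
exact = ula_chain(grad_exact, 0.0, 0.01, 50_000, np.random.default_rng(1))
approx = ula_chain(grad_approx, 0.0, 0.01, 50_000, np.random.default_rng(1))

gap = np.max(np.abs(exact - approx))
print(gap)  # stays below delta for this strongly log-concave target
```

For this contractive toy target the coupled difference satisfies |d_{k+1}| <= (1 - step)|d_k| + step * delta, so the chains remain within delta of each other uniformly in time, which is the flavor of perturbation bound the thesis establishes in V-norm and Wasserstein distance.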