Academic literature on the topic 'Unadjusted Langevin algorithm'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Unadjusted Langevin algorithm.'
Journal articles on the topic "Unadjusted Langevin algorithm"
Brosse, Nicolas, Alain Durmus, Éric Moulines, and Sotirios Sabanis. "The tamed unadjusted Langevin algorithm." Stochastic Processes and their Applications 129, no. 10 (October 2019): 3638–63. http://dx.doi.org/10.1016/j.spa.2018.10.002.
Full textDurmus, Alain, and Éric Moulines. "Nonasymptotic convergence analysis for the unadjusted Langevin algorithm." Annals of Applied Probability 27, no. 3 (June 2017): 1551–87. http://dx.doi.org/10.1214/16-aap1238.
Full textDurmus, Alain, and Éric Moulines. "High-dimensional Bayesian inference via the unadjusted Langevin algorithm." Bernoulli 25, no. 4A (November 2019): 2854–82. http://dx.doi.org/10.3150/18-bej1073.
De Bortoli, Valentin, Alain Durmus, Marcelo Pereyra, and Ana F. Vidal. "Efficient stochastic optimisation by unadjusted Langevin Monte Carlo." Statistics and Computing 31, no. 3 (March 19, 2021). http://dx.doi.org/10.1007/s11222-020-09986-y.
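For orientation, the unadjusted Langevin algorithm studied in the articles above is the Euler–Maruyama discretization of the overdamped Langevin diffusion dX_t = -∇U(X_t) dt + sqrt(2) dB_t, iterated without a Metropolis–Hastings correction. The following is a minimal Python sketch; the standard Gaussian target, step size, and function names are illustrative assumptions, not taken from any of the cited papers.

    import numpy as np

    def ula(grad_U, x0, step, n_iters, rng=None):
        """Unadjusted Langevin algorithm: Euler-Maruyama discretization of
        dX_t = -grad U(X_t) dt + sqrt(2) dB_t, with no Metropolis correction."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float)
        samples = np.empty((n_iters, x.size))
        for k in range(n_iters):
            noise = rng.standard_normal(x.size)
            x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
            samples[k] = x
        return samples

    # Illustrative target: standard Gaussian, U(x) = 0.5 * ||x||^2, so grad U(x) = x.
    chain = ula(grad_U=lambda x: x, x0=np.zeros(2), step=0.05, n_iters=10_000)

Because there is no accept/reject step, the chain targets the true distribution only up to a bias controlled by the step size, which is precisely the nonasymptotic error quantified in the papers listed above.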
Dissertations / Theses on the topic "Unadjusted Langevin algorithm"
Enfroy, Aurélien. "Contributions à la conception, l'étude et la mise en œuvre de méthodes de Monte Carlo par chaîne de Markov appliquées à l'inférence bayésienne." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS012.
This thesis focuses on the analysis and design of Markov chain Monte Carlo (MCMC) methods used in high-dimensional sampling. It consists of three parts.

The first part introduces a new class of Markov chains and MCMC methods. These methods improve on existing MCMC methods by using samples that target the restriction of the original target distribution to a domain chosen by the user. This procedure gives rise to a new chain that takes advantage of the convergence properties of the two underlying processes. In addition to showing that this chain always targets the original target measure, we also establish ergodicity properties under weak assumptions on the Markov kernels involved.

The second part focuses on discretizations of the underdamped Langevin diffusion. As this diffusion cannot be computed explicitly in general, it is classical to consider discretizations. This part establishes, for a large class of discretizations, a minorization condition that is uniform in the time step. Under additional assumptions on the potential, it shows that these discretizations converge geometrically, in V-norm, to their unique invariant probability measure.

The last part studies the unadjusted Langevin algorithm in the case where the gradient of the potential is known only up to a uniformly bounded error. It provides bounds, in V-norm and in Wasserstein distance, between the iterates of the algorithm run with the exact gradient and those of the algorithm run with the approximate gradient. To do this, an auxiliary Markov chain that bounds the difference is introduced. This auxiliary chain is shown to converge in distribution to a sticky process already studied in the literature for the continuous-time version of this problem.
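The setting of the last part can be illustrated by running the ula sketch above with a perturbed gradient. The perturbation below (a random direction rescaled to norm delta) and the value of delta are illustrative assumptions only, not the construction or the bounds analyzed in the thesis.

    import numpy as np

    delta = 0.1  # hypothetical uniform bound on the gradient error (illustrative only)
    _err_rng = np.random.default_rng(1)

    def inexact_grad_U(x):
        # Exact gradient of U(x) = 0.5 * ||x||^2 is x; add an error of norm delta.
        e = _err_rng.standard_normal(x.size)
        e *= delta / np.linalg.norm(e)
        return x + e

    # Reuses the ula() sketch above; both chains start from the same point.
    exact_chain = ula(grad_U=lambda x: x, x0=np.zeros(2), step=0.05, n_iters=10_000)
    inexact_chain = ula(grad_U=inexact_grad_U, x0=np.zeros(2), step=0.05, n_iters=10_000)

Comparing summary statistics of the two chains gives a rough empirical counterpart to the V-norm and Wasserstein bounds described in the abstract.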