A selection of scientific literature on the topic "Unadjusted Langevin algorithm"
Consult the lists of relevant articles, books, dissertations, conference theses, and other scholarly sources on the topic "Unadjusted Langevin algorithm".
Journal articles on the topic "Unadjusted Langevin algorithm"
Brosse, Nicolas, Alain Durmus, Éric Moulines, and Sotirios Sabanis. "The tamed unadjusted Langevin algorithm." Stochastic Processes and their Applications 129, no. 10 (October 2019): 3638–63. http://dx.doi.org/10.1016/j.spa.2018.10.002.
Durmus, Alain, and Éric Moulines. "Nonasymptotic convergence analysis for the unadjusted Langevin algorithm." Annals of Applied Probability 27, no. 3 (June 2017): 1551–87. http://dx.doi.org/10.1214/16-aap1238.
Durmus, Alain, and Éric Moulines. "High-dimensional Bayesian inference via the unadjusted Langevin algorithm." Bernoulli 25, no. 4A (November 2019): 2854–82. http://dx.doi.org/10.3150/18-bej1073.
De Bortoli, Valentin, Alain Durmus, Marcelo Pereyra, and Ana F. Vidal. "Efficient stochastic optimisation by unadjusted Langevin Monte Carlo." Statistics and Computing 31, no. 3 (March 19, 2021). http://dx.doi.org/10.1007/s11222-020-09986-y.
Dissertations on the topic "Unadjusted Langevin algorithm"
Enfroy, Aurélien. "Contributions à la conception, l'étude et la mise en œuvre de méthodes de Monte Carlo par chaîne de Markov appliquées à l'inférence bayésienne." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS012.
This thesis focuses on the analysis and design of Markov chain Monte Carlo (MCMC) methods used in high-dimensional sampling. It consists of three parts.

The first part introduces a new class of Markov chains and MCMC methods. These methods improve existing MCMC samplers by using samples that target a restriction of the original target distribution to a domain chosen by the user. This procedure gives rise to a new chain that combines the convergence properties of the two underlying processes. Besides showing that this chain always targets the original target measure, the thesis also establishes ergodicity properties under weak assumptions on the Markov kernels involved.

The second part focuses on discretizations of the underdamped Langevin diffusion. Since this diffusion cannot in general be simulated exactly, it is classical to consider discretizations. For a large class of discretizations, the thesis establishes a minorization condition that is uniform in the time step. Under additional assumptions on the potential, it shows that these discretizations converge geometrically, in V-norm, to their unique invariant probability measure.

The last part studies the unadjusted Langevin algorithm in the case where the gradient of the potential is only known up to a uniformly bounded error. This part provides bounds, in V-norm and in Wasserstein distance, between the iterates of the algorithm run with the exact gradient and those of the algorithm run with the approximate gradient. To do so, an auxiliary Markov chain that bounds the difference is introduced. It is established that this auxiliary chain converges in distribution to a sticky process already studied in the literature for the continuous-time version of this problem.
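For context on the method these references analyse: the unadjusted Langevin algorithm (ULA) is the Euler–Maruyama discretization of the overdamped Langevin diffusion, iterating X_{k+1} = X_k − γ ∇U(X_k) + √(2γ) Z_{k+1} with step size γ and i.i.d. standard Gaussian noise, without any Metropolis correction. The Python sketch below is a minimal illustration under assumptions chosen here for clarity (a standard Gaussian target, a fixed step size, a toy bounded gradient perturbation); it is not code from any of the cited works, only a way to run side by side the exact-gradient and inexact-gradient chains whose distance the last part of the thesis above bounds.

import numpy as np

def ula(grad_U, x0, step, n_iter, rng, grad_error=None):
    # Unadjusted Langevin algorithm: Euler-Maruyama discretization of the
    # overdamped Langevin diffusion dX_t = -grad U(X_t) dt + sqrt(2) dB_t.
    # If grad_error is given, it is added to the gradient at every step,
    # mimicking the inexact-gradient setting (error assumed uniformly bounded).
    x = np.array(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        g = grad_U(x)
        if grad_error is not None:
            g = g + grad_error(x)  # bounded perturbation of the gradient
        x = x - step * g + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Illustrative target: standard Gaussian in 2D, U(x) = ||x||^2 / 2, grad U(x) = x.
exact = ula(lambda x: x, x0=np.zeros(2), step=0.05, n_iter=20000,
            rng=np.random.default_rng(0))
# Same run with a fixed gradient bias of Euclidean norm 0.01 (toy example).
inexact = ula(lambda x: x, x0=np.zeros(2), step=0.05, n_iter=20000,
              rng=np.random.default_rng(0),
              grad_error=lambda x: np.full_like(x, 0.01 / np.sqrt(x.size)))
print("shift of empirical means:", np.abs(exact.mean(axis=0) - inexact.mean(axis=0)))

Even with the exact gradient, ULA samples from a biased approximation of the target because of the step size; a bounded gradient error adds a further perturbation. Bounds of the kind discussed above (in V-norm or Wasserstein distance) quantify how far apart the two chains and their invariant measures can be.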