A selection of scholarly literature on the topic "Algorithme de Robbins-Monro"
Format your source according to APA, MLA, Chicago, Harvard, and other styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Algorithme de Robbins-Monro".
Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the selected work in your preferred citation style: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these are available in the metadata.
Journal articles on the topic "Algorithme de Robbins-Monro"
Moler, José A., Fernando Plo, and Miguel San Miguel. "Adaptive designs and Robbins–Monro algorithm." Journal of Statistical Planning and Inference 131, no. 1 (April 2005): 161–74. http://dx.doi.org/10.1016/j.jspi.2003.12.018.
Xu, Zi, Yingying Li, and Xingfang Zhao. "Simulation-Based Optimization by New Stochastic Approximation Algorithm." Asia-Pacific Journal of Operational Research 31, no. 4 (August 2014): 1450026. http://dx.doi.org/10.1142/s0217595914500262.
Arouna, Bouhari. "Robbins–Monro algorithms and variance reduction in finance." Journal of Computational Finance 7, no. 2 (2003): 35–61. http://dx.doi.org/10.21314/jcf.2003.111.
Wardi, Y. "On a proof of a Robbins-Monro algorithm." Journal of Optimization Theory and Applications 64, no. 1 (January 1990): 217. http://dx.doi.org/10.1007/bf00940033.
Lin, Siming, and Jennie Si. "Weight-Value Convergence of the SOM Algorithm for Discrete Input." Neural Computation 10, no. 4 (May 1, 1998): 807–14. http://dx.doi.org/10.1162/089976698300017485.
Moser, Barry Kurt, and Melinda H. McCann. "Algorithm AS 316: A Robbins-Monro-based Sequential Procedure." Journal of the Royal Statistical Society: Series C (Applied Statistics) 46, no. 3 (1997): 388–99. http://dx.doi.org/10.1111/1467-9876.00078.
El Moumen, AbdelKader, Salim Benslimane, and Samir Rahmani. "Robbins–Monro Algorithm with ψ-Mixing Random Errors." Mathematical Methods of Statistics 31, no. 3 (September 2022): 105–19. http://dx.doi.org/10.3103/s1066530722030024.
Cai, Li. "Metropolis-Hastings Robbins-Monro Algorithm for Confirmatory Item Factor Analysis." Journal of Educational and Behavioral Statistics 35, no. 3 (June 2010): 307–35. http://dx.doi.org/10.3102/1076998609353115.
Chen, Han-Fu. "Stochastic approximation with non-additive measurement noise." Journal of Applied Probability 35, no. 2 (June 1998): 407–17. http://dx.doi.org/10.1239/jap/1032192856.
Chen, Han-Fu. "Stochastic approximation with non-additive measurement noise." Journal of Applied Probability 35, no. 2 (June 1998): 407–17. http://dx.doi.org/10.1017/s0021900200015035.
Повний текст джерелаДисертації з теми "Algorithme de Robbins-Monro"
Lu, Wei. "Méthodes stochastiques du second ordre pour le traitement séquentiel de données massives". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR13.
With the rapid development of technologies and the acquisition of big data, methods capable of processing data sequentially (online) have become indispensable. Among these methods, stochastic gradient algorithms are established tools for estimating the minimizer of a function expressed as the expectation of a random function. Although they have become essential, these algorithms run into difficulties when the problem is ill-conditioned. In this thesis, we focus on second-order stochastic algorithms, such as those of Newton type, and their applications to various statistical and optimization problems. After establishing the theoretical foundations and presenting the motivations that led us to explore stochastic Newton algorithms, we develop the various contributions of this thesis. The first contribution concerns the study and development of stochastic Newton algorithms for ridge linear regression and ridge logistic regression. These algorithms rely on the Riccati (Sherman-Morrison) formula to estimate the inverse of the Hessian recursively. Since the acquisition of big data generally comes with contamination of the data, the second contribution focuses on the online estimation of the geometric median, a robust indicator, i.e., one that is not very sensitive to atypical data. More specifically, we propose a new stochastic Newton estimator of the geometric median. In the first two contributions, the estimators of the inverse Hessians are built with the Riccati formula, which is only possible for certain functions. Our third contribution therefore introduces a new Robbins-Monro type method for the online estimation of the inverse Hessian, which then allows us to develop universal stochastic Newton algorithms.
Finally, our last contribution focuses on Full AdaGrad type algorithms, where the difficulty lies in the adaptive step based on the square root of the inverse of the gradients' covariance matrix. We therefore propose a Robbins-Monro type algorithm to estimate this matrix, leading to a recursive approach for Full AdaGrad and its streaming version with reduced computational costs. For all the new estimators we propose, we establish their convergence rates as well as their asymptotic efficiency. Moreover, we illustrate the efficiency of these algorithms through numerical simulations and applications to real data.
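For context, the Robbins-Monro scheme underlying the estimators discussed in this abstract is a stochastic recursion for finding a root of a function observed only through noisy samples. A minimal Python sketch (illustrative only, not the thesis's algorithm; the toy target here is E[X] - θ = 0, whose root is the mean):

```python
import random

def robbins_monro(sample, theta0=0.0, n_iter=100_000):
    """Classical Robbins-Monro iteration for solving E[H(theta, X)] = 0.

    Here H(theta, x) = x - theta, so the root is E[X]: each step moves
    theta toward the latest noisy observation with the decreasing step
    gamma_n = 1 / (n + 1), which satisfies the usual conditions
    (sum of gamma_n diverges, sum of gamma_n**2 converges).
    """
    theta = theta0
    for n in range(n_iter):
        x = sample()              # one noisy observation of the target
        gamma = 1.0 / (n + 1)     # Robbins-Monro step size
        theta += gamma * (x - theta)
    return theta

random.seed(0)
est = robbins_monro(lambda: random.gauss(2.0, 1.0))
print(round(est, 2))  # close to the true mean 2.0
```

With this particular step size the recursion reduces to the running sample mean; other step sequences and other choices of H give quantile estimators, stochastic gradient descent, and the inverse-Hessian recursions the abstract refers to.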
Arouna, Bouhari. "Méthodes de Monte Carlo et algorithmes stochastiques." Marne-la-Vallée, ENPC, 2004. https://pastel.archives-ouvertes.fr/pastel-00001269.
Повний текст джерелаHajji, Kaouther. "Accélération de la méthode de Monte Carlo pour des processus de diffusions et applications en Finance." Thesis, Paris 13, 2014. http://www.theses.fr/2014PA132054/document.
In this thesis, we study the combination of variance reduction methods with complexity improvements of the Monte Carlo method. In the first part, we consider a continuous diffusion model for which we construct an adaptive algorithm by applying importance sampling to the Statistical Romberg method. We then prove a central limit theorem of Lindeberg-Feller type for this algorithm. In the same setting and in the same spirit, we apply importance sampling to the Multilevel Monte Carlo method and also prove a central limit theorem for the resulting adaptive algorithm. In the second part, we develop the same type of adaptive algorithm for a discontinuous model, namely Lévy processes, and prove the associated central limit theorem. Numerical simulations are carried out for the different algorithms obtained, in both settings, with and without jumps.
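The variance-reduction side of this abstract rests on importance sampling: sampling under a shifted law and reweighting by the likelihood ratio. A minimal Gaussian sketch (illustrative only; the fixed shift theta = 3 is a hand-picked assumption, whereas the works cited adapt such parameters with stochastic approximation):

```python
import math
import random

def is_tail_prob(theta, n_samples=200_000, seed=42):
    """Importance-sampling estimate of P(G > 3) for G ~ N(0, 1).

    Samples are drawn under the shifted law N(theta, 1); the likelihood
    ratio exp(-theta*y + theta**2 / 2) (Cameron-Martin shift of a
    Gaussian) removes the bias introduced by the change of measure.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        y = rng.gauss(theta, 1.0)   # draw under the shifted measure
        if y > 3.0:
            total += math.exp(-theta * y + theta * theta / 2.0)
    return total / n_samples

# Shifting the mean to the rare-event threshold makes the event typical,
# which sharply reduces the variance of the estimator.
print(is_tail_prob(theta=3.0))  # close to 1 - Phi(3), about 1.35e-3
```

Setting theta = 0 recovers plain Monte Carlo (all weights equal 1), so the same function lets one compare the two estimators directly.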
Stanley, Leanne M. "Flexible Multidimensional Item Response Theory Models Incorporating Response Styles." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494316298549437.
Повний текст джерелаТези доповідей конференцій з теми "Algorithme de Robbins-Monro"
Ram, S. Sundhar, V. V. Veeravalli, and A. Nedic. "Incremental Robbins-Monro Gradient Algorithm for Regression in Sensor Networks." In 2007 2nd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing. IEEE, 2007. http://dx.doi.org/10.1109/camsap.2007.4498027.
Повний текст джерелаIooss, Bertrand, and Jérôme Lonchampt. "Robust Tuning of Robbins-Monro Algorithm for Quantile Estimation -- Application to Wind-Farm Asset Management." In Proceedings of the 31st European Safety and Reliability Conference. Singapore: Research Publishing Services, 2021. http://dx.doi.org/10.3850/978-981-18-2016-8_084-cd.