Academic literature on the topic 'Algorithme de Robbins-Monro'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Algorithme de Robbins-Monro.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Algorithme de Robbins-Monro"

1

Moler, José A., Fernando Plo, and Miguel San Miguel. "Adaptive designs and Robbins–Monro algorithm." Journal of Statistical Planning and Inference 131, no. 1 (April 2005): 161–74. http://dx.doi.org/10.1016/j.jspi.2003.12.018.

2

Xu, Zi, Yingying Li, and Xingfang Zhao. "Simulation-Based Optimization by New Stochastic Approximation Algorithm." Asia-Pacific Journal of Operational Research 31, no. 4 (August 2014): 1450026. http://dx.doi.org/10.1142/s0217595914500262.

Abstract:
This paper proposes a new stochastic approximation algorithm for solving simulation-based optimization problems. It employs a weighted combination of two independent current noisy gradient measurements as the iterative direction, and can be regarded as a stochastic approximation algorithm with a special matrix step size. The almost sure convergence and the asymptotic rate of convergence of the new algorithm are established. Our numerical experiments show that it outperforms the classical Robbins–Monro (RM) algorithm and several other existing algorithms on a noisy nonlinear function minimization problem, several unconstrained optimization problems, and a typical simulation-based optimization problem, the (s, S) inventory problem.
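As a quick, concrete picture of the kind of iteration described above, here is a minimal Python sketch of a Robbins–Monro step that averages two independent noisy gradient measurements. The equal weights and the a/k gain sequence are placeholder choices of ours, not the specific weighting or matrix step size analysed in the paper.

```python
import numpy as np

def rm_two_gradients(grad_oracle, x0, n_iter=5000, a=1.0, seed=0):
    """Robbins-Monro iteration whose direction is a weighted
    combination of two independent noisy gradient draws.
    `grad_oracle(x, rng)` is a hypothetical callable returning one
    noisy gradient sample; the 0.5/0.5 weights and the a/k gains
    are illustrative, not the paper's tuned choices."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        g1 = grad_oracle(x, rng)           # first independent measurement
        g2 = grad_oracle(x, rng)           # second independent measurement
        direction = 0.5 * g1 + 0.5 * g2    # weighted combination
        x = x - (a / k) * direction        # classical RM gain a_k = a/k
    return x

# Toy usage: minimise E[(x - Z)^2 / 2] with Z ~ N(1, 1); the noisy
# gradient is x - Z and the minimiser is x* = 1.
print(rm_two_gradients(lambda x, rng: x - (1.0 + rng.standard_normal()), 0.0))
```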
3

Arouna, Bouhari. "Robbins–Monro algorithms and variance reduction in finance." Journal of Computational Finance 7, no. 2 (2003): 35–61. http://dx.doi.org/10.21314/jcf.2003.111.

4

Wardi, Y. "On a proof of a Robbins-Monro algorithm." Journal of Optimization Theory and Applications 64, no. 1 (January 1990): 217. http://dx.doi.org/10.1007/bf00940033.

5

Lin, Siming, and Jennie Si. "Weight-Value Convergence of the SOM Algorithm for Discrete Input." Neural Computation 10, no. 4 (May 1, 1998): 807–14. http://dx.doi.org/10.1162/089976698300017485.

Abstract:
Some insights on the convergence of the weight values of the self-organizing map (SOM) to a stationary state in the case of discrete input are provided. The convergence result is obtained by applying the Robbins-Monro algorithm and is applicable to input-output maps of any dimension.
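To make the Robbins–Monro reading of the SOM update concrete, here is a hypothetical Python sketch of one online step for a one-dimensional map, written in the form w ← w + a_t h(c, k)(x − w). The Gaussian neighbourhood and the a/(t+1) gain are generic choices, not the exact schedule studied in the paper.

```python
import numpy as np

def som_step(weights, x, t, sigma=1.0, a=0.5):
    """One online SOM update in Robbins-Monro form. `weights` holds
    one row per map unit; the neighbourhood kernel and gain schedule
    below are illustrative defaults."""
    c = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    idx = np.arange(len(weights))
    h = np.exp(-((idx - c) ** 2) / (2.0 * sigma ** 2))   # neighbourhood kernel
    gain = a / (t + 1.0)                                 # decreasing RM gain a_t
    return weights + gain * h[:, None] * (x - weights)

# Toy usage: fit a 5-unit map to a discrete input set of four points.
rng = np.random.default_rng(0)
inputs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = rng.random((5, 2))
for t in range(2000):
    w = som_step(w, inputs[rng.integers(len(inputs))], t)
```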
6

Moser, Barry Kurt, and Melinda H. McCann. "Algorithm AS 316: A Robbins-Monro-based Sequential Procedure." Journal of the Royal Statistical Society: Series C (Applied Statistics) 46, no. 3 (1997): 388–99. http://dx.doi.org/10.1111/1467-9876.00078.

7

El Moumen, AbdelKader, Salim Benslimane, and Samir Rahmani. "Robbins–Monro Algorithm with ψ-Mixing Random Errors." Mathematical Methods of Statistics 31, no. 3 (September 2022): 105–19. http://dx.doi.org/10.3103/s1066530722030024.

8

Cai, Li. "Metropolis-Hastings Robbins-Monro Algorithm for Confirmatory Item Factor Analysis." Journal of Educational and Behavioral Statistics 35, no. 3 (June 2010): 307–35. http://dx.doi.org/10.3102/1076998609353115.

9

Chen, Han-Fu. "Stochastic approximation with non-additive measurement noise." Journal of Applied Probability 35, no. 2 (June 1998): 407–17. http://dx.doi.org/10.1239/jap/1032192856.

Abstract:
The Robbins–Monro algorithm with randomly varying truncations for measurements with non-additive noise is considered. Assuming that the function under observation is locally Lipschitz-continuous in its first argument and that the noise is a φ-mixing process, strong consistency of the estimate is shown. Neither a growth rate restriction on the function nor a decay rate for the mixing coefficients is required.
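The truncation device mentioned in this abstract is easy to picture in code: whenever an update escapes the current truncation region, the iterate is reset and the region is enlarged. The sketch below is illustrative only; the radii sigma + 1, the reset point 0, and the 1/k gains are our placeholder choices, not the paper's exact construction.

```python
import numpy as np

def rm_expanding_truncations(obs, x0=0.0, n_iter=10000, a=1.0, seed=0):
    """Robbins-Monro with randomly varying (expanding) truncations.
    `obs(x, rng)` is a hypothetical callable returning one noisy
    measurement of the function at x."""
    rng = np.random.default_rng(seed)
    x, sigma = float(x0), 0                      # sigma counts past truncations
    for k in range(1, n_iter + 1):
        candidate = x + (a / k) * obs(x, rng)    # one RM step
        if abs(candidate) > sigma + 1.0:         # left the truncation region
            x, sigma = 0.0, sigma + 1            # reset and enlarge the region
        else:
            x = candidate
    return x

# Toy usage: find the root x* = 2 of f(x) = 2 - x from noisy observations.
print(rm_expanding_truncations(lambda x, rng: (2.0 - x) + rng.standard_normal()))
```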

Dissertations / Theses on the topic "Algorithme de Robbins-Monro"

1

Lu, Wei. "Méthodes stochastiques du second ordre pour le traitement séquentiel de données massives." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR13.

Abstract:
With the rapid development of technology and the acquisition of increasingly massive data sets, methods capable of processing data sequentially (online) have become indispensable. Among these methods, stochastic gradient algorithms have established themselves for estimating the minimizer of a function expressed as the expectation of a random function. Although they have become essential, these algorithms run into difficulties when the problem is ill-conditioned. In this thesis, we focus on second-order stochastic algorithms, such as those of Newton type, and on their applications to various statistical and optimization problems. After establishing the theoretical foundations and presenting the motivations that lead us to explore stochastic Newton algorithms, we develop the various contributions of this thesis. The first contribution concerns the study and development of stochastic Newton algorithms for ridge linear regression and ridge logistic regression. These algorithms rely on the Riccati (Sherman-Morrison) formula to estimate the inverse of the Hessian recursively. Since the acquisition of massive data is generally accompanied by contamination of the data, the second contribution focuses on the online estimation of the geometric median, a robust indicator that is not very sensitive to the presence of atypical observations. More precisely, we propose a new stochastic Newton estimator of the geometric median. In the first two contributions, the estimators of the inverse Hessians are built with the Riccati formula, but this is possible only for certain functions. Our third contribution therefore introduces a new Robbins-Monro-type method for the online estimation of the inverse of the Hessian, which then allows us to develop so-called universal stochastic Newton algorithms. Finally, our last contribution focuses on Full AdaGrad-type algorithms, where the difficulty lies in the adaptive step based on the square root of the inverse of the covariance of the gradient. We propose a Robbins-Monro-type algorithm to estimate this matrix, which allows us to derive a recursive approach for Full AdaGrad and its streaming version with reduced computational costs. For all the new estimators we propose, we establish their convergence rates as well as their asymptotic efficiency. Moreover, we illustrate the effectiveness of these algorithms through numerical simulations and applications to real data.
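As a concrete picture of the Riccati (Sherman-Morrison) device used in the first contributions, here is a hedged Python sketch of a stochastic Newton step for ridge linear regression that maintains a recursive estimate of the inverse Hessian instead of inverting a matrix at each step. The initialisation and the exact form of the gradient are illustrative assumptions, not the thesis's algorithms.

```python
import numpy as np

def stochastic_newton_ridge(stream, d, lam=1.0):
    """Stochastic Newton sketch for ridge linear regression. The
    inverse of the regularised Hessian lam*I + sum_i x_i x_i^T is
    maintained with the rank-one Sherman-Morrison identity."""
    theta = np.zeros(d)
    H_inv = np.eye(d) / lam                  # inverse of the initial Hessian
    n = 0
    for x, y in stream:
        n += 1
        Hx = H_inv @ x                       # Sherman-Morrison update of
        H_inv -= np.outer(Hx, Hx) / (1.0 + x @ Hx)       # (H + x x^T)^{-1}
        grad = (x @ theta - y) * x + (lam / n) * theta   # noisy ridge gradient
        theta -= H_inv @ grad                # Newton-type step
    return theta

# Toy usage: recover theta* = (1, -2) from noisy linear observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 2))
Y = X @ np.array([1.0, -2.0]) + 0.1 * rng.standard_normal(5000)
print(stochastic_newton_ridge(zip(X, Y), d=2))
```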
2

Arouna, Bouhari. "Méthodes de Monte Carlo et algorithmes stochastiques." Marne-la-vallée, ENPC, 2004. https://pastel.archives-ouvertes.fr/pastel-00001269.

3

Hajji, Kaouther. "Accélération de la méthode de Monte Carlo pour des processus de diffusions et applications en Finance." Thesis, Paris 13, 2014. http://www.theses.fr/2014PA132054/document.

Abstract:
In this thesis, we are interested in combining variance reduction methods with complexity reduction for the Monte Carlo method. In the first part, we consider a continuous diffusion model for which we construct an adaptive algorithm by applying importance sampling to the Statistical Romberg method, and we prove a central limit theorem of Lindeberg-Feller type for this algorithm. In the same setting and in the same spirit, we apply importance sampling to the Multilevel Monte Carlo method and also prove a central limit theorem for the resulting adaptive algorithm. In the second part, we develop the same type of adaptive algorithm for a discontinuous model, namely Lévy processes, and we prove the associated central limit theorem. Numerical simulations are carried out for the different algorithms obtained in both settings, with and without jumps.
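Adaptive algorithms of this kind typically steer the importance-sampling parameter with a Robbins–Monro iteration. The following Python sketch shows the basic mechanism for a one-dimensional Gaussian drift: a pilot Robbins–Monro phase pushes the drift towards the variance minimiser, then a fresh sample estimates the expectation under the shifted law. The payoff, gains, clipping bound, and two-phase split are illustrative assumptions, not the thesis's tuned algorithms.

```python
import math

import numpy as np

def rm_importance_sampling(f, n_pilot=20000, n_final=20000, c=0.5, seed=0):
    """Estimate E[f(Z)], Z ~ N(0,1), by importance sampling with a
    Gaussian drift theta tuned by a Robbins-Monro pilot phase. The
    clipping to [-5, 5] is a crude stand-in for the projection /
    truncation steps used to stabilise such schemes."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    for k in range(1, n_pilot + 1):
        z = rng.standard_normal()
        # stochastic gradient (in theta) of the estimator's second moment
        g = f(z) ** 2 * (theta - z) * math.exp(-theta * z + 0.5 * theta ** 2)
        theta = float(np.clip(theta - (c / k) * g, -5.0, 5.0))
    z = rng.standard_normal(n_final)
    weights = np.exp(-theta * z - 0.5 * theta ** 2)       # likelihood ratio
    return float(np.mean(f(z + theta) * weights)), theta

# Toy usage: P(Z > 2) ~= 0.0228; the estimator is unbiased for any drift.
est, drift = rm_importance_sampling(lambda z: 1.0 * (z > 2.0))
print(est, drift)
```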
4

Stanley, Leanne M. "Flexible Multidimensional Item Response Theory Models Incorporating Response Styles." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494316298549437.


Conference papers on the topic "Algorithme de Robbins-Monro"

1

Ram, S. Sundhar, V. V. Veeravalli, and A. Nedic. "Incremental Robbins-Monro Gradient Algorithm for Regression in Sensor Networks." In 2007 2nd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing. IEEE, 2007. http://dx.doi.org/10.1109/camsap.2007.4498027.

2

Iooss, Bertrand, and Jérôme Lonchampt. "Robust Tuning of Robbins-Monro Algorithm for Quantile Estimation -- Application to Wind-Farm Asset Management." In Proceedings of the 31st European Safety and Reliability Conference. Singapore: Research Publishing Services, 2021. http://dx.doi.org/10.3850/978-981-18-2016-8_084-cd.
