Selected scientific literature on the topic "Robbins-Monro algorithm"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Robbins-Monro algorithm".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.

Journal articles on the topic "Robbins-Monro algorithm"

1

Moler, José A., Fernando Plo, and Miguel San Miguel. "Adaptive designs and Robbins–Monro algorithm". Journal of Statistical Planning and Inference 131, no. 1 (April 2005): 161–74. http://dx.doi.org/10.1016/j.jspi.2003.12.018.

2

Wardi, Y. "On a proof of a Robbins-Monro algorithm". Journal of Optimization Theory and Applications 64, no. 1 (January 1990): 217. http://dx.doi.org/10.1007/bf00940033.

3

Lin, Siming, and Jennie Si. "Weight-Value Convergence of the SOM Algorithm for Discrete Input". Neural Computation 10, no. 4 (May 1, 1998): 807–14. http://dx.doi.org/10.1162/089976698300017485.

Abstract:
Some insights on the convergence of the weight values of the self-organizing map (SOM) to a stationary state in the case of discrete input are provided. The convergence result is obtained by applying the Robbins-Monro algorithm and is applicable to input-output maps of any dimension.
4

Moser, Barry Kurt, and Melinda H. McCann. "Algorithm AS 316: A Robbins-Monro-based Sequential Procedure". Journal of the Royal Statistical Society: Series C (Applied Statistics) 46, no. 3 (1997): 388–99. http://dx.doi.org/10.1111/1467-9876.00078.

5

El Moumen, AbdelKader, Salim Benslimane, and Samir Rahmani. "Robbins–Monro Algorithm with ψ-Mixing Random Errors". Mathematical Methods of Statistics 31, no. 3 (September 2022): 105–19. http://dx.doi.org/10.3103/s1066530722030024.

6

Xu, Zi, Yingying Li, and Xingfang Zhao. "Simulation-Based Optimization by New Stochastic Approximation Algorithm". Asia-Pacific Journal of Operational Research 31, no. 04 (August 2014): 1450026. http://dx.doi.org/10.1142/s0217595914500262.

Abstract:
This paper proposes one new stochastic approximation algorithm for solving simulation-based optimization problems. It employs a weighted combination of two independent current noisy gradient measurements as the iterative direction. It can be regarded as a stochastic approximation algorithm with a special matrix step size. The almost sure convergence and the asymptotic rate of convergence of the new algorithm are established. Our numerical experiments show that it outperforms the classical Robbins–Monro (RM) algorithm and several other existing algorithms for one noisy nonlinear function minimization problem, several unconstrained optimization problems and one typical simulation-based optimization problem, i.e., (s, S)-inventory problem.
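For context, the classical Robbins–Monro (RM) iteration that this abstract benchmarks against can be sketched as follows; the test function h(x) = x - 2, the Gaussian noise level, and the step sizes a_n = 1/n are illustrative assumptions, not details taken from the paper:

```python
import random

def robbins_monro(noisy_h, x0, steps, rng):
    """Classical Robbins-Monro iteration x_{n+1} = x_n - a_n * Y_n,
    with step sizes a_n = 1/n (satisfying sum a_n = inf, sum a_n^2 < inf)."""
    x = x0
    for n in range(1, steps + 1):
        y = noisy_h(x, rng)   # noisy measurement of h at the current iterate
        x -= y / n            # a_n = 1/n
    return x

# Illustrative root-finding problem: h(x) = x - 2, observed with Gaussian noise.
rng = random.Random(0)
root = robbins_monro(lambda x, r: (x - 2.0) + r.gauss(0.0, 0.5),
                     x0=0.0, steps=5000, rng=rng)
```

The iterate converges almost surely to the zero of h; the two step-size conditions in the docstring are the standard ones guaranteeing that the noise averages out while the iterates can still travel arbitrarily far.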
7

Cai, Li. "Metropolis-Hastings Robbins-Monro Algorithm for Confirmatory Item Factor Analysis". Journal of Educational and Behavioral Statistics 35, no. 3 (June 2010): 307–35. http://dx.doi.org/10.3102/1076998609353115.

8

Chen, Han-Fu. "Stochastic approximation with non-additive measurement noise". Journal of Applied Probability 35, no. 2 (June 1998): 407–17. http://dx.doi.org/10.1239/jap/1032192856.

Abstract:
The Robbins–Monro algorithm with randomly varying truncations for measurements with non-additive noise is considered. Assuming that the function under observation is locally Lipschitz-continuous in its first argument and that the noise is a φ-mixing process, strong consistency of the estimate is shown. Neither growth rate restriction on the function, nor the decreasing rate of the mixing coefficients are required.
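The randomly varying (expanding) truncations mentioned in the abstract above can be illustrated with a minimal sketch; the truncation radii, the linear test problem h(x) = x - 2, and the bounded additive noise are illustrative assumptions (the paper's non-additive noise setting is not reproduced here):

```python
import random

def rm_expanding_truncations(noisy_h, x0, steps, rng):
    """Robbins-Monro with expanding truncations: if a step would leave the
    current ball, reset the iterate to x0 and enlarge the ball."""
    x, k = x0, 0                           # k counts truncations so far
    for n in range(1, steps + 1):
        cand = x - noisy_h(x, rng) / n     # tentative RM step, a_n = 1/n
        if abs(cand) <= 2.0 * 2 ** k:      # inside the current truncation ball?
            x = cand
        else:                              # truncate: reset and expand the ball
            x, k = x0, k + 1
    return x

# Illustrative linear problem, observed with bounded uniform noise.
rng = random.Random(2)
est = rm_expanding_truncations(lambda x, r: (x - 2.0) + r.uniform(-1.0, 1.0),
                               x0=0.0, steps=5000, rng=rng)
```

The point of the truncation device is that no growth restriction on h is needed: only finitely many truncations occur along a convergent trajectory, so the algorithm eventually behaves like the untruncated recursion.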
9

Buckland, S. T., and P. H. Garthwaite. "Algorithm AS 259: Estimating Confidence Intervals by the Robbins-Monro Search Process". Applied Statistics 39, no. 3 (1990): 413. http://dx.doi.org/10.2307/2347401.


Theses / dissertations on the topic "Robbins-Monro algorithm"

1

Lu, Wei. "Méthodes stochastiques du second ordre pour le traitement séquentiel de données massives". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR13.

Abstract:
With the rapid development of technologies and the acquisition of big data, methods capable of processing data sequentially (online) have become indispensable. Among these methods, stochastic gradient algorithms have been established for estimating the minimizer of a function expressed as the expectation of a random function. Although they have become essential, these algorithms encounter difficulties when the problem is ill-conditioned. In this thesis, we focus on second-order stochastic algorithms, such as those of the Newton type, and their applications to various statistical and optimization problems. After establishing theoretical foundations and exposing the motivations that lead us to explore stochastic Newton algorithms, we develop the various contributions of this thesis. The first contribution concerns the study and development of stochastic Newton algorithms for ridge linear regression and ridge logistic regression. These algorithms are based on the Riccati formula (Sherman-Morrison) to recursively estimate the inverse of the Hessian. As the acquisition of big data is generally accompanied by a contamination of the latter, in a second contribution, we focus on the online estimation of the geometric median, which is a robust indicator, i.e., not very sensitive to the presence of atypical data. More specifically, we propose a new stochastic Newton estimator to estimate the geometric median. In the first two contributions, the estimators of the Hessians' inverses are constructed using the Riccati formula, but this is only possible for certain functions. Thus, our third contribution introduces a new Robbins-Monro type method for online estimation of the Hessian's inverse, allowing us then to develop universal stochastic Newton algorithms. Finally, our last contribution focuses on Full Adagrad type algorithms, where the difficulty lies in the fact that there is an adaptive step based on the square root of the inverse of the gradient's covariance. We thus propose a Robbins-Monro type algorithm to estimate this matrix, allowing us to propose a recursive approach for Full AdaGrad and its streaming version, with reduced computational costs. For all the new estimators we propose, we establish their convergence rates as well as their asymptotic efficiency. Moreover, we illustrate the efficiency of these algorithms using numerical simulations and by applying them to real data.
2

Stanley, Leanne M. "Flexible Multidimensional Item Response Theory Models Incorporating Response Styles". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494316298549437.

3

Arouna, Bouhari. "Méthodes de Monte Carlo et algorithmes stochastiques". Marne-la-vallée, ENPC, 2004. https://pastel.archives-ouvertes.fr/pastel-00001269.

4

Hajji, Kaouther. "Accélération de la méthode de Monte Carlo pour des processus de diffusions et applications en Finance". Thesis, Paris 13, 2014. http://www.theses.fr/2014PA132054/document.

Abstract:
In this thesis, we are interested in studying the combination of variance reduction methods and complexity improvement of the Monte Carlo method. In the first part of this thesis, we consider a continuous diffusion model for which we construct an adaptive algorithm by applying importance sampling to the Statistical Romberg method. Then, we prove a central limit theorem of Lindeberg-Feller type for this algorithm. In the same setting and in the same spirit, we apply importance sampling to the Multilevel Monte Carlo method. We also prove a central limit theorem for the obtained adaptive algorithm. In the second part of this thesis, we develop the same type of adaptive algorithm for a discontinuous model, namely Lévy processes, and we prove the associated central limit theorem. Numerical simulations are carried out for the different algorithms obtained in both settings, with and without jumps.

Conference papers on the topic "Robbins-Monro algorithm"

1

Ram, S. Sundhar, V. V. Veeravalli, and A. Nedic. "Incremental Robbins-Monro Gradient Algorithm for Regression in Sensor Networks". In 2007 2nd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing. IEEE, 2007. http://dx.doi.org/10.1109/camsap.2007.4498027.

2

Iooss, Bertrand, and Jérôme Lonchampt. "Robust Tuning of Robbins-Monro Algorithm for Quantile Estimation -- Application to Wind-Farm Asset Management". In Proceedings of the 31st European Safety and Reliability Conference. Singapore: Research Publishing Services, 2021. http://dx.doi.org/10.3850/978-981-18-2016-8_084-cd.

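The quantile-estimation use of the Robbins–Monro search named in the title above can be sketched as follows; the target distribution, the step-size constant c, and the iteration budget are illustrative assumptions, and the paper's robust tuning of these constants is not reproduced here:

```python
import math
import random

def rm_quantile(sample, p, theta0, steps, c, rng):
    """Robbins-Monro search for the p-quantile of a distribution we can
    sample from: theta_{n+1} = theta_n + (c/n) * (p - 1{X_n <= theta_n})."""
    theta = theta0
    for n in range(1, steps + 1):
        x = sample(rng)
        theta += (c / n) * (p - (1.0 if x <= theta else 0.0))
    return theta

# Illustrative example: median of an Exponential(1) distribution,
# whose true value is ln 2.
rng = random.Random(1)
median = rm_quantile(lambda r: r.expovariate(1.0), p=0.5, theta0=0.0,
                     steps=20000, c=5.0, rng=rng)
```

Each step nudges the estimate up when the observation exceeds it and down otherwise, so the estimate settles where a fraction p of the mass lies below; the choice of c matters because convergence rates degrade when c times the density at the quantile is too small, which is precisely the tuning problem the paper addresses.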