Selected scientific literature on the topic "Algorithme de Robbins-Monro"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles


Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Algorithme de Robbins-Monro".

Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online if it is present in the metadata.

Journal articles on the topic "Algorithme de Robbins-Monro"

1

Moler, José A., Fernando Plo, and Miguel San Miguel. "Adaptive designs and Robbins–Monro algorithm". Journal of Statistical Planning and Inference 131, no. 1 (April 2005): 161–74. http://dx.doi.org/10.1016/j.jspi.2003.12.018.

Full text
2

Xu, Zi, Yingying Li, and Xingfang Zhao. "Simulation-Based Optimization by New Stochastic Approximation Algorithm". Asia-Pacific Journal of Operational Research 31, no. 4 (August 2014): 1450026. http://dx.doi.org/10.1142/s0217595914500262.

Full text
Abstract:
This paper proposes one new stochastic approximation algorithm for solving simulation-based optimization problems. It employs a weighted combination of two independent current noisy gradient measurements as the iterative direction. It can be regarded as a stochastic approximation algorithm with a special matrix step size. The almost sure convergence and the asymptotic rate of convergence of the new algorithm are established. Our numerical experiments show that it outperforms the classical Robbins–Monro (RM) algorithm and several other existing algorithms on a noisy nonlinear function minimization problem, several unconstrained optimization problems, and one typical simulation-based optimization problem, i.e., the (s, S) inventory problem.
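For reference, the classical Robbins–Monro iteration that the abstract compares against can be sketched in a few lines, together with a schematic version of a direction built from two independent noisy gradient measurements. This is a minimal sketch only: the scalar weight lam below is a hypothetical stand-in for the paper's matrix step size, and the quadratic test function is an illustrative assumption.

```python
import numpy as np

def robbins_monro(noisy_grad, x0, a=1.0, n_iter=10_000, seed=0):
    """Classical Robbins-Monro: x_{n+1} = x_n - (a/n) * Y_n, where Y_n
    is a noisy measurement of the gradient at x_n."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        x = x - (a / n) * noisy_grad(x, rng)
    return x

def two_measurement_rm(noisy_grad, x0, lam=0.5, a=1.0, n_iter=10_000, seed=0):
    """Schematic variant in the spirit of the abstract: the direction is
    a weighted combination of two independent noisy gradient
    measurements at the same iterate (lam is hypothetical, not the
    paper's matrix step size)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        d = lam * noisy_grad(x, rng) + (1.0 - lam) * noisy_grad(x, rng)
        x = x - (a / n) * d
    return x

# Noisy gradient of f(x) = ||x||^2 / 2: the true gradient is x itself.
noisy_grad = lambda x, rng: x + rng.normal(scale=0.5, size=x.shape)
print(robbins_monro(noisy_grad, np.array([5.0, -3.0])))       # ~ [0, 0]
print(two_measurement_rm(noisy_grad, np.array([5.0, -3.0])))  # ~ [0, 0]
```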
3

Arouna, Bouhari. "Robbins–Monro algorithms and variance reduction in finance". Journal of Computational Finance 7, no. 2 (2003): 35–61. http://dx.doi.org/10.21314/jcf.2003.111.

Full text
4

Wardi, Y. "On a proof of a Robbins-Monro algorithm". Journal of Optimization Theory and Applications 64, no. 1 (January 1990): 217. http://dx.doi.org/10.1007/bf00940033.

Full text
5

Lin, Siming, and Jennie Si. "Weight-Value Convergence of the SOM Algorithm for Discrete Input". Neural Computation 10, no. 4 (May 1, 1998): 807–14. http://dx.doi.org/10.1162/089976698300017485.

Full text
Abstract:
Some insights on the convergence of the weight values of the self-organizing map (SOM) to a stationary state in the case of discrete input are provided. The convergence result is obtained by applying the Robbins-Monro algorithm and is applicable to input-output maps of any dimension.
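The link the abstract draws between the SOM update and the Robbins-Monro algorithm can be made concrete with a minimal one-dimensional sketch for a discrete input distribution. The neighborhood function is deliberately omitted here (only the winning unit is updated), so this is a simplification and not the paper's full setting.

```python
import numpy as np

def som_1d(data, n_units=4, n_iter=20_000, a=1.0, seed=0):
    """1-D SOM with discrete input, written as a Robbins-Monro
    recursion: the winning weight moves toward the sample with a
    decreasing step a/n."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(data.min(), data.max(), size=n_units)
    for n in range(1, n_iter + 1):
        x = rng.choice(data)              # sample from the discrete input
        i = np.argmin(np.abs(w - x))      # winning unit
        w[i] += (a / n) * (x - w[i])      # Robbins-Monro step
    return np.sort(w)

# Each weight converges to the mean of the inputs its unit wins.
print(som_1d(np.array([0.0, 1.0, 4.0, 9.0])))
```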
6

Moser, Barry Kurt, and Melinda H. McCann. "Algorithm AS 316: A Robbins-Monro-based Sequential Procedure". Journal of the Royal Statistical Society: Series C (Applied Statistics) 46, no. 3 (1997): 388–99. http://dx.doi.org/10.1111/1467-9876.00078.

Full text
7

El Moumen, AbdelKader, Salim Benslimane, and Samir Rahmani. "Robbins–Monro Algorithm with ψ-Mixing Random Errors". Mathematical Methods of Statistics 31, no. 3 (September 2022): 105–19. http://dx.doi.org/10.3103/s1066530722030024.

Full text
8

Cai, Li. "Metropolis-Hastings Robbins-Monro Algorithm for Confirmatory Item Factor Analysis". Journal of Educational and Behavioral Statistics 35, no. 3 (June 2010): 307–35. http://dx.doi.org/10.3102/1076998609353115.

Full text
9

Chen, Han-Fu. "Stochastic approximation with non-additive measurement noise". Journal of Applied Probability 35, no. 2 (June 1998): 407–17. http://dx.doi.org/10.1239/jap/1032192856.

Full text
Abstract:
The Robbins–Monro algorithm with randomly varying truncations for measurements with non-additive noise is considered. Assuming that the function under observation is locally Lipschitz-continuous in its first argument and that the noise is a φ-mixing process, strong consistency of the estimate is shown. Neither a growth-rate restriction on the function nor a decay-rate condition on the mixing coefficients is required.
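A minimal sketch of the expanding-truncation device the abstract refers to: whenever the candidate iterate leaves the current truncation ball, the algorithm restarts and the ball is enlarged. The test function, noise, and radius schedule below are illustrative assumptions; the precise conditions are those of the paper.

```python
import numpy as np

def rm_expanding_truncations(noisy_g, x0, radius, a=1.0, n_iter=50_000, seed=0):
    """Robbins-Monro with randomly varying truncations: if the candidate
    iterate leaves the ball of radius radius(k), restart from x0 and
    enlarge the ball (k -> k+1); otherwise accept the candidate."""
    rng = np.random.default_rng(seed)
    x, k = np.asarray(x0, dtype=float), 0
    for n in range(1, n_iter + 1):
        cand = x - (a / n) * noisy_g(x, rng)
        if np.linalg.norm(cand) > radius(k):
            x, k = np.asarray(x0, dtype=float), k + 1   # truncate and expand
        else:
            x = cand
    return x

# Noisy observations of g(x) = sin(x) + x, whose unique root is 0.
noisy_g = lambda x, rng: np.sin(x) + x + rng.normal(scale=1.0, size=x.shape)
print(rm_expanding_truncations(noisy_g, x0=np.array([0.5]), radius=lambda k: 2.0 ** k))
```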

Theses on the topic "Algorithme de Robbins-Monro"

1

Lu, Wei. "Méthodes stochastiques du second ordre pour le traitement séquentiel de données massives". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR13.

Full text
Abstract:
With the rapid development of technology and the acquisition of increasingly massive data sets, methods capable of processing data sequentially (online) have become indispensable. Among these, stochastic gradient algorithms are the established tool for estimating the minimizer of a function expressed as the expectation of a random function. Essential as they are, these algorithms run into difficulties when the problem is ill-conditioned. This thesis focuses on second-order stochastic algorithms, such as those of Newton type, and on their applications to various statistical and optimization problems. After establishing the theoretical foundations and the motivations for exploring stochastic Newton algorithms, we develop the contributions of the thesis. The first contribution concerns the study and development of stochastic Newton algorithms for ridge linear regression and ridge logistic regression; these algorithms rely on the Riccati (Sherman-Morrison) formula to estimate the inverse of the Hessian recursively. Since the acquisition of massive data generally comes with contamination, the second contribution addresses the online estimation of the geometric median, a robust indicator that is largely insensitive to atypical data; more precisely, we propose a new stochastic Newton estimator of the geometric median. In the first two contributions the inverse-Hessian estimators are built with the Riccati formula, which is only possible for certain functions, so the third contribution introduces a new Robbins-Monro-type method for the online estimation of the inverse of the Hessian, which in turn allows us to develop so-called universal stochastic Newton algorithms. Finally, the last contribution focuses on Full AdaGrad-type algorithms, where the difficulty lies in an adaptive step based on the square root of the inverse covariance of the gradient; we propose a Robbins-Monro-type algorithm to estimate this matrix, yielding a recursive approach to Full AdaGrad and its streaming version with reduced computational cost. For all the new estimators we propose, we establish convergence rates and asymptotic efficiency, and we illustrate their performance through numerical simulations and applications to real data.
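The Riccati (Sherman-Morrison) recursion mentioned in the abstract can be sketched briefly. The rank-one ridge design below is an assumption used only to make the update concrete; it is not the thesis's exact estimator.

```python
import numpy as np

def sherman_morrison_update(S_inv, phi):
    """Given S_inv = S^{-1}, return (S + phi phi^T)^{-1} in O(d^2) via
    the Sherman-Morrison (Riccati) formula, avoiding matrix inversion."""
    v = S_inv @ phi
    return S_inv - np.outer(v, v) / (1.0 + phi @ v)

# Running inverse of S_n = lam*I + sum_i phi_i phi_i^T, the kind of
# quantity used to track an inverse Hessian in ridge regression.
rng = np.random.default_rng(0)
d, lam = 3, 1.0
S_inv = np.eye(d) / lam
phis = rng.normal(size=(500, d))
for phi in phis:
    S_inv = sherman_morrison_update(S_inv, phi)

S = lam * np.eye(d) + phis.T @ phis
print(np.allclose(S_inv, np.linalg.inv(S)))  # True
```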
2

Arouna, Bouhari. "Méthodes de Monte Carlo et algorithmes stochastiques". Marne-la-vallée, ENPC, 2004. https://pastel.archives-ouvertes.fr/pastel-00001269.

Full text
3

Hajji, Kaouther. "Accélération de la méthode de Monte Carlo pour des processus de diffusions et applications en Finance". Thesis, Paris 13, 2014. http://www.theses.fr/2014PA132054/document.

Full text
Abstract:
This thesis studies the combination of variance reduction methods with complexity improvements of the Monte Carlo method. In the first part, we consider a continuous diffusion model for which we construct an adaptive algorithm by applying importance sampling to the Statistical Romberg method, and we prove a central limit theorem of Lindeberg-Feller type for this algorithm. In the same setting and in the same spirit, we apply importance sampling to the Multilevel Monte Carlo method and likewise prove a central limit theorem for the resulting adaptive algorithm. In the second part, we develop the same type of adaptive algorithm for a discontinuous model, namely Lévy processes, and prove the associated central limit theorem. Numerical experiments are reported for the algorithms obtained in both settings, with and without jumps.
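The adaptive algorithms in this line of work (see also the Arouna entries above) drive an importance sampling parameter with a Robbins-Monro recursion. The following is a minimal sketch of that idea for a Gaussian drift shift; the payoff, the step-size schedule, and the omission of the projection step that stabilizes the recursion in practice are all illustrative assumptions.

```python
import numpy as np

def adaptive_is(f, n=100_000, gamma=0.01, seed=0):
    """Estimate E[f(X)], X ~ N(0,1), while tuning the drift theta of a
    Gaussian change of measure by a Robbins-Monro step on the second
    moment of the shifted estimator."""
    rng = np.random.default_rng(seed)
    theta, total = 0.0, 0.0
    for i in range(1, n + 1):
        x = rng.normal()
        # Unbiased shifted estimator; theta depends on past samples only.
        total += f(x + theta) * np.exp(-theta * x - theta**2 / 2)
        # RM step on the gradient of E[f(X)^2 exp(-theta X + theta^2/2)],
        # the second moment of the shifted estimator.
        grad = f(x) ** 2 * (theta - x) * np.exp(-theta * x + theta**2 / 2)
        theta -= (gamma / i ** 0.75) * grad
    return total / n

# Rare-event style payoff where shifting the sampling drift helps.
f = lambda x: max(np.exp(x) - 5.0, 0.0)
print(adaptive_is(f))  # compare with a plain Monte Carlo average of f(X)
```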
4

Stanley, Leanne M. "Flexible Multidimensional Item Response Theory Models Incorporating Response Styles". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494316298549437.

Full text

Conference papers on the topic "Algorithme de Robbins-Monro"

1

Ram, S. Sundhar, V. V. Veeravalli, and A. Nedic. "Incremental Robbins-Monro Gradient Algorithm for Regression in Sensor Networks". In 2007 2nd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing. IEEE, 2007. http://dx.doi.org/10.1109/camsap.2007.4498027.

Full text
2

Iooss, Bertrand, and Jérôme Lonchampt. "Robust Tuning of Robbins-Monro Algorithm for Quantile Estimation -- Application to Wind-Farm Asset Management". In Proceedings of the 31st European Safety and Reliability Conference. Singapore: Research Publishing Services, 2021. http://dx.doi.org/10.3850/978-981-18-2016-8_084-cd.

Full text
