Selected scholarly literature on the topic "Convergence of Markov processes"

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Convergence of Markov processes."

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the citation style of your choice (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of a publication as a PDF and read its online abstract whenever the relevant parameters are available in the metadata.

Journal articles on the topic "Convergence of Markov processes"

1. Abakuks, A., S. N. Ethier, and T. G. Kurtz. "Markov Processes: Characterization and Convergence." Biometrics 43, no. 2 (June 1987): 484. http://dx.doi.org/10.2307/2531839.

2. Perkins, Edwin, S. N. Ethier, and T. G. Kurtz. "Markov Processes, Characterization and Convergence." Journal of the Royal Statistical Society, Series A (Statistics in Society) 151, no. 2 (1988): 367. http://dx.doi.org/10.2307/2982773.

3. Franz, Uwe, Volkmar Liebscher, and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes." Advances in Applied Probability 44, no. 3 (September 2012): 729–48. http://dx.doi.org/10.1239/aap/1346955262.
Abstract: A classical result about Markov jump processes states that a certain class of dynamical systems given by ordinary differential equations is obtained as the limit of a sequence of scaled Markov jump processes. This approach fails if the scaling cannot be carried out equally across all entities. In the present paper we prove a convergence theorem for such an unequal scaling. In contrast to an equal scaling, the limit process is not purely deterministic but still possesses randomness. We show that these processes constitute a rich subclass of piecewise-deterministic processes. Such processes apply in molecular biology, where entities often occur in different scales of numbers.

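As a quick illustration of the classical equal-scaling result this abstract takes as its starting point (Kurtz's ODE limit for density-dependent jump processes), here is a minimal simulation sketch; the immigration-death model and all parameter values are hypothetical choices, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def immigration_death(N, lam, mu, T):
        """Gillespie simulation: births at rate N*lam, deaths at rate mu*x."""
        t, x, ts, xs = 0.0, 0, [0.0], [0]
        while t < T:
            up, down = N * lam, mu * x
            t += rng.exponential(1.0 / (up + down))
            x += 1 if rng.random() < up / (up + down) else -1
            ts.append(t)
            xs.append(x)
        return np.array(ts), np.array(xs)

    # Equal scaling: X_t / N converges to the solution of x' = lam - mu*x.
    lam, mu, T = 2.0, 1.0, 5.0
    for N in (10, 100, 1000):
        ts, xs = immigration_death(N, lam, mu, T)
        ode = (lam / mu) * (1.0 - np.exp(-mu * ts))  # exact solution, x(0) = 0
        print(f"N = {N:4d}   sup |X_t/N - x(t)| ~ {np.max(np.abs(xs / N - ode)):.3f}")

The printed distance shrinks as N grows, which is exactly the deterministic limit that the unequal scaling studied in the paper does not produce.
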
4. Hwang, Chii-Ruey. "Accelerating Monte Carlo Markov Processes." COSMOS 1, no. 1 (May 2005): 87–94. http://dx.doi.org/10.1142/s0219607705000085.
Abstract: Let π be a probability density proportional to exp(-U(x)) in S. A Markov process converging to π(x) may be regarded as a "conceptual" algorithm. Assume that S is a finite set. Let X0, X1, …, Xn, … be a Markov chain with transition matrix P and invariant probability π. Under suitable conditions on P, it is known that the ergodic average (1/n)∑_{i=0}^{n-1} f(X_i) converges to π(f), and the corresponding asymptotic variance v(f, P) depends only on f and P. It is natural to consider criteria v_w(P) and v_a(P), defined respectively by maximizing and averaging v(f, P) over f. Two families of transition matrices are considered. There are four problems to be investigated. Some results and conjectures are given. As for the continuum case, to accelerate the convergence a family of diffusions with drift ∇U(x) + C(x), where div(C(x) exp(-U(x))) = 0, is considered.

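The ergodic average in this abstract is easy to see numerically; below is a minimal sketch for a toy three-state chain (the matrix P and test function f are arbitrary illustrative choices, not from the paper).

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 3-state chain: P is an arbitrary stochastic matrix, f a test function.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])
    f = np.array([1.0, -2.0, 0.5])

    # Invariant probability pi: left eigenvector of P for eigenvalue 1.
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    pi /= pi.sum()

    # Ergodic average (1/n) * sum f(X_i) along one trajectory.
    x, n, total = 0, 200_000, 0.0
    for _ in range(n):
        total += f[x]
        x = rng.choice(3, p=P[x])
    print("ergodic average:", total / n, "   pi(f):", float(pi @ f))
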
5. Aldous, David J. "Book Review: Markov Processes: Characterization and Convergence." Bulletin of the American Mathematical Society 16, no. 2 (April 1, 1987): 315–19. http://dx.doi.org/10.1090/s0273-0979-1987-15533-9.

6. Swishchuk, Anatoliy, and M. Shafiqul Islam. "Diffusion Approximations of the Geometric Markov Renewal Processes and Option Price Formulas." International Journal of Stochastic Analysis 2010 (December 19, 2010): 1–21. http://dx.doi.org/10.1155/2010/347105.
Abstract: We consider the geometric Markov renewal processes as a model for a security market and study these processes in a diffusion approximation scheme. Weak convergence analysis and rates of convergence of ergodic geometric Markov renewal processes in the diffusion scheme are presented. We present European call option pricing formulas in the case of ergodic, double-averaged, and merged diffusion geometric Markov renewal processes.

7. Crank, Keith N., and Prem S. Puri. "A Method of Approximating Markov Jump Processes." Advances in Applied Probability 20, no. 1 (March 1988): 33–58. http://dx.doi.org/10.2307/1427269.
Abstract: We present a method of approximating Markov jump processes which was used by Fuhrmann [7] in a special case. We generalize the method and prove weak convergence results under mild assumptions. In addition we obtain bounds on the rates of convergence of the probabilities at arbitrary fixed times. The technique is demonstrated using a state-dependent branching process as an example.

8. Deng, Chang-Song, René L. Schilling, and Yan-Hong Song. "Subgeometric Rates of Convergence for Markov Processes under Subordination." Advances in Applied Probability 49, no. 1 (March 2017): 162–81. http://dx.doi.org/10.1017/apr.2016.83.
Abstract: We are interested in the rate of convergence of a subordinate Markov process to its invariant measure. Given a subordinator and the corresponding Bernstein function (Laplace exponent), we characterize the convergence rate of the subordinate Markov process; the key ingredients are the rate of convergence of the original process and the (inverse of the) Bernstein function. At a technical level, the crucial point is to bound three types of moments (subexponential, algebraic, and logarithmic) for subordinators as time t tends to ∞. We also discuss some concrete models and show that subordination can dramatically change the speed of convergence to equilibrium.

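For orientation, the objects named in this abstract can be written down in two standard formulas (textbook definitions, not reproduced from the paper):

    \[
      \mathbb{E}\,e^{-\lambda S_t} = e^{-t\,\phi(\lambda)} \quad (\lambda > 0),
      \qquad
      X_t^{\phi} := X_{S_t},
    \]
    \[
      \bigl\| P_t^{\phi} f - \pi(f) \bigr\|
        = \bigl\| \mathbb{E}\,[P_{S_t} f] - \pi(f) \bigr\|
        \le \mathbb{E}\, r(S_t)
      \quad \text{whenever } \| P_t f - \pi(f) \| \le r(t),
    \]

so moment bounds for the subordinator S_t translate directly into a convergence rate for the subordinate process, which is why the three types of moments matter.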

Dissertations on the topic "Convergence of Markov processes"

1. Hahn, Léo. "Interacting Run-and-Tumble Particles as Piecewise Deterministic Markov Processes: Invariant Distribution and Convergence." Electronic thesis or dissertation, Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.
Abstract: 1. Simulating active and metastable systems with piecewise deterministic Markov processes (PDMPs): Which dynamics should be chosen to simulate metastable states efficiently? How can the non-equilibrium nature of PDMPs be exploited directly to study the physical systems being modeled? 2. Modeling active systems with PDMPs: What conditions must a system meet to be modeled by a PDMP? In which cases does the system have a stationary distribution? How can dynamic quantities (e.g., transition rates) be calculated in this framework? 3. Improving simulation techniques for equilibrium systems: Can results obtained for non-equilibrium systems be used to accelerate the simulation of equilibrium systems? How can topological information be used to adapt the dynamics in real time?

2. Pötzelberger, Klaus. "On the Approximation of Finite Markov-Exchangeable Processes by Mixtures of Markov Processes." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/526/1/document.pdf.
Abstract: We give an upper bound for the norm distance of {0,1}-valued Markov-exchangeable random variables to mixtures of distributions of Markov processes. A Markov-exchangeable random variable has a distribution that depends only on the starting value and the numbers of transitions 0-0, 0-1, 1-0, and 1-1. We show that if, for increasing length of variables, the norm distance to mixtures of Markov processes goes to 0, the rate of this convergence may be arbitrarily slow. (author's abstract)
Series: Forschungsberichte / Institut für Statistik

3. Drozdenko, Myroslav. "Weak Convergence of First-Rare-Event Times for Semi-Markov Processes." Doctoral thesis, Västerås: Mälardalen University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-394.

4. Yuen, Wai Kong. "Application of Geometric Bounds to Convergence Rates of Markov Chains and Markov Processes on R^n." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ58619.pdf.

5. Kaijser, Thomas. "Convergence in Distribution for Filtering Processes Associated to Hidden Markov Models with Densities." Linköpings universitet, Matematik och tillämpad matematik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-92590.
Abstract: A Hidden Markov Model generates two basic stochastic processes: a Markov chain, which is hidden, and an observation sequence. The filtering process of a Hidden Markov Model is, roughly speaking, the sequence of conditional distributions of the hidden Markov chain that is obtained as new observations are received. It is well known that the filtering process itself is also a Markov chain. A classical theoretical problem is to find conditions which imply that the distributions of the filtering process converge towards a unique limit measure. This problem goes back to a paper of D. Blackwell for the case when the Markov chain takes its values in a finite set, and to a paper of H. Kunita for the case when the state space of the Markov chain is a compact Hausdorff space. Recently, due to work by F. Kochmann, J. Reeds, P. Chigansky and R. van Handel, a necessary and sufficient condition for the convergence of the distributions of the filtering process has been found for the case when the state space is finite. This condition has since been generalised to the case when the state space is denumerable. In this paper we generalise some of the previous results on convergence in distribution to the case when the Markov chain and the observation sequence of a Hidden Markov Model take their values in complete, separable metric spaces; it has, though, been necessary to assume that both the transition probability function of the Markov chain and the transition probability function that generates the observation sequence have densities.

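As a concrete companion to the abstract's description of the filtering process, here is a minimal sketch of one step of the filter recursion for a finite-state HMM (all matrices are hypothetical toy values; the thesis itself works in general complete, separable metric spaces with densities).

    import numpy as np

    def filter_step(p, A, likelihood):
        """One filtering update: predict with the transition matrix A,
        correct with the likelihood of the new observation, renormalize."""
        q = (p @ A) * likelihood
        return q / q.sum()

    A = np.array([[0.9, 0.1],
                  [0.2, 0.8]])        # hidden-chain transition matrix
    B = np.array([[0.8, 0.2],
                  [0.3, 0.7]])        # B[state, observation] emission probabilities
    p = np.array([0.5, 0.5])          # prior on the hidden state

    for y in [0, 0, 1, 1, 1]:         # a hypothetical observation sequence
        p = filter_step(p, A, B[:, y])
        print(p)                      # conditional distribution of the hidden state

Each printed vector is one term of the filtering process, i.e. the Markov chain of conditional distributions whose convergence in distribution the thesis studies.
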
6. Lachaud, Béatrice. "Détection de la convergence de processus de Markov." PhD thesis, Université René Descartes - Paris V, 2005. http://tel.archives-ouvertes.fr/tel-00010473.
Abstract: This work studies the cutoff phenomenon for n-samples of Markov processes, with the aim of applying it to the detection of convergence of parallelized algorithms. First, the sampled process is an Ornstein-Uhlenbeck process. We exhibit the cutoff phenomenon for the n-sample and then relate it to the convergence in distribution of the hitting time of a fixed level by the average process. Second, we treat the general case, where the sampled process converges at exponential rate to its stationary distribution. We give precise estimates of the distances between the distribution of the n-sample and its stationary distribution. Finally, we explain how to address the hitting-time problems linked to the cutoff phenomenon.

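To make the first part of the abstract concrete, the following sketch simulates the average of an n-sample of Ornstein-Uhlenbeck processes and records when it first hits a fixed level; as n grows, the hitting time concentrates near (1/theta)*ln(x0/a), the abrupt transition behind the cutoff. All parameter values are illustrative, not from the thesis.

    import numpy as np

    rng = np.random.default_rng(2)

    def hitting_time_of_mean(n, x0=5.0, theta=1.0, sigma=1.0, a=0.5,
                             dt=1e-3, tmax=20.0):
        """First time the average of n i.i.d. OU paths (all started at x0)
        drops to the level a; exact OU transition on a time grid."""
        x = np.full(n, x0)
        decay = np.exp(-theta * dt)
        noise = sigma * np.sqrt((1 - decay**2) / (2 * theta))
        t = 0.0
        while t < tmax:
            if x.mean() <= a:
                return t
            x = decay * x + noise * rng.standard_normal(n)
            t += dt
        return np.inf

    for n in (10, 100, 1000):
        times = [hitting_time_of_mean(n) for _ in range(20)]
        print(f"n = {n:5d}   mean hitting time ~ {np.mean(times):.2f}")

With these values the limit is ln(10) ≈ 2.3, and the spread of the hitting times shrinks with n.
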
7. Fisher, Diana. "Convergence Analysis of MCMC Method in the Study of Genetic Linkage with Missing Data." Huntington, WV: [Marshall University Libraries], 2005. http://www.marshall.edu/etd/descript.asp?ref=568.

8. Wang, Xinyu. "Sur la convergence sous-exponentielle de processus de Markov." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00840858.
Abstract: This thesis focuses mainly on the long-time behavior of Markov processes, functional inequalities, and related techniques. More specifically, I present explicit subexponential convergence rates for Markov processes via two approaches: the Meyn-Tweedie method and (weak) hypocoercivity. The document is divided into three parts. In the first part, I present some important results and related background. First, an overview of my research field is given. Exponential (or subexponential) convergence of Markov chains and of continuous-time Markov processes is a topical subject in probability theory. The traditional method, developed and popularized by Meyn-Tweedie, is widely used for this problem. In most results the convergence rate is not explicit, and some of them are briefly presented. Moreover, the Lyapunov function is crucial in the Meyn-Tweedie approach, and it is also related to certain functional inequalities (for example, the Poincaré inequality). This relation between Lyapunov functions and functional inequalities is presented together with the results in the L2 sense. In addition, for the example of the kinetic Fokker-Planck equation, an explicit exponential convergence result for the solution is introduced in the manner of Villani: hypocoercivity. These contents are the foundations of my work, and my goal is to study subexponential decay. The second part is an article written in collaboration with others on explicit subexponential convergence rates of continuous-time Markov processes. As is well known, results on explicit convergence rates have been given for the exponential case; we extend them to the subexponential case via the Meyn-Tweedie approach. The key of the proof is the estimate of the hitting time of a petite set, obtained by Douc, Fort and Guillin, for which we give a simpler proof. We also use the coupling construction and give an explicit subexponential ergodicity. Finally, we give some numerical applications. In the last part, my second article deals with the kinetic Fokker-Planck equation. I extend hypocoercivity to weak hypocoercivity, which corresponds to the weak Poincaré inequality. Thanks to this extension, one can obtain explicit convergence rates of the solution in subexponential cases, in the H1 sense and in the L2 sense. At the end of the document, I study the relative entropy case, following Villani, and obtain convergence in the entropy sense. Finally, I give two examples of potentials that imply the weak Poincaré inequality or the weak logarithmic Sobolev inequality for the invariant measure.

9. Bouguet, Florian. "Étude quantitative de processus de Markov déterministes par morceaux issus de la modélisation." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S040/document.
Abstract: The purpose of this Ph.D. thesis is the study of piecewise deterministic Markov processes, which are often used to model natural phenomena. Precisely, we focus on their long-time behavior as well as their speed of convergence to equilibrium whenever they possess a stationary probability measure. Providing sharp quantitative bounds for this speed of convergence is one of the main orientations of this manuscript, which is usually done through coupling methods. We emphasize the link between Markov processes and the mathematical fields of research where they may be of interest, such as partial differential equations. The last chapter of this thesis is devoted to the introduction of a unified approach to study the long-time behavior of inhomogeneous Markov chains, which can provide functional limit theorems with the help of asymptotic pseudotrajectories.

10. Chotard, Alexandre. "Markov Chain Analysis of Evolution Strategies." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.
Abstract: In this dissertation an analysis of evolution strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e. information on the function to be optimized is limited to the values it associates to points; in particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of Markov chains underlying these algorithms. The proofs of log-linear convergence and of divergence obtained in this thesis in the context of a linear function with or without constraint are essential components for the proofs of convergence of ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented. First, general mathematical tools applicable to a wider range of problems are developed. These tools allow one to easily prove specific Markov chain properties (irreducibility, aperiodicity, and the fact that compact sets are small sets for the Markov chain) on the Markov chains studied; obtaining these properties without these tools is an ad hoc, tedious, and technical process that can be of very high difficulty. Second, different ESs are analyzed on different problems. We study a (1,λ)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space. Then we study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle unfeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on parameters of the problem and of the ES, the algorithm converges or diverges log-linearly; we then investigate the dependence of the convergence or divergence rate on these parameters. Finally we study an ES with a possibly non-Gaussian sampling distribution and constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, which implies that adapting the covariance matrix may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint. Finally, these results are summed up, discussed, and perspectives for future work are explored.

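Since the abstract's central object is a (1,λ)-ES with cumulative step-size adaptation on a linear function, here is a minimal, self-contained sketch of that algorithm (standard CSA update formulas with common default parameters; a toy illustration under those assumptions, not the thesis's implementation).

    import numpy as np

    rng = np.random.default_rng(3)

    # (1,lambda)-ES with cumulative step-size adaptation (CSA), maximizing
    # the linear function f(x) = x[0].
    dim, lam, n_iter = 10, 6, 3000
    c = 4.0 / (dim + 4.0)                 # cumulation parameter
    d = 1.0                               # damping (simple common choice)
    e_norm = np.sqrt(dim) * (1 - 1/(4*dim) + 1/(21*dim**2))  # approx E||N(0,I)||

    x, sigma, path = np.zeros(dim), 1.0, np.zeros(dim)
    for _ in range(n_iter):
        zs = rng.standard_normal((lam, dim))
        best = zs[np.argmax(zs[:, 0])]    # step with the best f-value among lambda
        x = x + sigma * best
        path = (1 - c) * path + np.sqrt(c * (2 - c)) * best
        sigma *= np.exp((np.linalg.norm(path) / e_norm - 1) * c / d)

    # Per the thesis's result, for lambda >= 3 both f(x) and sigma diverge
    # log-linearly on a linear function.
    print("f(x) =", x[0], "   final step-size:", sigma)
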

Books on the topic "Convergence of Markov processes"

1. Kurtz, Thomas G., ed. Markov processes: Characterization and convergence. New York: Wiley, 1986.

2. Roberts, Gareth O. Convergence of slice sampler Markov chains. [Toronto]: University of Toronto, 1997.

3. Baxter, John Robert. Rates of convergence for everywhere-positive Markov chains. [Toronto, Ont.]: University of Toronto, Dept. of Statistics, 1994.

4. Roberts, Gareth O. Quantitative bounds for convergence rates of continuous time Markov processes. [Toronto]: University of Toronto, Dept. of Statistics, 1996.

5. Yuen, Wai Kong. Applications of Cheeger's constant to the convergence rate of Markov chains on R^n. Toronto: University of Toronto, Dept. of Statistics, 1997.

6. Roberts, Gareth O. On convergence rates of Gibbs samplers for uniform distributions. [Toronto]: University of Toronto, 1997.

7. Cowles, Mary Kathryn. Possible biases induced by MCMC convergence diagnostics. Toronto: University of Toronto, Dept. of Statistics, 1997.

8. Cowles, Mary Kathryn. A simulation approach to convergence rates for Markov chain Monte Carlo algorithms. [Toronto]: University of Toronto, Dept. of Statistics, 1996.

9. Wirsching, Günther J. The dynamical system generated by the 3n + 1 function. Berlin: Springer, 1998.

10. Petrone, Sonia. A note on convergence rates of Gibbs sampling for nonparametric mixtures. Toronto: University of Toronto, Dept. of Statistics, 1998.

Book chapters on the topic "Convergence of Markov processes"

1. Zhang, Hanjun, Qixiang Mei, Xiang Lin, and Zhenting Hou. "Convergence Property of Standard Transition Functions." In Markov Processes and Controlled Markov Chains, 57–67. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_4.

2. Altman, Eitan. "Convergence of discounted constrained MDPs." In Constrained Markov Decision Processes, 193–98. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-17.

3. Altman, Eitan. "Convergence as the horizon tends to infinity." In Constrained Markov Decision Processes, 199–203. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-18.

4. Kersting, G., and F. C. Klebaner. "Explosions in Markov Processes and Submartingale Convergence." In Athens Conference on Applied Probability and Time Series Analysis, 127–36. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0749-8_9.

5. Cai, Yuzhi. "How Rates of Convergence for Gibbs Fields Depend on the Interaction and the Kind of Scanning Used." In Markov Processes and Controlled Markov Chains, 489–98. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_31.

6. Bernou, Armand. "On Subexponential Convergence to Equilibrium of Markov Processes." In Lecture Notes in Mathematics, 143–74. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96409-2_5.

7. Pop-Stojanovic, Z. R. "Convergence in Energy and the Sector Condition for Markov Processes." In Seminar on Stochastic Processes, 1984, 165–72. Boston, MA: Birkhäuser Boston, 1986. http://dx.doi.org/10.1007/978-1-4684-6745-1_10.

8. Feng, Jin, and Thomas Kurtz. "Large deviations for Markov processes and nonlinear semigroup convergence." In Mathematical Surveys and Monographs, 79–96. Providence, Rhode Island: American Mathematical Society, 2006. http://dx.doi.org/10.1090/surv/131/05.

9. Negoro, Akira, and Masaaki Tsuchiya. "Convergence and uniqueness theorems for Markov processes associated with Lévy operators." In Lecture Notes in Mathematics, 348–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078492.

10. Zverkina, Galina. "Ergodicity and Polynomial Convergence Rate of Generalized Markov Modulated Poisson Processes." In Communications in Computer and Information Science, 367–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66242-4_29.

Conference papers on the topic "Convergence of Markov processes"

1. Majeed, Sultan Javed, and Marcus Hutter. "On Q-learning Convergence for Non-Markov Decision Processes." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/353.
Abstract: Temporal-difference (TD) learning is an attractive, computationally efficient framework for model-free reinforcement learning. Q-learning is one of the most widely used TD learning techniques, enabling an agent to learn the optimal action-value function, i.e. the Q-value function. Contrary to its widespread use, Q-learning has only been proven to converge on Markov Decision Processes (MDPs) and Q-uniform abstractions of finite-state MDPs. On the other hand, most real-world problems are inherently non-Markovian: the full true state of the environment is not revealed by recent observations. In this paper, we investigate the behavior of Q-learning when applied to non-MDP and non-ergodic domains which may have infinitely many underlying states. We prove that the convergence guarantee of Q-learning can be extended to a class of such non-MDP problems, in particular, to some non-stationary domains. We show that state-uniformity of the optimal Q-value function is a necessary and sufficient condition for Q-learning to converge even in the case of infinitely many internal states.

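For reference, the tabular Q-learning update whose convergence the paper examines looks like this on a toy two-state MDP (hypothetical transition and reward values; note that the classical convergence guarantee also requires an appropriately decaying step size, which this sketch replaces with a small constant one).

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy 2-state, 2-action MDP: P[s, a, s'] transition kernel, R[s, a] rewards.
    P = np.array([[[0.9, 0.1], [0.1, 0.9]],
                  [[0.8, 0.2], [0.3, 0.7]]])
    R = np.array([[1.0, 0.0],
                  [0.5, 2.0]])
    gamma, alpha, eps = 0.9, 0.1, 0.1

    Q = np.zeros((2, 2))
    s = 0
    for _ in range(200_000):
        # eps-greedy behavior policy
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = rng.choice(2, p=P[s, a])
        # the Q-learning (TD) update
        Q[s, a] += alpha * (R[s, a] + gamma * Q[s2].max() - Q[s, a])
        s = s2
    print(Q)

In the MDP case this iteration converges to the optimal Q-value function; the paper's contribution is to characterize when the same update still converges once the environment is not an MDP.
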
2. Amiri, Mohsen, and Sindri Magnússon. "On the Convergence of TD-Learning on Markov Reward Processes with Hidden States." In 2024 European Control Conference (ECC). IEEE, 2024. http://dx.doi.org/10.23919/ecc64448.2024.10591108.

3. Ding, Dongsheng, Kaiqing Zhang, Tamer Basar, and Mihailo R. Jovanovic. "Convergence and optimality of policy gradient primal-dual method for constrained Markov decision processes." In 2022 American Control Conference (ACC). IEEE, 2022. http://dx.doi.org/10.23919/acc53348.2022.9867805.

4. Shi, Chongyang, Yuheng Bu, and Jie Fu. "Information-Theoretic Opacity-Enforcement in Markov Decision Processes." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/749.
Abstract: The paper studies information-theoretic opacity, an information-flow privacy property, in a setting involving two agents: a planning agent who controls a stochastic system and an observer who partially observes the system states. The goal of the observer is to infer some secret, represented by a random variable, from its partial observations, while the goal of the planning agent is to make the secret maximally opaque to the observer while achieving a satisfactory total return. Modeling the stochastic system using a Markov decision process, two classes of opacity properties are considered: last-state opacity, which ensures that the observer is uncertain whether the last state is in a specific set, and initial-state opacity, which ensures that the observer is unsure of the realization of the initial state. As the measure of opacity, we employ the Shannon conditional entropy capturing the information about the secret revealed by the observable. We then develop primal-dual policy gradient methods for opacity-enforcement planning subject to constraints on total returns. We propose novel algorithms to compute the policy gradient of entropy for each observation, leveraging message passing within the hidden Markov models. This gradient computation enables stable and fast convergence. We demonstrate our solution of opacity-enforcement control through a grid-world example.

5. Ferreira Salvador, Paulo J., and Rui J. M. T. Valadas. "Framework based on Markov modulated Poisson processes for modeling traffic with long-range dependence." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434317.

6. Takagi, Hideaki, Muneo Kitajima, Tetsuo Yamamoto, and Yongbing Zhang. "Search process evaluation for a hierarchical menu system by Markov chains." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434312.

7. Liang, Hongbin, Lin X. Cai, Hangguan Shan, Xuemin Shen, and Daiyuan Peng. "Adaptive resource allocation for media services based on semi-Markov decision process." In 2010 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2010. http://dx.doi.org/10.1109/ictc.2010.5674663.

8. Tayeb, Shahab, Miresmaeil Mirnabibaboli, and Shahram Latifi. "Load Balancing in WSNs using a Novel Markov Decision Process Based Routing Algorithm." In 2016 6th International Conference on IT Convergence and Security (ICITCS). IEEE, 2016. http://dx.doi.org/10.1109/icitcs.2016.7740350.

9. Chanron, Vincent, and Kemper Lewis. "A Study of Convergence in Decentralized Design." In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/dac-48782.
Abstract: The decomposition and coordination of decisions in the design of complex engineering systems is a great challenge. Companies who design these systems routinely allocate design responsibility for the various subsystems and components to different people, teams, or even suppliers. The mechanisms behind this network of decentralized design decisions create difficult management and coordination issues, yet developing efficient design processes is paramount, especially under market pressures and customer expectations. Standard techniques for modeling and solving decentralized design problems typically fail to capture the underlying dynamics of the decentralized processes and therefore result in suboptimal solutions. This paper aims to model and understand the mechanisms and dynamics behind a decentralized set of decisions within a complex design process. Using concepts from mathematics and economics, including game theory and the cobweb model, we model a simple decentralized design problem and provide efficient solutions. This approach uses numerical series and linear algebra as tools to determine conditions for convergence of such decentralized design problems. The goal of this paper is to establish the first steps towards understanding the mechanisms of decentralized decision processes, in two major steps: studying the convergence characteristics and finding the final equilibrium solution of a decentralized problem. Illustrations of the developments are provided in the form of two decentralized design problems with different underlying behavior.

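The convergence condition the abstract alludes to can be seen in the smallest possible case: two designers whose best responses are affine in each other's decision, iterated cobweb-style. This is a hypothetical toy, not one of the paper's examples; the iteration converges exactly when the product of the sensitivities has magnitude below 1.

    import numpy as np

    # Designer 1 best-responds with x = a*y + b; designer 2 with y = c*x + d.
    # The alternating iteration contracts iff |a*c| < 1.
    a, b, c, d = 0.6, 1.0, -0.8, 2.0
    x, y = 0.0, 0.0
    for _ in range(60):
        x = a * y + b
        y = c * x + d
    print("iterate:    ", (x, y), "   |a*c| =", abs(a * c))

    # Closed-form equilibrium for comparison: x* = (a*d + b) / (1 - a*c).
    x_star = (a * d + b) / (1 - a * c)
    print("fixed point:", (x_star, c * x_star + d))
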
10. Kuznetsova, Natalia, and Zhanna Pisarenko. "Financial convergence at the world financial market: pension funds and insurance entities prospects: case of China, EU, USA." In Contemporary Issues in Business, Management and Economics Engineering. Vilnius Gediminas Technical University, 2019. http://dx.doi.org/10.3846/cibmee.2019.037.
Abstract: Purpose: to find out whether large international institutional investors, regardless of their country of origin, demonstrate convergence on some basic performance indicators under the influence of external factors. Research methodology: based on testing the set of selected entities for sigma convergence. The paper presents an empirical analysis of financial convergence for autonomous pension funds and insurance corporations of China, the EU, and the USA. Findings: the insurance segment of the world financial market is more converged; however, current pension reforms in many countries are expected to unify the requirements for both market segments and will lead to even greater convergence between pension funds and insurance corporations. Practical implications: the results are another step towards understanding convergence processes, which form an objective trajectory for the development of the modern global financial market. Originality/Value: intersegment convergence of pension and insurance entities is found, which needs further research.

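Sigma convergence, the test the abstract relies on, simply asks whether the cross-sectional dispersion of an indicator shrinks over time; here is a minimal sketch on made-up panel data (all numbers invented for illustration).

    import numpy as np

    # Rows = years, columns = entities; values = some performance indicator.
    panel = np.array([[0.30, 0.55, 0.80],
                      [0.35, 0.52, 0.70],
                      [0.40, 0.50, 0.62],
                      [0.44, 0.50, 0.58]])
    sigma_t = panel.std(axis=1, ddof=1)   # cross-sectional dispersion per year
    print("dispersion by year:", np.round(sigma_t, 3))
    print("sigma convergence? ", bool(sigma_t[-1] < sigma_t[0]))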

Organization reports on the topic "Convergence of Markov processes"

1. Adler, Robert J., Stamatis Cambanis, and Gennady Samorodnitsky. On Stable Markov Processes. Fort Belvoir, VA: Defense Technical Information Center, September 1987. http://dx.doi.org/10.21236/ada192892.

2. Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. A Proof of Convergence of the Markov Chain Simulation Method. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada255456.

3. Abdel-Hameed, M. Markovian Shock Models, Deterioration Processes, Stratified Markov Processes Replacement Policies. Fort Belvoir, VA: Defense Technical Information Center, December 1985. http://dx.doi.org/10.21236/ada174646.

4. Newell, Alan. Markovian Shock Models, Deterioration Processes, Stratified Markov Processes and Replacement Policies. Fort Belvoir, VA: Defense Technical Information Center, May 1986. http://dx.doi.org/10.21236/ada174995.

5. Cinlar, E. Markov Processes Applied to Control, Reliability and Replacement. Fort Belvoir, VA: Defense Technical Information Center, April 1989. http://dx.doi.org/10.21236/ada208634.

6. Rohlicek, J. R., and A. S. Willsky. Structural Decomposition of Multiple Time Scale Markov Processes. Fort Belvoir, VA: Defense Technical Information Center, October 1987. http://dx.doi.org/10.21236/ada189739.

7. Serfozo, Richard F. Poisson Functionals of Markov Processes and Queueing Networks. Fort Belvoir, VA: Defense Technical Information Center, December 1987. http://dx.doi.org/10.21236/ada191217.

8. Draper, Bruce A., and J. Ross Beveridge. Learning to Populate Geospatial Databases via Markov Processes. Fort Belvoir, VA: Defense Technical Information Center, December 1999. http://dx.doi.org/10.21236/ada374536.

9. Sethuraman, Jayaram. Easily Verifiable Conditions for the Convergence of the Markov Chain Monte Carlo Method. Fort Belvoir, VA: Defense Technical Information Center, December 1995. http://dx.doi.org/10.21236/ada308874.