Academic literature on the topic 'Markov approximation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Markov approximation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Markov approximation"

1

Butko, Yana A. "Chernoff approximation of subordinate semigroups." Stochastics and Dynamics 18, no. 03 (May 18, 2018): 1850021. http://dx.doi.org/10.1142/s0219493718500211.

Full text
Abstract:
This note is devoted to the approximation of evolution semigroups generated by some Markov processes, and hence to the approximation of the transition probabilities of these processes. The semigroups considered correspond to processes obtained by subordination (i.e. by a time change) of some original (parent) Markov processes with respect to subordinators, i.e. Lévy processes with a.s. increasing paths (which play the role of the new time). If the semigroup corresponding to a parent Markov process is not known explicitly, then neither the subordinate semigroup nor even the generator of the subordinate semigroup is known explicitly either. In this note, (Chernoff) approximations are constructed for subordinate semigroups (in the case when the subordinators have either known transition probabilities or a known and bounded Lévy measure) under the condition that the parent semigroups are not known but have already been Chernoff-approximated. As has been shown in the recent literature, this condition is fulfilled for several important classes of Markov processes. This fact allows one, in particular, to use the constructed Chernoff approximations of subordinate semigroups to approximate semigroups corresponding to subordination of Feller processes and (Feller-type) diffusions in Euclidean spaces, star graphs, and Riemannian manifolds. Such approximations can be used for direct calculations and for simulation of stochastic processes. The method of Chernoff approximation is based on the Chernoff theorem and can also be interpreted as a construction of Markov chains approximating a given Markov process, and as a numerical path-integration method for solving the corresponding PDE/SDE.
APA, Harvard, Vancouver, ISO, and other styles
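The Chernoff theorem underlying the method described in the abstract contains, as its simplest instance, the Euler approximation exp(tQ) = lim_n (I + (t/n)Q)^n. A minimal numerical sketch for a hypothetical two-state Markov generator (an illustrative toy, not the subordinate setting of the paper):

```python
import numpy as np

# Hypothetical generator (Q-matrix) of a two-state Markov process;
# rows sum to zero, off-diagonal entries are jump rates.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])

def chernoff_euler(Q, t, n):
    """Approximate the semigroup exp(tQ) by the n-th Euler/Chernoff
    iterate (I + (t/n) Q)^n."""
    F = np.eye(Q.shape[0]) + (t / n) * Q
    return np.linalg.matrix_power(F, n)

def expm_taylor(A, terms=60):
    """Matrix exponential via a truncated Taylor series (adequate at this scale)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

t = 1.0
exact = expm_taylor(Q * t)
approx = chernoff_euler(Q, t, n=10000)
err = np.abs(exact - approx).max()   # decays like O(1/n)
```

Each approximant is itself a stochastic matrix (rows sum to one), which is what makes such schemes usable for simulating the approximating Markov chains.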
2

Patseika, Pavel G., Yauheni A. Rouba, and Kanstantin A. Smatrytski. "On one rational integral operator of Fourier – Chebyshev type and approximation of Markov functions." Journal of the Belarusian State University. Mathematics and Informatics, no. 2 (July 30, 2020): 6–27. http://dx.doi.org/10.33581/2520-6508-2020-2-6-27.

Full text
Abstract:
The purpose of this paper is to construct an integral rational Fourier operator based on the system of Chebyshev–Markov rational functions and to study its approximation properties on classes of Markov functions. The introduction surveys the main results of well-known works on approximation of Markov functions; rational approximation of such functions is a classical problem, studied by A. A. Gonchar, T. Ganelius, J.-E. Andersson, A. A. Pekarskii, G. Stahl, and other authors. In the main part, an integral operator of Fourier–Chebyshev type with respect to the rational Chebyshev–Markov functions, which is a rational function of order no higher than n, is introduced, and the approximation of Markov functions is studied. If the measure satisfies the conditions supp μ = [1, a], a > 1, dμ(t) = ϕ(t)dt and ϕ(t) ≍ (t − 1)^α on [1, a], estimates of pointwise and uniform approximation and an asymptotic expression for the majorant of uniform approximation are established. In the case of a fixed number of geometrically distinct poles in the extended complex plane, the values of the optimal parameters providing the fastest decrease of this majorant are found, as well as asymptotically sharp estimates of the best uniform approximation by this method in the case of an even number of geometrically distinct poles of the approximating function. In the final part, asymptotic estimates are presented for the approximation of some elementary functions that can be represented by Markov functions.
APA, Harvard, Vancouver, ISO, and other styles
3

Peköz, Erol A. "Stein's method for geometric approximation." Journal of Applied Probability 33, no. 3 (September 1996): 707–13. http://dx.doi.org/10.2307/3215352.

Full text
Abstract:
The Stein–Chen method for Poisson approximation is adapted to the setting of the geometric distribution. This yields a convenient method for assessing the accuracy of the geometric approximation to the distribution of the number of failures preceding the first success in dependent trials. The results are applied to approximating waiting time distributions for patterns in coin tossing, and to approximating the distribution of the time when a stationary Markov chain first visits a rare set of states. The error bounds obtained are sharper than those obtainable using related Poisson approximations.
APA, Harvard, Vancouver, ISO, and other styles
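To see the kind of approximation the abstract describes, one can simulate the time until a small Markov chain first visits a rare state and compare the hitting-time distribution with a geometric law. The three-state chain below is an illustrative toy example of my own, not one from the paper:

```python
import random

random.seed(0)

# Three-state chain; state 2 is "rare" (entered only from state 1, w.p. 0.05).
P = {0: [(0, 0.70), (1, 0.30)],
     1: [(0, 0.80), (1, 0.15), (2, 0.05)]}

def hitting_time(start=0, rare=2):
    """Steps until the chain first visits the rare state."""
    s, t = start, 0
    while s != rare:
        u, acc = random.random(), 0.0
        for nxt, p in P[s]:
            acc += p
            if u < acc:
                s = nxt
                break
        t += 1
    return t

samples = [hitting_time() for _ in range(20000)]
mean = sum(samples) / len(samples)
p_hat = 1.0 / mean   # matching geometric parameter

# Compare the empirical survival P(T > k) with the geometric (1 - p_hat)^k.
worst = max(abs(sum(1 for x in samples if x > k) / len(samples)
                - (1 - p_hat) ** k)
            for k in (25, 50, 100, 200))
```

For a fast-mixing chain and a rarely visited state, the two survival functions agree to within a few percent, which is the phenomenon whose error Stein's method quantifies.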
4

Peköz, Erol A. "Stein's method for geometric approximation." Journal of Applied Probability 33, no. 03 (September 1996): 707–13. http://dx.doi.org/10.1017/s0021900200100142.

Full text
Abstract:
The Stein–Chen method for Poisson approximation is adapted to the setting of the geometric distribution. This yields a convenient method for assessing the accuracy of the geometric approximation to the distribution of the number of failures preceding the first success in dependent trials. The results are applied to approximating waiting time distributions for patterns in coin tossing, and to approximating the distribution of the time when a stationary Markov chain first visits a rare set of states. The error bounds obtained are sharper than those obtainable using related Poisson approximations.
APA, Harvard, Vancouver, ISO, and other styles
5

Heinzmann, Dominik. "Extinction Times in Multitype Markov Branching Processes." Journal of Applied Probability 46, no. 1 (March 2009): 296–307. http://dx.doi.org/10.1239/jap/1238592131.

Full text
Abstract:
In this paper, a distributional approximation to the time to extinction in a subcritical continuous-time Markov branching process is derived. A limit theorem for this distribution is established and the error in the approximation is quantified. The accuracy of the approximation is illustrated in an epidemiological example. Since Markov branching processes serve as approximations to nonlinear epidemic processes in the initial and final stages, our results can also be used to describe the time to extinction for such processes.
APA, Harvard, Vancouver, ISO, and other styles
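As an illustration of the setting (not of the paper's distributional approximation itself), a linear birth-death branching process with per-individual birth rate b below the death rate d is subcritical, so extinction is certain and its times can be simulated directly; the rates and initial size below are arbitrary choices:

```python
import random

random.seed(2)

# Subcritical linear birth-death branching process: each individual splits
# at rate b and dies at rate d, with b < d, so extinction is certain.
b, d = 0.5, 1.0

def extinction_time(z0=5):
    """Gillespie simulation until the population hits zero; returns that time."""
    z, t = z0, 0.0
    while z > 0:
        rate = (b + d) * z                    # total event rate
        t += random.expovariate(rate)         # time to next birth or death
        if random.random() < b / (b + d):
            z += 1
        else:
            z -= 1
    return t

times = [extinction_time() for _ in range(5000)]
mean_t = sum(times) / len(times)
```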
6

Heinzmann, Dominik. "Extinction Times in Multitype Markov Branching Processes." Journal of Applied Probability 46, no. 01 (March 2009): 296–307. http://dx.doi.org/10.1017/s0021900200005374.

Full text
Abstract:
In this paper, a distributional approximation to the time to extinction in a subcritical continuous-time Markov branching process is derived. A limit theorem for this distribution is established and the error in the approximation is quantified. The accuracy of the approximation is illustrated in an epidemiological example. Since Markov branching processes serve as approximations to nonlinear epidemic processes in the initial and final stages, our results can also be used to describe the time to extinction for such processes.
APA, Harvard, Vancouver, ISO, and other styles
7

Guo, Yuanzhen, Hao Xiong, and Nicholas Ruozzi. "Marginal Inference in Continuous Markov Random Fields Using Mixtures." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7834–41. http://dx.doi.org/10.1609/aaai.v33i01.33017834.

Full text
Abstract:
Exact marginal inference in continuous graphical models is computationally challenging outside of a few special cases. Existing work on approximate inference has focused on approximately computing the messages as part of the loopy belief propagation algorithm either via sampling methods or moment matching relaxations. In this work, we present an alternative family of approximations that, instead of approximating the messages, approximates the beliefs in the continuous Bethe free energy using mixture distributions. We show that these types of approximations can be combined with numerical quadrature to yield algorithms with both theoretical guarantees on the quality of the approximation and significantly better practical performance in a variety of applications that are challenging for current state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
8

Anichkin, S. A., and V. V. Kalashnikov. "Approximation of Markov chains." Journal of Soviet Mathematics 32, no. 1 (January 1986): 1–8. http://dx.doi.org/10.1007/bf01084492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rolski, Tomasz. "Approximation of periodic queues." Advances in Applied Probability 19, no. 3 (September 1987): 691–707. http://dx.doi.org/10.2307/1427413.

Full text
Abstract:
In this paper we demonstrate how some characteristics of queues with periodic Poisson arrivals can be approximated by the corresponding characteristics of queues with Markov-modulated input. These Markov-modulated queues were recently studied by Regterschot and de Smit (1984). The approximation theorems are given in terms of the weak convergence of some characteristics and their uniform integrability. The approximations are applicable to the following characteristics: mean workload, mean workload at a time of day, mean delay, and mean queue size.
APA, Harvard, Vancouver, ISO, and other styles
10

Rolski, Tomasz. "Approximation of periodic queues." Advances in Applied Probability 19, no. 03 (September 1987): 691–707. http://dx.doi.org/10.1017/s0001867800016827.

Full text
Abstract:
In this paper we demonstrate how some characteristics of queues with periodic Poisson arrivals can be approximated by the corresponding characteristics of queues with Markov-modulated input. These Markov-modulated queues were recently studied by Regterschot and de Smit (1984). The approximation theorems are given in terms of the weak convergence of some characteristics and their uniform integrability. The approximations are applicable to the following characteristics: mean workload, mean workload at a time of day, mean delay, and mean queue size.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Markov approximation"

1

Szczegot, Kamil. "Sharp approximation for density dependent Markov chains." May be available electronically, 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pötzelberger, Klaus. "On the Approximation of finite Markov-exchangeable processes by mixtures of Markov Processes." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/526/1/document.pdf.

Full text
Abstract:
We give an upper bound for the norm distance of (0,1)-valued Markov-exchangeable random variables to mixtures of distributions of Markov processes. A Markov-exchangeable random variable has a distribution that depends only on the starting value and the number of transitions 0-0, 0-1, 1-0 and 1-1. We show that if, as the length of the variables increases, the norm distance to mixtures of Markov processes goes to 0, the rate of this convergence may be arbitrarily slow. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
APA, Harvard, Vancouver, ISO, and other styles
3

Perrine, Serge. "Approximation diophantienne (théorie de Markoff)." Metz, 1988. http://docnum.univ-lorraine.fr/public/UPV-M/Theses/1988/Perrine.Serge_1.SMZ8826.pdf.

Full text
Abstract:
Around 1880, A. Markoff described the structure of the set of approximation constants of irrational numbers that exceed 1/3. His theory establishes a link between these constants, the arithmetical minima of quadratic forms, and the integer solutions of the Diophantine equation x^2 + y^2 + z^2 = 3xyz. This dissertation generalizes Markoff's original formalism, introducing the notion of an (a, r, E)-Markoff theory, of which the (2, 0, -1)-theory recovers the original computations. The corresponding Diophantine equation is given, together with an interpretation of the calculations. From this follows the resolution of the Diophantine equation x^2 + y^2 + z^2 = (a + 1)xyz and various tree constructions. In the systematic search for gaps in the Markoff and Perron spectra, the author confirms the results of Schecker and Freiman concerning Hall's ray; he gives examples and confirms some results of Kinney and Pitcher.
APA, Harvard, Vancouver, ISO, and other styles
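The Diophantine equation x^2 + y^2 + z^2 = 3xyz in the abstract has the well-known tree ("arborescent") structure: from any solution, replacing one coordinate by its Vieta conjugate yields another solution. A short sketch enumerating Markov triples from the root (1, 1, 1):

```python
# The positive integer solutions of x^2 + y^2 + z^2 = 3xyz (Markov triples)
# form a tree: from a solution (x, y, z), each of (3yz - x, y, z),
# (x, 3xz - y, z), (x, y, 3xy - z) is again a solution (Vieta jumping).
def markov_triples(bound=1000):
    seen = set()
    stack = [(1, 1, 1)]
    while stack:
        t = tuple(sorted(stack.pop()))
        if t in seen or t[2] > bound:
            continue
        seen.add(t)
        x, y, z = t
        assert x * x + y * y + z * z == 3 * x * y * z  # invariant of the moves
        stack += [(3 * y * z - x, y, z), (x, 3 * x * z - y, z), (x, y, 3 * x * y - z)]
    return sorted(seen)

triples = markov_triples()
markov_numbers = sorted({n for t in triples for n in t})
# The sequence of Markov numbers begins 1, 2, 5, 13, 29, 34, ...
```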
4

Patrascu, Relu-Eugen. "Linear Approximations For Factored Markov Decision Processes." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1171.

Full text
Abstract:
A Markov Decision Process (MDP) is a model employed to describe problems in which a decision must be made at each one of several stages, while receiving feedback from the environment. This type of model has been extensively studied in the operations research community and fundamental algorithms have been developed to solve associated problems. However, these algorithms are quite inefficient for very large problems, leading to a need for alternatives; since MDP problems are provably hard on compressed representations, one becomes content even with algorithms which may perform well at least on specific classes of problems. The class of problems we deal with in this thesis allows succinct representations for the MDP as a dynamic Bayes network, and for its solution as a weighted combination of basis functions. We develop novel algorithms for producing, improving, and calculating the error of approximate solutions for MDPs using a compressed representation. Specifically, we develop an efficient branch-and-bound algorithm for computing the Bellman error of the compact approximate solution regardless of its provenance. We introduce an efficient direct linear programming algorithm which, using incremental constraints generation, achieves run times significantly smaller than existing approximate algorithms without much loss of accuracy. We also show a novel direct linear programming algorithm which, instead of employing constraints generation, transforms the exponentially many constraints into a compact form more amenable for tractable solutions. In spite of its perceived importance, the efficient optimization of the Bellman error towards an approximate MDP solution has eluded current algorithms; to this end we propose a novel branch-and-bound approximate policy iteration algorithm which makes direct use of our branch-and-bound method for computing the Bellman error. 
We further investigate another procedure for obtaining an approximate solution based on the dual of the direct, approximate linear programming formulation for solving MDPs. To address both the loss of accuracy resulting from the direct, approximate linear program solution and the question of where basis functions come from we also develop a principled system able not only to produce the initial set of basis functions, but also able to augment it with new basis functions automatically generated such that the approximation error decreases according to the user's requirements and time limitations.
APA, Harvard, Vancouver, ISO, and other styles
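A minimal sketch of the thesis's central representation, a value function compressed as a weighted combination of basis functions, together with the Bellman error of the compact approximation; the chain MDP and the polynomial basis here are illustrative choices of mine, not the thesis's benchmarks or algorithms:

```python
import numpy as np

# Toy chain MDP: states 0..9, actions move left/right (blocked at the ends),
# reward 1 for occupying state 9, discount 0.9.
S, gamma = 10, 0.9
def step(s, a):                      # deterministic transition, a in {-1, +1}
    return min(max(s + a, 0), S - 1)
def reward(s):
    return 1.0 if s == S - 1 else 0.0

# Exact value iteration (tractable on this tiny problem).
V = np.zeros(S)
for _ in range(500):
    V = np.array([reward(s) + gamma * max(V[step(s, -1)], V[step(s, +1)])
                  for s in range(S)])

# Compress the solution as a weighted combination of basis functions.
Phi = np.array([[1.0, s / S, (s / S) ** 2, (s / S) ** 3] for s in range(S)])
w, *_ = np.linalg.lstsq(Phi, V, rcond=None)
V_hat = Phi @ w

# Bellman error of the compact approximation (the quantity the thesis bounds
# with branch-and-bound on large factored problems).
bellman = np.array([reward(s) + gamma * max(V_hat[step(s, -1)], V_hat[step(s, +1)])
                    for s in range(S)]) - V_hat
```

On large factored MDPs the exact value iteration above is infeasible, which is why the thesis works directly with the weights w and with compact computations of the Bellman error.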
5

Lei, Lei. "Markov Approximations: The Characterization of Undermodeling Errors." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1371.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kuntz, Nussio Juan. "Deterministic approximation schemes with computable errors for the distributions of Markov chains." Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/59103.

Full text
Abstract:
This thesis is a monograph on Markov chains and deterministic approximation schemes that enable the quantitative analysis thereof. We present schemes that yield approximations of the time-varying law of a Markov chain, of its stationary distributions, and of the exit distributions and occupation measures associated with its exit times. In practice, our schemes reduce to solving systems of linear ordinary differential equations, linear programs, and semidefinite programs. We focus on the theoretical aspects of these schemes, proving convergence and providing computable error bounds for most of them. To a lesser extent, we study their practical use, applying them to a variety of examples and discussing the numerical issues that arise in their implementation.
APA, Harvard, Vancouver, ISO, and other styles
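For the time-varying law of a finite-state chain, the system of linear ODEs the abstract refers to is the Kolmogorov forward equation dp/dt = pQ. A minimal sketch with a hypothetical two-state generator, integrated by a standard RK4 step:

```python
import numpy as np

# Forward (Kolmogorov) equation dp/dt = p Q for a two-state chain.
Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])
p = np.array([1.0, 0.0])             # start in state 0

def rk4_step(p, Q, h):
    """One classical Runge-Kutta step for the linear ODE dp/dt = p Q."""
    k1 = p @ Q
    k2 = (p + 0.5 * h * k1) @ Q
    k3 = (p + 0.5 * h * k2) @ Q
    k4 = (p + h * k3) @ Q
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h, T = 0.01, 5.0
for _ in range(int(T / h)):
    p = rk4_step(p, Q, h)

# For large t, p approaches the stationary law (1/4, 3/4) of this chain.
```

Since the rows of Q sum to zero, every RK4 stage conserves total probability, so p remains a distribution throughout the integration.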
7

Tanaka, Takeyuki. "Studies on Application of a Markov Approximation Methods to Structural Reliability Analyses." Kyoto University, 1995. http://hdl.handle.net/2433/160776.

Full text
Abstract:
The full-text data is a PDF converted from image files produced during the National Diet Library's FY2010 digitization of doctoral dissertations.
Kyoto University (京都大学)
Doctor of Engineering (thesis doctorate, awarded under Article 4, Paragraph 2 of the Degree Regulations); degree numbers 乙第8872号, 論工博第2981号
Examining committee: Prof. 宗像 豊哲, Prof. 茨木 俊秀, Prof. 岩井 敏洋
APA, Harvard, Vancouver, ISO, and other styles
8

Vergne, Nicolas. "Chaînes de Markov régulées et approximation de Poisson pour l'analyse de séquences biologiques." Phd thesis, Université d'Evry-Val d'Essonne, 2008. http://tel.archives-ouvertes.fr/tel-00322434.

Full text
Abstract:
Statistical analysis of biological sequences such as nucleotide sequences (DNA and RNA) or amino-acid sequences (proteins) requires the design of different models, each suited to one or more cases of study. Given the dependence between successive nucleotides in DNA sequences, the models generally used are Markov models. The problem with these models is that they assume the sequences are homogeneous, whereas biological sequences are not. A well-known example is GC content: along a single sequence, GC-rich regions alternate with GC-poor regions. To account for the heterogeneity of sequences, other models are used: hidden Markov models, in which the sequence is divided into several homogeneous regions. Applications are numerous, such as the search for coding regions. Since some biological features cannot be captured by these models, we propose new models, drifting Markov models (DMM). Instead of fitting one transition matrix to an entire sequence (the classical homogeneous Markov model) or different transition matrices to different regions of the sequence (hidden Markov models), we allow the transition matrix to drift from the beginning to the end of the sequence. At each position t in a sequence of length n, we have a possibly different transition matrix Π_{t/n}. Our models are thus constrained heterogeneous Markov models. In this thesis we give essentially two ways of constraining the models: polynomial modelling and modelling by splines. For example, for a polynomial model of degree 1 (a linear drift), we choose a starting matrix Π_0 and an ending matrix Π_1 and pass from one to the other according to the position t in the sequence:
Π_{t/n} = (1 − t/n) Π_0 + (t/n) Π_1.
This modelling corresponds to a smooth evolution between two states. For example, it can describe the transition between two regimes of a hidden Markov chain, which might otherwise seem too abrupt. These models can therefore be seen both as an alternative and as a complementary tool to hidden Markov models. Throughout this work we consider polynomial drifts of any degree as well as drifts by polynomial splines, the aim of the latter being to make the models more flexible than polynomials. We estimate our models in several ways, then evaluate the quality of these estimators before using them in applications such as the search for exceptional words. We have implemented the software DRIMM (soon available at http://stat.genopole.cnrs.fr/sg/software/drimm/), dedicated to the estimation of our models. This program gathers all the possibilities offered by our models, such as computing the matrices at each position, the stationary laws, and the probability distributions at each position. The use of this program for the search for exceptional words is offered in auxiliary programs (available on request).
Several extensions of this work can be envisaged. So far we have let the matrix vary only with position, but we could take covariates into account, such as the degree of hydrophobicity, the GC percentage, or an indicator of protein structure (α-helices, β-sheets, ...). We could also consider combining HMMs with continuous variation, where on each region, instead of fitting a Markov model, we would fit a drifting Markov model.
APA, Harvard, Vancouver, ISO, and other styles
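The degree-1 drift Π_{t/n} = (1 − t/n) Π_0 + (t/n) Π_1 from the abstract can be sampled directly. The two-letter alphabet and the matrices below are illustrative assumptions, not values from the thesis:

```python
import random
import numpy as np

random.seed(1)

# Linear drift between two transition matrices over a two-letter alphabet:
#   Pi_{t/n} = (1 - t/n) Pi_0 + (t/n) Pi_1   (matrices chosen for illustration).
Pi0 = np.array([[0.9, 0.1],
                [0.2, 0.8]])   # "sticky" regime at the start
Pi1 = np.array([[0.3, 0.7],
                [0.6, 0.4]])   # "switchy" regime at the end

def sample_drifting_chain(n, Pi0, Pi1, start=0):
    """Sample a sequence under the degree-1 drifting Markov model."""
    seq = [start]
    for t in range(1, n + 1):
        Pi_t = (1 - t / n) * Pi0 + (t / n) * Pi1   # position-dependent matrix
        row = Pi_t[seq[-1]]
        seq.append(0 if random.random() < row[0] else 1)
    return seq

seq = sample_drifting_chain(10000, Pi0, Pi1)
# Early in the sequence the chain behaves like Pi0, late like Pi1, so the
# letter-switching rate increases smoothly along the sequence.
early = sum(seq[i] != seq[i + 1] for i in range(1000)) / 1000
late = sum(seq[i] != seq[i + 1] for i in range(9000, 10000)) / 1000
```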
9

GENDRE, LAURENT. "INEGALITES DE MARKOV SINGULIERES ET APPROXIMATION DES FONCTIONS HOLOMORPHES DE LA CLASSE M." Phd thesis, Université Paul Sabatier - Toulouse III, 2005. http://tel.archives-ouvertes.fr/tel-00010810.

Full text
Abstract:
First, we prove the existence of Markov inequalities on singular algebraic curves in R^n. We give a geometric meaning to the Markov exponent by showing that it is bounded below by the multiplicity of the singularity of the complexified curve in C^n. We construct a Puiseux parameterization at the real singularity of the complexified curve and extend it to an everywhere dense open subset of C, in order to obtain the HCP property of the Green function with pole at infinity in the complexified curve, via the geodesic metric. Second, we prove a Bernstein-type theorem for classes of functions intermediate between holomorphic functions and infinitely differentiable functions on classes of s-H convex compact subsets of C^n. To prove this result, we give an integral representation, on s-H convex compact subsets of C^n, of the functions of A^∞(K) via a suitable kernel, which we approximate by weighted kernels of Henkin-Ramirez type. We propose a new geometric property of the Green function with pole at infinity. Finally, we give some applications and corollaries.
APA, Harvard, Vancouver, ISO, and other styles
10

Gendre, Laurent. "Inégalités de Markov singulières et approximation des fonctions holomorphes de la classe M." Toulouse 3, 2005. http://www.theses.fr/2005TOU30033.

Full text
Abstract:
First, we prove that singular algebraic curves in R^n admit tangential Markov inequalities. We give a geometric meaning to the Markov exponent by showing that it is bounded below by the multiplicity of the singularity of the complexified curve in C^n. We construct a Puiseux parameterization at the real singularity and extend it to an everywhere dense open subset of C; we thereby obtain the HCP property of the Green function with pole at infinity, via the geodesic metric in the complexified curve. Second, we prove a Bernstein-type theorem for the functions of classes intermediate between holomorphic functions and C^∞ functions on subclasses of s-H convex compact subsets of C^n. To prove this result, we give a representative kernel on s-H convex compact sets for functions of A^∞(K), and approximate this kernel by another kernel of Henkin-Ramirez type. We propose a new geometric property of the Green function with pole at infinity and give some examples.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Markov approximation"

1

Markov operators, positive semigroups, and approximation processes. Berlin: Walter de Gruyter GmbH & Co., KG, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bayer, Valentina. Approximation algorithms for solving cost observable Markov decision processes. Corvallis, OR: Oregon State University, Dept. of Computer Science, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Komorowski, Tomasz. Fluctuations in Markov Processes: Time Symmetry and Martingale Approximation. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

American-type options: Stochastic approximation methods. Berlin: Walter de Gruyter GmbH & Co. KG, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zheng, Shuheng. Stochastic Approximation Algorithms in the Estimation of Quasi-Stationary Distribution of Finite and General State Space Markov Chains. [New York, N.Y.?]: [publisher not identified], 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Aldous, D. J. Probability approximations via the Poisson clumping heuristic. New York: Springer-Verlag, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kim, Chang-Jin. In search of a model that an ARCH-type model may be approximating: The Markov model of heteroskedasticity. [Toronto, Ont.]: York University, Dept. of Economics, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Koli︠a︡da, S. F. Dynamics and numbers: A special program, June 1-July 31, 2014, Max Planck Institute for Mathematics, Bonn, Germany : international conference, July 21-25, 2014, Max Planck Institute for Mathematics, Bonn, Germany. Edited by Max-Planck-Institut für Mathematik. Providence, Rhode Island: American Mathematical Society, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Altomare, Francesco, Mirella Cappelletti, Ioan Rasa, and Vita Leonessa. Markov Operators, Positive Semigroups and Approximation Processes. de Gruyter GmbH, Walter, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Altomare, Francesco, Mirella Cappelletti, Ioan Rasa, and Vita Leonessa. Markov Operators, Positive Semigroups and Approximation Processes. de Gruyter GmbH, Walter, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Markov approximation"

1

Wang, Xikui. "Monotonic Approximation of the Gittins Index." In Markov Processes and Controlled Markov Chains, 363–67. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kalashnikov, V. V. "Approximation of some stochastic models." In Semi-Markov Models, 319–35. Boston, MA: Springer US, 1986. http://dx.doi.org/10.1007/978-1-4899-0574-1_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Altman, Eitan. "State truncation and approximation." In Constrained Markov Decision Processes, 205–16. Boca Raton: Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Todorović, Branimir T., Svetozar R. Rančić, and Edin H. Mulalić. "Context Hidden Markov Model for Named Entity Recognition." In Approximation and Computation, 447–60. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/978-1-4419-6594-3_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Korolyuk, V., and A. Swishchuk. "Double Approximation of Random Evolutions." In Semi-Markov Random Evolutions, 277–88. Dordrecht: Springer Netherlands, 1995. http://dx.doi.org/10.1007/978-94-011-1010-5_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Elton, John H., and Zheng Yan. "Approximation of Measures by Markov Processes and Homogeneous Affine Iterated Function Systems." In Constructive Approximation, 69–87. Boston, MA: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4899-6886-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kushner, Harold J., and Paul Dupuis. "The Markov Chain Approximation Method: Introduction." In Numerical Methods for Stochastic Control Problems in Continuous Time, 67–88. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4613-0007-6_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kushner, Harold J., and Paul G. Dupuis. "The Markov Chain Approximation Method: Introduction." In Numerical Methods for Stochastic Control Problems in Continuous Time, 67–88. New York, NY: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4684-0441-8_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Betliński, Paweł. "Markov Blanket Approximation Based on Clustering." In Lecture Notes in Computer Science, 192–202. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21916-0_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Budhiraja, Amarjit, and Paul Dupuis. "Recursive Markov Systems with Small Noise." In Analysis and Approximation of Rare Events, 79–117. New York, NY: Springer US, 2019. http://dx.doi.org/10.1007/978-1-4939-9579-0_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Markov approximation"

1

Chen, Minghua, Soung Chang Liew, Ziyu Shao, and Caihong Kai. "Markov Approximation for Combinatorial Network Optimization." In IEEE INFOCOM 2010 - IEEE Conference on Computer Communications. IEEE, 2010. http://dx.doi.org/10.1109/infcom.2010.5461998.

2

Chen, Siwei, Xiaoying Gan, Xinxin Feng, Xiaohua Tian, Weijie Wu, and Jing Liu. "Markov approximation for Multi-RAT selection." In 2015 IEEE International Conference on Communications (ICC). IEEE, 2015. http://dx.doi.org/10.1109/icc.2015.7248791.

3

Bouchard-Cote, A., N. Ferns, P. Panangaden, and D. Precup. "An approximation algorithm for labelled Markov processes: towards realistic approximation." In Second International Conference on the Quantitative Evaluation of Systems (QEST'05). IEEE, 2005. http://dx.doi.org/10.1109/qest.2005.4.

4

Hirata, Kouji, and Miki Yamamoto. "Data center traffic engineering using Markov approximation." In 2017 International Conference on Information Networking (ICOIN). IEEE, 2017. http://dx.doi.org/10.1109/icoin.2017.7899499.

5

Dehnie, Sintayehu. "Markov Chain Approximation of Rayleigh Fading Channel." In 2007 IEEE International Conference on Signal Processing and Communications. IEEE, 2007. http://dx.doi.org/10.1109/icspc.2007.4728568.

6

Blanchet, Jose, Guillermo Gallego, and Vineet Goyal. "A markov chain approximation to choice modeling." In EC '13: ACM Conference on Electronic Commerce. New York, NY, USA: ACM, 2013. http://dx.doi.org/10.1145/2482540.2482560.

7

Louis, Anand. "Hypergraph Markov Operators, Eigenvalues and Approximation Algorithms." In STOC '15: Symposium on Theory of Computing. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2746539.2746555.

8

Shao, Ziyu, Hao Zhang, Minghua Chen, and Kannan Ramchandran. "Reverse-engineering BitTorrent: A Markov approximation perspective." In IEEE INFOCOM 2012 - IEEE Conference on Computer Communications. IEEE, 2012. http://dx.doi.org/10.1109/infcom.2012.6195746.

9

Blanchet, Jose, Guillermo Gallego, and Vineet Goyal. "A markov chain approximation to choice modeling." In the fourteenth ACM conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2492002.2482560.

10

Wu, Chi-hsin, and Peter C. Doerschuk. "Markov random fields as a priori information for image restoration." In Signal Recovery and Synthesis. Washington, D.C.: Optica Publishing Group, 1995. http://dx.doi.org/10.1364/srs.1995.rwc2.

Abstract:
Markov random fields (MRFs) [1, 2, 3, 4] provide attractive statistical models for multidimensional signals. Unfortunately, optimal Bayesian estimators tend to require large amounts of computation. We present an approximation to a particular Bayesian estimator that requires substantially less computation, together with an example illustrating low-light imaging with unknown blur. See [7] for an alternative approximation, based on approximating the MRF lattice by a system of trees, and for an alternative cost function.

Reports on the topic "Markov approximation"

1

Bhatnagar, Shalabh, Michael C. Fu, Steven I. Marcus, and Shashank Bhatnagar. Randomized Difference Two-Timescale Simultaneous Perturbation Stochastic Approximation Algorithms for Simulation Optimization of Hidden Markov Models. Fort Belvoir, VA: Defense Technical Information Center, May 2000. http://dx.doi.org/10.21236/ada637176.

2

Ma, D.-J., A. M. Makowski, and A. Shwartz. Stochastic Approximations for Finite-State Markov Chains. Fort Belvoir, VA: Defense Technical Information Center, January 1987. http://dx.doi.org/10.21236/ada452264.

3

Morris, M., and T. Jones. Marrow cell kinetics model: Equivalent prompt dose approximations for two special cases. Office of Scientific and Technical Information (OSTI), November 1992. http://dx.doi.org/10.2172/7175933.

4

Morris, M., and T. Jones. Marrow cell kinetics model: Equivalent prompt dose approximations for two special cases. Office of Scientific and Technical Information (OSTI), November 1992. http://dx.doi.org/10.2172/10189151.

5

Rofman, Rafael, Joaquín Baliña, and Emanuel López. Evaluating the Impact of COVID-19 on Pension Systems in Latin America and the Caribbean. The Case of Argentina. Inter-American Development Bank, October 2022. http://dx.doi.org/10.18235/0004508.

Abstract:
This paper presents a first approximation to assessing the impact of the COVID-19 outbreak on Argentina's pension system in both the short and the medium/long term. To this end, we have used the Pension Projection Model of the Inter-American Development Bank (IDB) to design and analyze possible outcomes under alternative scenarios. According to the data analyzed and the projections, the short-run impact of COVID-19 on Argentina's pension system seems to have been limited, particularly given the rapid recovery during the last months of 2021. The long-term impact is harder to predict. Given the macroeconomic effects of the authorities' efforts to protect the system and pensioners during the pandemic on the one hand, and the effects of COVID-19 on the labor market on the other, the overall consequences are yet to be fully understood.