Academic literature on the topic 'Markov chain'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Markov chain.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Markov chain"

1

Verbeken, Brecht, and Marie-Anne Guerry. "Attainability for Markov and Semi-Markov Chains." Mathematics 12, no. 8 (April 19, 2024): 1227. http://dx.doi.org/10.3390/math12081227.

Abstract:
When studying Markov chain models and semi-Markov chain models, it is useful to know which state vectors n, where each component n_i represents the number of entities in the state S_i, can be maintained or attained. This question leads to the definitions of maintainability and attainability for (time-homogeneous) Markov chain models. Recently, the definition of maintainability was extended to the concept of state reunion maintainability (SR-maintainability) for semi-Markov chains. Within the framework of semi-Markov chains, the states are subdivided further into seniority-based states. State reunion maintainability assesses the maintainability of the distribution across states. Following this idea, we introduce the concept of state reunion attainability, which encompasses the potential of a system to attain a specific distribution across the states after uniting the seniority-based states into the underlying states. In this paper, we start by extending the concept of attainability for constant-sized Markov chain models to systems that are subject to growth or contraction. Afterwards, we introduce the concepts of attainability and state reunion attainability for semi-Markov chain models, using SR-maintainability as a starting point. The attainable region, as well as the state reunion attainable region, are described as the convex hull of their respective vertices, and properties of these regions are investigated.
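For readers new to the notion, the constant-size case is easy to experiment with: a structure n is maintainable when it is reproduced by the transition law, i.e. when nP = n. Below is a minimal sketch; the 3-state transition matrix is a hypothetical illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); P[i, j] is the
# fraction of entities moving from state S_i to state S_j in one period.
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])

def is_maintainable(n, P, tol=1e-9):
    # In the constant-size case, a structure n is maintainable iff n P = n.
    return np.allclose(n @ P, n, atol=tol)

# The stationary distribution is the canonical maintainable structure:
# the left eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi = pi / pi.sum()

print(pi, is_maintainable(pi, P))                      # True
print(is_maintainable(np.array([0.5, 0.3, 0.2]), P))   # generally False
```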
2

Arkoubi, Khadija. "MARKOV CHAIN." International Journal of Scientific and Engineering Research 7, no. 3 (March 25, 2016): 706–7. http://dx.doi.org/10.14299/ijser.2016.03.009.

3

Barker, Richard J., and Matthew R. Schofield. "Putting Markov Chains Back into Markov Chain Monte Carlo." Journal of Applied Mathematics and Decision Sciences 2007 (October 30, 2007): 1–13. http://dx.doi.org/10.1155/2007/98086.

Abstract:
Markov chain theory plays an important role in statistical inference both in the formulation of models for data and in the construction of efficient algorithms for inference. The use of Markov chains in modeling data has a long history; however, the use of Markov chain theory in developing algorithms for statistical inference has only recently become popular. Using mark-recapture models as an illustration, we show how Markov chains can be used for developing demographic models and also in developing efficient algorithms for inference. We anticipate that a major area of future research involving mark-recapture data will be the development of hierarchical models that lead to better demographic models that account for all uncertainties in the analysis. A key issue is determining when the chains produced by Markov chain Monte Carlo sampling have converged.
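The convergence issue raised at the end of the abstract can be made concrete with a toy sampler. The sketch below is a random-walk Metropolis sampler for a standard normal target (the target and all settings are illustrative assumptions, not from the paper); it uses the common heuristic of comparing chains started from dispersed points.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized log-density of a standard normal target (assumed example).
    return -0.5 * x * x

def metropolis(n_steps, x0, step=1.0):
    # Random-walk Metropolis: the samples form a Markov chain whose
    # stationary distribution is the target.
    chain = np.empty(n_steps)
    x = x0
    for t in range(n_steps):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop  # accept; otherwise stay put
        chain[t] = x
    return chain

# Crude convergence check: chains started far apart should agree on
# summary statistics once they have reached the stationary distribution.
a = metropolis(20000, x0=-10.0)[10000:]
b = metropolis(20000, x0=+10.0)[10000:]
print(a.mean(), b.mean(), a.std(), b.std())
```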
4

Valenzuela, Mississippi. "Markov chains and applications." Selecciones Matemáticas 9, no. 01 (June 30, 2022): 53–78. http://dx.doi.org/10.17268/sel.mat.2022.01.05.

Abstract:
This work has three main purposes: first, to study Markov chains; second, to show that Markov chains have diverse applications; and finally, to model a process that behaves in this way. Throughout this work we describe what a Markov chain is, what these processes are used for, and how these chains are classified. We also analyze the primary elements that make up a Markov chain, among other topics.
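As a companion to this introductory treatment, here is a minimal illustration of the primary elements of a Markov chain: a state space, a transition matrix, and the one-step evolution of a distribution. The two-state "weather" chain is a stock textbook example, not taken from the article.

```python
import numpy as np

states = ["sunny", "rainy"]
# Transition matrix: P[i, j] = probability of moving from state i to state j.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The Markov property: tomorrow's distribution depends only on today's.
dist = np.array([1.0, 0.0])          # start sunny with certainty
for day in range(10):
    dist = dist @ P                  # one-step evolution of the distribution
print(dict(zip(states, dist)))       # approaches the stationary distribution
```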
5

Guyon, X., and C. Hardouin. "Markov Chain Markov Field Dynamics: Models and Statistics." Statistics 35, no. 4 (January 2001): 593–627. http://dx.doi.org/10.1080/02331880108802756.

6

Xiang, Xuyan, Xiao Zhang, and Xiaoyun Mo. "Statistical Identification of Markov Chain on Trees." Mathematical Problems in Engineering 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/2036248.

Abstract:
The theoretical study of continuous-time homogeneous Markov chains is usually based on the natural assumption of a known transition rate matrix (TRM). However, the TRM of a Markov chain in realistic systems might be unknown and might even need to be identified from partially observable data. Thus, the issue of how to identify the TRM of the underlying Markov chain from partially observable information is of great significance in applications. This is what we call the statistical identification of a Markov chain. The Markov chain inversion approach has been derived for basic Markov chains by partial observation at a few states. In the current letter, a more extensive class of Markov chains on trees is investigated. First, a more operable type of derivative constraint is developed. Then, it is shown that all Markov chains on trees can be identified only by such derivative constraints of univariate distributions of sojourn time and/or hitting time at a few states. A numerical example is included to demonstrate the correctness of the proposed algorithms.
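The paper's identification method works from partial observations at a few states; reproducing it is beyond a short example. As background only, the sketch below shows the fully observed baseline it improves on: simulating a continuous-time chain from a known TRM Q and recovering Q by maximum likelihood from the observed sojourn times and jumps. The matrix Q and all names are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known transition rate matrix (TRM) Q of a 3-state CTMC:
# off-diagonal rates >= 0, each row sums to 0.
Q = np.array([[-1.0, 0.6, 0.4],
              [0.3, -0.8, 0.5],
              [0.2, 0.7, -0.9]])

def simulate(Q, T, x0=0):
    # Gillespie-style simulation: exponential sojourns; jump probabilities
    # proportional to the off-diagonal rates of the current row.
    t, x, path = 0.0, x0, []
    while t < T:
        rate = -Q[x, x]
        hold = rng.exponential(1.0 / rate)
        probs = np.clip(Q[x], 0, None)
        probs[x] = 0.0
        probs /= probs.sum()
        nxt = rng.choice(len(Q), p=probs)
        path.append((x, min(hold, T - t)))   # (state, sojourn time)
        t += hold
        x = nxt
    return path

# MLE from a fully observed path: rate_ij = (# jumps i->j) / (time spent in i).
path = simulate(Q, T=5000.0)
n = len(Q)
time_in = np.zeros(n)
jumps = np.zeros((n, n))
for (x, h), (y, _) in zip(path, path[1:]):
    time_in[x] += h
    jumps[x, y] += 1
Q_hat = jumps / time_in[:, None]
np.fill_diagonal(Q_hat, -Q_hat.sum(axis=1))
print(np.round(Q_hat, 2))   # should be close to Q for a long path
```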
7

Takemura, Akimichi, and Hisayuki Hara. "Markov chain Monte Carlo test of toric homogeneous Markov chains." Statistical Methodology 9, no. 3 (May 2012): 392–406. http://dx.doi.org/10.1016/j.stamet.2011.10.004.

8

Qi-feng, Yao, Dong Yun, and Wang Zhong-Zhi. "An Entropy Rate Theorem for a Hidden Inhomogeneous Markov Chain." Open Statistics & Probability Journal 8, no. 1 (September 30, 2017): 19–26. http://dx.doi.org/10.2174/1876527001708010019.

Abstract:
Objective: The main object of our study is to extend some entropy rate theorems to a Hidden Inhomogeneous Markov Chain (HIMC) and establish an entropy rate theorem under some mild conditions. Introduction: A hidden inhomogeneous Markov chain contains two different stochastic processes; one is an inhomogeneous Markov chain whose states are hidden and the other is a stochastic process whose states are observable. Materials and Methods: The proof of the theorem requires some ergodic properties of an inhomogeneous Markov chain, and the flexible application of norm properties and boundedness conditions for series is also indispensable. Results: This paper presents an entropy rate theorem for an HIMC under some mild conditions and two corollaries for a hidden Markov chain and an inhomogeneous Markov chain. Conclusion: Under some mild conditions, the entropy rates of an inhomogeneous Markov chain, a hidden Markov chain and an HIMC are similar and easy to calculate.
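For the homogeneous, ergodic special case the entropy rate has the closed form H = -Σ_i π_i Σ_j P_ij log P_ij, which the inhomogeneous and hidden settings of the paper generalize. A minimal sketch, with an arbitrary two-state chain assumed for illustration:

```python
import numpy as np

# Arbitrary two-state transition matrix (illustrative assumption).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi /= pi.sum()

# Entropy rate in nats: H = -sum_i pi_i sum_j P_ij log P_ij.
logP = np.log(P, where=P > 0, out=np.zeros_like(P))   # treat 0 log 0 as 0
H = -np.sum(pi[:, None] * P * logP)
print(H)
```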
9

Masuyama, Hiroyuki. "Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions." Advances in Applied Probability 47, no. 1 (March 2015): 83–105. http://dx.doi.org/10.1239/aap/1427814582.

Abstract:
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally, we discuss the application of our results to GI/G/1-type Markov chains.
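A small experiment conveys the flavor of the result: truncate a positive-recurrent birth-death chain (which satisfies a geometric drift condition), add the mass that would leave the truncation to the last column, and watch the total variation error shrink geometrically in the truncation level. The chain and augmentation below are illustrative assumptions, not the GI/G/1-type setting of the paper.

```python
import numpy as np

p = 0.3              # up-step probability; p < 1/2 keeps the chain stable
rho = p / (1 - p)

def truncated_P(N):
    # First N+1 states of the infinite chain; overflow probability is
    # folded into the last column ("last-column augmentation").
    P = np.zeros((N + 1, N + 1))
    P[0, 0], P[0, 1] = 1 - p, p
    for i in range(1, N + 1):
        P[i, i - 1] = 1 - p
        if i < N:
            P[i, i + 1] = p
        else:
            P[N, N] = p   # augmented mass kept in state N
    return P

def stationary(P):
    eigvals, eigvecs = np.linalg.eig(P.T)
    v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
    return v / v.sum()

# The infinite chain's stationary law is geometric: pi_i = (1 - rho) rho^i.
for N in (5, 10, 20):
    pi_N = stationary(truncated_P(N))
    pi = (1 - rho) * rho ** np.arange(N + 1)
    tv = 0.5 * (np.abs(pi_N - pi).sum() + rho ** (N + 1))  # + tail mass beyond N
    print(N, tv)   # total variation error decays geometrically in N
```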

Dissertations / Theses on the topic "Markov chain"

1

Bakra, Eleni. "Aspects of population Markov chain Monte Carlo and reversible jump Markov chain Monte Carlo." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1247/.

2

Holenstein, Roman. "Particle Markov chain Monte Carlo." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7319.

Abstract:
Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods have emerged as the two main tools to sample from high-dimensional probability distributions. Although asymptotic convergence of MCMC algorithms is ensured under weak assumptions, the performance of these algorithms is unreliable when the proposal distributions used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. In this thesis we propose a new Monte Carlo framework in which we build efficient high-dimensional proposal distributions using SMC methods. This allows us to design effective MCMC algorithms in complex scenarios where standard strategies fail. We demonstrate these algorithms on a number of example problems, including simulated tempering, a nonlinear non-Gaussian state-space model, and protein folding.
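The key building block of such a framework is an SMC algorithm whose likelihood estimate can be plugged into an MCMC acceptance ratio. A minimal sketch of a bootstrap particle filter for a toy linear-Gaussian state-space model follows; the model, parameters and function names are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy state-space model: x_t = a x_{t-1} + noise, y_t = x_t + noise.
a, sx, sy, T = 0.9, 1.0, 0.5, 50
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + sx * rng.normal()
y = x + sy * rng.normal(size=T)

def bootstrap_loglik(a, n_particles=500):
    # Bootstrap particle filter: returns an estimate of log p(y | a),
    # the quantity a particle MCMC sampler plugs into its accept test.
    parts = rng.normal(size=n_particles)   # rough initialization (sketch)
    ll = 0.0
    for t in range(T):
        parts = a * parts + sx * rng.normal(size=n_particles)   # propagate
        logw = -0.5 * ((y[t] - parts) / sy) ** 2                # weight
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean()) - 0.5 * np.log(2 * np.pi * sy ** 2)
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        parts = parts[idx]                                      # resample
    return ll

# The true parameter should typically score higher than a wrong one.
print(bootstrap_loglik(0.9), bootstrap_loglik(0.1))
```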
3

Byrd, Jonathan Michael Robert. "Parallel Markov Chain Monte Carlo." Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3634/.

Abstract:
The increasing availability of multi-core and multi-processor architectures provides new opportunities for improving the performance of many computer simulations. Markov Chain Monte Carlo (MCMC) simulations are widely used for approximate counting problems, Bayesian inference and as a means of estimating very high-dimensional integrals. As such, MCMC has found a wide variety of applications in fields including computational biology and physics, financial econometrics, machine learning and image processing. This thesis presents a number of new methods for reducing the runtime of Markov Chain Monte Carlo simulations by using SMP machines and/or clusters. Two of the methods speculatively perform iterations in parallel, reducing the runtime of MCMC programs whilst producing statistically identical results to conventional sequential implementations. The other methods apply only to problem domains that can be presented as an image, and involve using various means of dividing the image into subimages that can be processed with some degree of independence. Where possible the thesis includes a theoretical analysis of the reduction in runtime that may be achieved using our technique under perfect conditions, and in all cases the methods are tested and compared on a selection of multi-core and multi-processor architectures. A framework is provided to allow easy construction of MCMC applications that implement these parallelisation methods.
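The speculative idea can be sketched in a few lines: since a rejected Metropolis proposal leaves the chain where it is, the target evaluations for both possible next proposals can be computed concurrently. The toy below is a "prefetching" Metropolis variant using a thread pool; all names and settings are assumptions rather than the thesis's implementation. It produces the same distribution as a sequential sampler, and the payoff appears only when the target evaluation is genuinely expensive.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(3)

def log_target(x):
    # Stand-in for an expensive posterior evaluation.
    return -0.5 * x * x

def prefetch_metropolis(n_steps, x0, pool, step=1.0):
    # While the outcome of the current accept test is unknown, evaluate
    # the targets for both possible next proposals in parallel, so each
    # round of parallel work advances the chain by two steps.
    x, lx = x0, log_target(x0)
    chain = []
    while len(chain) < n_steps:
        y1 = x + step * rng.normal()
        y2_if_rej = x + step * rng.normal()    # next proposal if y1 rejected
        y2_if_acc = y1 + step * rng.normal()   # next proposal if y1 accepted
        f1, f2r, f2a = pool.map(log_target, (y1, y2_if_rej, y2_if_acc))
        if np.log(rng.uniform()) < f1 - lx:    # step 1
            x, lx = y1, f1
            y2, f2 = y2_if_acc, f2a
        else:
            y2, f2 = y2_if_rej, f2r
        chain.append(x)
        if np.log(rng.uniform()) < f2 - lx:    # step 2 reuses speculative work
            x, lx = y2, f2
        chain.append(x)
    return np.array(chain[:n_steps])

with ThreadPoolExecutor(max_workers=3) as pool:
    samples = prefetch_metropolis(10000, x0=0.0, pool=pool)
print(samples.mean(), samples.std())   # ~0 and ~1 for the standard normal target
```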
4

Yildirak, Sahap Kasirga. "The Identification of a Bivariate Markov Chain Market Model." PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1257898/index.pdf.

Abstract:
This work is an extension of the classical Cox-Ross-Rubinstein discrete-time market model in which only one risky asset is considered. We introduce another risky asset into the model. Moreover, the random structure of the asset price sequence is generated by a bivariate finite-state Markov chain. The interest rate then varies over time as a function of the generating sequences. We discuss how the model can be adapted to real data. Finally, we illustrate sample implementations to give a better idea of the use of the model.
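To convey the general construction, a hedged sketch: two risky assets whose joint up/down factors are driven by a bivariate (four-state) Markov chain. The states, factors and transition matrix are invented for illustration and are not the thesis's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Joint move states for (asset1, asset2): (up,up), (up,down), (down,up), (down,down).
factors = [(1.02, 1.03), (1.02, 0.98), (0.97, 1.03), (0.97, 0.98)]
# Markov transition matrix over the four joint states (rows sum to 1).
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.3, 0.3, 0.2, 0.2],
              [0.2, 0.2, 0.3, 0.3],
              [0.1, 0.2, 0.2, 0.5]])

def simulate_prices(T, s0=(100.0, 100.0), state=0):
    s1, s2 = s0
    path = [(s1, s2)]
    for _ in range(T):
        state = rng.choice(4, p=P[state])   # next joint state of the chain
        u1, u2 = factors[state]
        s1, s2 = s1 * u1, s2 * u2           # both prices move together
        path.append((s1, s2))
    return np.array(path)

print(simulate_prices(5))
```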
5

Martin, Russell Andrew. "Paths, sampling, and markov chain decomposition." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29383.

6

Estandia, Gonzalez Luna Antonio. "Stable approximations for Markov-chain filters." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38303.

7

Zhang, Yichuan. "Scalable geometric Markov chain Monte Carlo." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20978.

Abstract:
Markov chain Monte Carlo (MCMC) is one of the most popular statistical inference methods in machine learning. Recent work shows that a significant improvement of the statistical efficiency of MCMC on complex distributions can be achieved by exploiting geometric properties of the target distribution. This is known as geometric MCMC. However, many such methods, like Riemannian manifold Hamiltonian Monte Carlo (RMHMC), are computationally challenging to scale up to high-dimensional distributions. The primary goal of this thesis is to develop novel geometric MCMC methods applicable to large-scale problems. To overcome the computational bottleneck of computing second-order derivatives in geometric MCMC, I propose an adaptive MCMC algorithm using an efficient approximation based on limited-memory BFGS. I also propose a simplified variant of RMHMC that is able to work effectively on a larger scale than the previous methods. Finally, I address an important limitation of geometric MCMC, namely that it is only available for continuous distributions. I investigate a relaxation of discrete variables to continuous variables that allows us to apply the geometric methods. This is a new direction of MCMC research which is of potential interest to many applications. The effectiveness of the proposed methods is demonstrated on a wide range of popular models, including generalised linear models, conditional random fields (CRFs), hierarchical models and Boltzmann machines.
8

Fang, Youhan. "Efficient Markov Chain Monte Carlo Methods." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10809188.

Abstract:

Generating random samples from a prescribed distribution is one of the most important and challenging problems in machine learning, Bayesian statistics, and the simulation of materials. Markov Chain Monte Carlo (MCMC) methods are usually the required tool for this task if the desired distribution is known only up to a multiplicative constant. Samples produced by an MCMC method are real values in N-dimensional space, called the configuration space. The distribution of such samples converges to the target distribution in the limit. However, existing MCMC methods still face many challenges that are not well resolved. Difficulties in sampling with MCMC methods include, but are not limited to, dealing with high-dimensional and multimodal problems, high computation cost due to extremely large datasets in Bayesian machine learning models, and the lack of reliable indicators for detecting convergence and measuring the accuracy of sampling. This dissertation focuses on new theory and methodology for efficient MCMC methods that aim to overcome the aforementioned difficulties.

One contribution of this dissertation is generalizations of hybrid Monte Carlo (HMC). An HMC method combines a discretized dynamical system in an extended space, called the state space, with an acceptance test based on the Metropolis criterion. The discretized dynamical system used in HMC is volume preserving, meaning that in the state space the absolute Jacobian of a map from one point on the trajectory to another is 1. Volume preservation is, however, not necessary for the general purpose of sampling. A general theory allowing the use of non-volume-preserving dynamics for proposing MCMC moves is proposed. Examples, including isokinetic dynamics and variable-mass Hamiltonian dynamics with an explicit integrator, are all designed with fewer restrictions based on the general theory. Experiments show improved efficiency for sampling high-dimensional multimodal problems. A second contribution is stochastic gradient samplers with reduced bias. An in-depth analysis of the noise introduced by the stochastic gradient is provided. Two methods to reduce the bias in the distribution of samples are proposed. One is to correct the dynamics by using an estimated noise based on subsampled data, and the other is to introduce additional variables and corresponding dynamics to adaptively reduce the bias. Extensive experiments show that both methods outperform existing methods. A third contribution is quasi-reliable estimates of effective sample size. Proposed is a more reliable indicator, the longest integrated autocorrelation time over all functions in the state space, for detecting the convergence and measuring the accuracy of MCMC methods. The superiority of the new indicator is supported by experiments on both synthetic and real problems.

Minor contributions include a general framework for changing variables and a numerical integrator for Hamiltonian dynamics with fourth-order accuracy. The idea of changing variables is to transform the potential energy function from a function of the original variable to a function of the new variable, such that undesired properties can be removed. Two examples are provided, and preliminary experimental results are obtained in support of this idea. The fourth-order integrator is constructed by combining the idea of the simplified Takahashi-Imada method with a two-stage Hessian-based integrator. The proposed method, called the two-stage simplified Takahashi-Imada method, shows outstanding performance over existing methods in high-dimensional sampling problems.
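The baseline these generalizations start from is standard HMC: a volume-preserving leapfrog discretization of Hamiltonian dynamics plus a Metropolis test on the total-energy error. A minimal sketch for a standard normal target follows; the step size and trajectory length are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def U(q):      # potential energy = -log target (standard normal, assumed)
    return 0.5 * q @ q

def gradU(q):
    return q

def hmc_step(q, eps=0.1, L=20):
    # One HMC transition: the leapfrog map is volume preserving
    # (|Jacobian| = 1), so the Metropolis test only needs the
    # total-energy difference dH.
    p = rng.normal(size=q.shape)
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * gradU(q_new)       # half step in momentum
    for _ in range(L - 1):
        q_new += eps * p_new                # full step in position
        p_new -= eps * gradU(q_new)         # full step in momentum
    q_new += eps * p_new
    p_new -= 0.5 * eps * gradU(q_new)       # final half step in momentum
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q

q = np.zeros(5)
samples = []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q.copy())
samples = np.array(samples)
print(samples.mean(axis=0), samples.std(axis=0))   # ~0 and ~1 per coordinate
```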

9

Chotard, Alexandre. "Markov chain Analysis of Evolution Strategies." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.

Abstract:
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e. information on the function to be optimized is limited to the values it associates with points. In particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of Markov chains underlying these algorithms. The proofs of log-linear convergence and of divergence obtained in this thesis in the context of a linear function, with or without constraint, are essential components of the proofs of convergence of ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented. First, general mathematical tools that can be applied to a wider range of problems are developed. These tools make it easy to prove specific Markov chain properties (irreducibility, aperiodicity and the fact that compact sets are small sets for the Markov chain) for the Markov chains studied. Obtaining these properties without these tools is an ad hoc, tedious and technical process that can be very difficult. Second, different ESs are analyzed on different problems. We study a (1,λ)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step size; we also study the variation of the logarithm of the step size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space. Then we study an ES with constant step size and an ES with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle infeasible solutions. We prove that with constant step size the algorithm diverges, while with cumulative step-size adaptation, depending on parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate how the convergence or divergence rate of the algorithm depends on the parameters of the problem and of the ES. Finally we study an ES with a sampling distribution that can be non-Gaussian and with constant step size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, and that this implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint. Finally, these results are summed up, discussed, and perspectives for future work are explored.
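The central object of study can be run directly: a (1,λ)-ES with cumulative step-size adaptation on the linear function f(x) = x_1. Per the thesis, for λ > 2 the step size should diverge log-linearly, i.e. log σ grows roughly linearly, which the sketch below lets you observe. The CSA constants are common textbook choices, assumed here rather than taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)

def f(x):          # linear objective: minimizing it means x_1 -> -infinity,
    return x[0]    # so a successful ES must diverge

n, lam = 10, 6                                      # dimension, offspring count
c, d = 1.0 / np.sqrt(n), 1.0 + np.sqrt(n)           # common CSA constants
chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n * n))  # approx E||N(0,I)||

x, sigma, path = np.zeros(n), 1.0, np.zeros(n)
for t in range(2000):
    zs = rng.normal(size=(lam, n))                  # lambda candidate steps
    best = min(range(lam), key=lambda i: f(x + sigma * zs[i]))
    z = zs[best]
    x = x + sigma * z                               # (1,lambda) selection
    path = (1 - c) * path + np.sqrt(c * (2 - c)) * z   # cumulative path
    sigma *= np.exp((c / d) * (np.linalg.norm(path) / chi_n - 1))
    if t % 500 == 0:
        print(t, f(x), np.log(sigma))   # log(sigma) grows roughly linearly
```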
10

Neuhoff, Daniel. "Reversible Jump Markov Chain Monte Carlo." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17461.

Abstract:
The four studies of this thesis are concerned predominantly with the dynamics of macroeconomic time series, both in the context of a simple DSGE model and from a pure time series modeling perspective.

Books on the topic "Markov chain"

1

Liang, Faming, Chuanhai Liu, and Raymond J. Carroll. Advanced Markov Chain Monte Carlo Methods. Chichester, UK: John Wiley & Sons, Ltd, 2010. http://dx.doi.org/10.1002/9780470669723.

2

Gilks, W. R., S. Richardson, and D. J. Spiegelhalter, eds. Markov chain Monte Carlo in practice. London: Chapman & Hall, 1996.

3

Gilks, W. R., S. Richardson, and D. J. Spiegelhalter, eds. Markov chain Monte Carlo in practice. Boca Raton, Fla.: Chapman & Hall, 1998.

4

Saad, Y., William J. Stewart, and Research Institute for Advanced Computer Science (U.S.), eds. Numerical methods in Markov chain modeling. [Moffett Field, Calif.]: Research Institute for Advanced Computer Science, NASA Ames Research Center, 1989.

5

Banisch, Sven. Markov Chain Aggregation for Agent-Based Models. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24877-6.

6

Kendall, W. S., F. Liang, and J. S. Wang, eds. Markov chain Monte Carlo: Innovations and applications. Singapore: World Scientific, 2005.

7

Joseph, Anosh. Markov Chain Monte Carlo Methods in Quantum Field Theories. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46044-0.

8

Gamerman, Dani. Markov chain Monte Carlo: Stochastic simulation for Bayesian inference. London: Chapman & Hall, 1997.

9

Ramlau-Hansen, Henrik. Hattendorff's theorem: A Markov chain and counting process approach. Copenhagen: Laboratory of Actuarial Mathematics, University of Copenhagen, 1987.

10

Lopes, Hedibert Freitas, ed. Markov chain Monte Carlo: Stochastic simulation for Bayesian inference. 2nd ed. Boca Raton: Taylor & Francis, 2006.


Book chapters on the topic "Markov chain"

1

Camacho Olmedo, M. T., and J. F. Mas. "Markov Chain." In Geomatic Approaches for Modeling Land Change Scenarios, 441–45. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60801-3_25.

2

Fürnkranz, Johannes, Philip K. Chan, Susan Craw, Claude Sammut, William Uther, Adwait Ratnaparkhi, Xin Jin, et al. "Markov Chain." In Encyclopedia of Machine Learning, 639. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_510.

3

Hu, Fuyan. "Markov Chain." In Encyclopedia of Systems Biology, 1175. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_443.

4

Weik, Martin H. "Markov chain." In Computer Science and Communications Dictionary, 977. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_11086.

5

Zhang, Hao. "Markov Chain." In Models and Methods for Management Science, 383–403. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1614-4_11.

6

Bandyopadhyay, Susmita. "Markov Chain." In Decision Support System, 109–24. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003307655-8.

7

Rodrigues, Eliane Regina, and Jorge Alberto Achcar. "Markov Chain Models." In Applications of Discrete-time Markov Chains and Poisson Processes to Air Pollution Modeling and Studies, 11–23. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-4645-3_2.

8

Møller, Jan Kloppenborg, Marcel Schweiker, Rune Korsholm Andersen, Burak Gunay, Selin Yilmaz, Verena Marie Barthelmes, and Henrik Madsen. "Markov chain models." In Statistical Modelling of Occupant Behaviour, 253–88. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003340812-9.

9

Stewart, William J. "Markov Chain Equations." In Encyclopedia of Operations Research and Management Science, 921–25. Boston, MA: Springer US, 2013. http://dx.doi.org/10.1007/978-1-4419-1153-7_578.

10

Chaudhari, Harshal A., Michael Mathioudakis, and Evimaria Terzi. "Markov Chain Monitoring." In Proceedings of the 2018 SIAM International Conference on Data Mining, 441–49. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2018. http://dx.doi.org/10.1137/1.9781611975321.50.


Conference papers on the topic "Markov chain"

1

Zhang, Yu, and Mitchell Bucklew. "Max Markov Chain." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/639.

Abstract:
In this paper, we introduce Max Markov Chain (MMC), a novel model for sequential data with sparse correlations among the state variables. It may also be viewed as a special class of approximate models for High-order Markov Chains (HMCs). MMC is desirable for domains where the sparse correlations are long-term and vary in their temporal stretches. Although generally intractable, parameter optimization for MMC can be solved analytically. However, based on this result, we derive an approximate solution that is highly efficient empirically. When compared with HMC and approximate HMC models, MMC combines better sample efficiency, model parsimony, and an outstanding computational advantage. Such a quality allows MMC to scale to large domains where the competing models would struggle to perform. We compare MMC with several baselines with synthetic and real-world datasets to demonstrate MMC as a valuable alternative for stochastic modeling.
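The paper's MMC model itself is not reproduced here. As background for the model class it approximates, the sketch below fits an ordinary order-k (high-order) Markov chain by transition counting; the exponential growth of contexts in k is exactly the cost that approximate models such as MMC aim to avoid. Function and variable names are illustrative.

```python
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(7)

def fit_order_k(seq, k):
    # Maximum-likelihood estimate of an order-k Markov chain: one
    # conditional distribution per length-k context. The number of
    # contexts grows like (number of states)**k.
    counts = defaultdict(Counter)
    for t in range(k, len(seq)):
        counts[tuple(seq[t - k:t])][seq[t]] += 1
    return {ctx: {s: c / sum(cnt.values()) for s, c in cnt.items()}
            for ctx, cnt in counts.items()}

seq = rng.integers(0, 3, size=10000).tolist()   # synthetic 3-state sequence
model = fit_order_k(seq, k=2)
print(model[(0, 1)])   # estimated P(next state | previous two states)
```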
2

Hara, Hisayuki, Satoshi Aoki, and Akimichi Takemura. "Running Markov Chain without Markov Basis." In Harmony of Gröbner Bases and the Modern Industrial Society - The Second CREST-CSBM International Conference. Singapore: World Scientific Publishing Co. Pte. Ltd., 2012. http://dx.doi.org/10.1142/9789814383462_0005.

3

Awiszus, Maren, and Bodo Rosenhahn. "Markov Chain Neural Networks." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2018. http://dx.doi.org/10.1109/cvprw.2018.00293.

4

Asesh, Aishwarya. "Markov Chain Sequence Modeling." In 2022 3rd International Informatics and Software Engineering Conference (IISEC). IEEE, 2022. http://dx.doi.org/10.1109/iisec56263.2022.9998227.

5

Luo, Jian-qiang, and Yan-ping Zhao. "Research on the Supply Chain Product Market Forecasting Based on Markov Chain." In 2010 International Conference on E-Product E-Service and E-Entertainment (ICEEE 2010). IEEE, 2010. http://dx.doi.org/10.1109/iceee.2010.5660723.

6

Varshosaz, Mahsa, and Ramtin Khosravi. "Discrete time Markov chain families." In the 17th International Software Product Line Conference co-located workshops. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2499777.2500725.

7

McClymont, Kent, and Edward C. Keedwell. "Markov chain hyper-heuristic (MCHH)." In the 13th annual conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2001576.2001845.

8

Waller, Ephraim Nii Kpakpo, Pamela Delali Adablah, and Quist-Aphetsi Kester. "Markov Chain: Forecasting Economic Variables." In 2019 International Conference on Computing, Computational Modelling and Applications (ICCMA). IEEE, 2019. http://dx.doi.org/10.1109/iccma.2019.00026.

9

Cogill, Randy, and Erik Vargo. "The Poisson equation for reversible Markov chains: Analysis and application to Markov chain samplers." In 2012 IEEE 51st Annual Conference on Decision and Control (CDC). IEEE, 2012. http://dx.doi.org/10.1109/cdc.2012.6425978.

10

García, Jesús E., S. L. M. Londoño, and Thainá Soares. "Optimal model for a Markov chain with Markov covariates." In INTERNATIONAL CONFERENCE OF NUMERICAL ANALYSIS AND APPLIED MATHEMATICS ICNAAM 2019. AIP Publishing, 2020. http://dx.doi.org/10.1063/5.0026429.


Reports on the topic "Markov chain"

1

Gelfand, Alan E., and Sujit K. Sahu. On Markov Chain Monte Carlo Acceleration. Fort Belvoir, VA: Defense Technical Information Center, April 1994. http://dx.doi.org/10.21236/ada279393.

2

Krebs, William B. Markov Chain Simulations of Binary Matrices. Fort Belvoir, VA: Defense Technical Information Center, January 1992. http://dx.doi.org/10.21236/ada249265.

3

Kocherlakota, Narayana. Sluggish Inflation Expectations: A Markov Chain Analysis. Cambridge, MA: National Bureau of Economic Research, February 2016. http://dx.doi.org/10.3386/w22009.

4

Calvin, James M. Markov Chain Moment Formulas for Regenerative Simulation. Fort Belvoir, VA: Defense Technical Information Center, June 1989. http://dx.doi.org/10.21236/ada210684.

5

Wereley, Norman M., and Bruce K. Walker. Approximate Evaluation of Semi-Markov Chain Reliability Models. Fort Belvoir, VA: Defense Technical Information Center, February 1988. http://dx.doi.org/10.21236/ada194669.

6

Safta, Cosmin, Mohammad Khalil, and Habib N. Najm. Transitional Markov Chain Monte Carlo Sampler in UQTk. Office of Scientific and Technical Information (OSTI), March 2020. http://dx.doi.org/10.2172/1606084.

7

Dabrowski, Christopher, and Fern Hunt. Markov chain analysis for large-scale grid systems. Gaithersburg, MD: National Institute of Standards and Technology, 2009. http://dx.doi.org/10.6028/nist.ir.7566.

8

Resnick, Sidney I., and David Zeber. Asymptotics of Markov Kernels and the Tail Chain. Fort Belvoir, VA: Defense Technical Information Center, January 2013. http://dx.doi.org/10.21236/ada585087.

9

Warnes, Gregory R. HYDRA: A Java Library for Markov Chain Monte Carlo. Fort Belvoir, VA: Defense Technical Information Center, March 2002. http://dx.doi.org/10.21236/ada459649.

10

Reddy, S., and A. Crisp. Deep Neural Network Informed Markov Chain Monte Carlo Methods. Office of Scientific and Technical Information (OSTI), November 2023. http://dx.doi.org/10.2172/2283285.
