Academic literature on the topic 'Markov chains'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Markov chains.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Markov chains"

1

Grewal, Jasleen K., Martin Krzywinski, and Naomi Altman. "Markov models—Markov chains." Nature Methods 16, no. 8 (July 30, 2019): 663–64. http://dx.doi.org/10.1038/s41592-019-0476-x.

2

Valenzuela, Mississippi. "Markov chains and applications." Selecciones Matemáticas 9, no. 01 (June 30, 2022): 53–78. http://dx.doi.org/10.17268/sel.mat.2022.01.05.

Abstract:
This work has three important purposes: the first is the study of Markov chains; the second is to show that Markov chains have many different applications; and the last is to model a process that behaves in this way. Throughout this work we describe what a Markov chain is, what these processes are used for, and how these chains are classified. We also characterize a Markov chain, that is, we analyze the primary elements that make up a Markov chain, among other topics.
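
To make the chain's "primary elements" concrete for readers skimming this list, here is a minimal sketch, not taken from the cited paper: a two-state chain with a made-up transition matrix, simulated long enough for the empirical state frequencies to approach the stationary distribution.

```python
# Toy illustration (not from the cited work): states, a transition matrix,
# and long-run behaviour of a two-state Markov chain. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

states = ["sunny", "rainy"]            # the state space
P = np.array([[0.9, 0.1],              # P[i, j] = Pr(next = j | current = i)
              [0.5, 0.5]])

x, counts = 0, np.zeros(2)
for _ in range(100_000):               # simulate and count visits per state
    counts[x] += 1
    x = rng.choice(2, p=P[x])

print(counts / counts.sum())           # approaches the stationary [5/6, 1/6]
```
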
3

Lindley, D. V., and J. R. Norris. "Markov Chains." Mathematical Gazette 83, no. 496 (March 1999): 188. http://dx.doi.org/10.2307/3618756.

4

Lund, Robert B., and J. R. Norris. "Markov Chains." Journal of the American Statistical Association 94, no. 446 (June 1999): 654. http://dx.doi.org/10.2307/2670196.

5

Verbeken, Brecht, and Marie-Anne Guerry. "Attainability for Markov and Semi-Markov Chains." Mathematics 12, no. 8 (April 19, 2024): 1227. http://dx.doi.org/10.3390/math12081227.

Abstract:
When studying Markov chain models and semi-Markov chain models, it is useful to know which state vectors n, where each component n_i represents the number of entities in the state S_i, can be maintained or attained. This question leads to the definitions of maintainability and attainability for (time-homogeneous) Markov chain models. Recently, the definition of maintainability was extended to the concept of state reunion maintainability (SR-maintainability) for semi-Markov chains. Within the framework of semi-Markov chains, the states are subdivided further into seniority-based states. State reunion maintainability assesses the maintainability of the distribution across states. Following this idea, we introduce the concept of state reunion attainability, which encompasses the potential of a system to attain a specific distribution across the states after uniting the seniority-based states into the underlying states. In this paper, we start by extending the concept of attainability for constant-sized Markov chain models to systems that are subject to growth or contraction. Afterwards, we introduce the concepts of attainability and state reunion attainability for semi-Markov chain models, using SR-maintainability as a starting point. The attainable region, as well as the state reunion attainable region, are described as the convex hull of their respective vertices, and properties of these regions are investigated.
6

Barker, Richard J., and Matthew R. Schofield. "Putting Markov Chains Back into Markov Chain Monte Carlo." Journal of Applied Mathematics and Decision Sciences 2007 (October 30, 2007): 1–13. http://dx.doi.org/10.1155/2007/98086.

Abstract:
Markov chain theory plays an important role in statistical inference both in the formulation of models for data and in the construction of efficient algorithms for inference. The use of Markov chains in modeling data has a long history, however the use of Markov chain theory in developing algorithms for statistical inference has only become popular recently. Using mark-recapture models as an illustration, we show how Markov chains can be used for developing demographic models and also in developing efficient algorithms for inference. We anticipate that a major area of future research involving mark-recapture data will be the development of hierarchical models that lead to better demographic models that account for all uncertainties in the analysis. A key issue is determining when the chains produced by Markov chain Monte Carlo sampling have converged.
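
To make the abstract's central object concrete, the following is a hedged sketch of Markov chain Monte Carlo in its simplest form, a random-walk Metropolis sampler, rather than the authors' mark-recapture models; the standard-normal target, proposal scale, and burn-in length are all assumptions chosen for illustration.

```python
# Minimal random-walk Metropolis sketch: the accept/reject rule below makes
# the chain's stationary distribution equal to the target density, which is
# why the convergence caveat raised in the abstract matters in practice.
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    return -0.5 * x**2                          # unnormalised log N(0, 1)

x, chain = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal(scale=1.0)            # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop                                # accept; otherwise stay put
    chain.append(x)

burned = np.array(chain[10_000:])               # crude burn-in before summarising
print(burned.mean(), burned.var())              # should be close to 0 and 1
```
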
7

Zhong, Pingping, Weiguo Yang, and Peipei Liang. "THE ASYMPTOTIC EQUIPARTITION PROPERTY FOR ASYMPTOTIC CIRCULAR MARKOV CHAINS." Probability in the Engineering and Informational Sciences 24, no. 2 (March 18, 2010): 279–88. http://dx.doi.org/10.1017/s0269964809990271.

Abstract:
In this article, we study the asymptotic equipartition property (AEP) for asymptotic circular Markov chains. First, the definition of an asymptotic circular Markov chain is introduced. Then by applying the limit property for the bivariate functions of nonhomogeneous Markov chains, the strong limit theorem on the frequencies of occurrence of states for asymptotic circular Markov chains is established. Next, the strong law of large numbers on the frequencies of occurrence of states for asymptotic circular Markov chains is obtained. Finally, we prove the AEP for asymptotic circular Markov chains.
8

Lund, Robert, Ying Zhao, and Peter C. Kiessler. "A monotonicity in reversible Markov chains." Journal of Applied Probability 43, no. 2 (June 2006): 486–99. http://dx.doi.org/10.1239/jap/1152413736.

Abstract:
In this paper we identify a monotonicity in all countable-state-space reversible Markov chains and examine several consequences of this structure. In particular, we show that the return times to every state in a reversible chain have a decreasing hazard rate on the subsequence of even times. This monotonicity is used to develop geometric convergence rate bounds for time-reversible Markov chains. Results relating the radius of convergence of the probability generating function of first return times to the chain's rate of convergence are presented. An effort is made to keep the exposition rudimentary.
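
For readers unfamiliar with the reversibility hypothesis used here, a small sketch follows that checks the detailed-balance condition pi_i P_ij = pi_j P_ji numerically; the three-state lazy random walk is an assumed example, chosen because such walks are always reversible.

```python
# Detailed-balance check for an assumed reversible chain: a lazy random walk
# on a three-vertex path. Reversibility means pi_i * P[i, j] == pi_j * P[j, i].
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])

# Stationary distribution = eigenvector of P.T for eigenvalue 1, normalised.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

balance = pi[:, None] * P                 # entry (i, j) holds pi_i * P[i, j]
print(np.allclose(balance, balance.T))    # True -> the chain is reversible
```
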
9

Lund, Robert, Ying Zhao, and Peter C. Kiessler. "A monotonicity in reversible Markov chains." Journal of Applied Probability 43, no. 02 (June 2006): 486–99. http://dx.doi.org/10.1017/s0021900200001777.

Abstract:
In this paper we identify a monotonicity in all countable-state-space reversible Markov chains and examine several consequences of this structure. In particular, we show that the return times to every state in a reversible chain have a decreasing hazard rate on the subsequence of even times. This monotonicity is used to develop geometric convergence rate bounds for time-reversible Markov chains. Results relating the radius of convergence of the probability generating function of first return times to the chain's rate of convergence are presented. An effort is made to keep the exposition rudimentary.
10

Janssen, A., and J. Segers. "Markov Tail Chains." Journal of Applied Probability 51, no. 4 (December 2014): 1133–53. http://dx.doi.org/10.1239/jap/1421763332.

Abstract:
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in ℝ^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.

Dissertations / Theses on the topic "Markov chains"

1

Skorniakov, Viktor. "Asymptotically homogeneous Markov chains." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20101223_152954-43357.

Abstract:
In the dissertation there is investigated a class of Markov chains defined by iterations of a function possessing a property of asymptotical homogeneity. Two problems are solved: 1) there are established rather general conditions under which the chain has unique stationary distribution; 2) for the chains evolving in a real line there are established conditions under which the stationary distribution of the chain is heavy-tailed.
2

Cho, Eun Hea. "Computation for Markov Chains." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000303-164550.

Abstract:

A finite, homogeneous, irreducible Markov chain with transition probability matrix possesses a unique stationary distribution vector. The questions one can pose in the area of computation of Markov chains include the following:
- How does one compute the stationary distributions?
- How accurate is the resulting answer?
In this thesis, we try to provide answers to these questions.

The thesis is divided into two parts. The first part deals with the perturbation theory of finite, homogeneous, irreducible Markov chains, which is related to the first question above. The purpose of this part is to analyze the sensitivity of the stationary distribution vector to perturbations in the transition probability matrix. The second part gives answers to the question of computing the stationary distributions of nearly uncoupled Markov chains (NUMC).
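
For the first question above, small chains admit a standard direct computation: solve pi(P - I) = 0 together with the normalisation sum(pi) = 1. The sketch below shows that textbook approach with an assumed 3-state matrix; it is not the thesis's perturbation analysis or its method for nearly uncoupled chains.

```python
# Stationary distribution of a small irreducible chain, via a least-squares
# solve of pi (P - I) = 0 with the normalisation row sum(pi) = 1 appended.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],      # an assumed 3-state transition matrix
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])   # balance equations + normalisation
b = np.zeros(n + 1)
b[-1] = 1.0

pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi, pi @ P)                   # pi P should reproduce pi
```
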

3

Dessain, Thomas James. "Perturbations of Markov chains." Thesis, Durham University, 2014. http://etheses.dur.ac.uk/10619/.

Abstract:
This thesis is concerned with studying the hitting time of an absorbing state on Markov chain models that have a countable state space. For many models it is challenging to study the hitting time directly; I present a perturbative approach that allows one to uniformly bound the difference between the hitting time moment generating functions of two Markov chains in a neighbourhood of the origin. I demonstrate how this result can be applied to both discrete and continuous time Markov chains. The motivation for this work came from the field of biology, namely DNA damage and repair. Biophysicists have highlighted that the repair process can lead to Double Strand Breaks; due to the serious nature of such an eventuality it is important to understand the hitting time of this event. There is a phase transition in the model that I consider. In the regime of parameters where the process reaches quasi-stationarity before being absorbed I am able to apply my perturbative technique in order to further understand this hitting time.
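
For orientation, the classical non-perturbative computation of expected hitting times on a finite chain solves (I - Q)h = 1, where Q is the transition matrix restricted to the transient states; the sketch below applies it to an assumed toy birth-death chain and is not the thesis's moment-generating-function bound.

```python
# Expected time to absorption for a 4-state chain whose last state is
# absorbing: h solves (I - Q) h = 1, with Q over the transient states 0..2.
import numpy as np

Q = np.array([[0.5, 0.5, 0.0],      # assumed toy chain; from state 2 the
              [0.3, 0.4, 0.3],      # missing 0.2 of probability mass goes
              [0.0, 0.4, 0.4]])     # to the absorbing state

h = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(h)                            # h[i] = expected steps to absorption from i
```
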
4

Di Cecco, Davide <1980>. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/1/Di_Cecco_Davide_Tesi.pdf.

5

Di Cecco, Davide <1980>. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/.

6

Matthews, James. "Markov chains for sampling matchings." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3072.

Abstract:
Markov Chain Monte Carlo algorithms are often used to sample combinatorial structures such as matchings and independent sets in graphs. A Markov chain is defined whose state space includes the desired sample space, and which has an appropriate stationary distribution. By simulating the chain for a sufficiently large number of steps, we can sample from a distribution arbitrarily close to the stationary distribution. The number of steps required to do this is known as the mixing time of the Markov chain. In this thesis, we consider a number of Markov chains for sampling matchings, both in general and more restricted classes of graphs, and also for sampling independent sets in claw-free graphs. We apply techniques for showing rapid mixing based on two main approaches: coupling and conductance. We consider chains using single-site moves, and also chains using large block moves. Perfect matchings of bipartite graphs are of particular interest in our community. We investigate the mixing time of a Markov chain for sampling perfect matchings in a restricted class of bipartite graphs, and show that its mixing time is exponential in some instances. For a further restricted class of graphs, however, we can show subexponential mixing time. One of the techniques for showing rapid mixing is coupling. The bound on the mixing time depends on a contraction ratio b. Ideally, b < 1, but in the case b = 1 it is still possible to obtain a bound on the mixing time, provided there is a sufficiently large probability of contraction for all pairs of states. We develop a lemma which obtains better bounds on the mixing time in this case than existing theorems, in the case where b = 1 and the probability of a change in distance is proportional to the distance between the two states. We apply this lemma to the Dyer-Greenhill chain for sampling independent sets, and to a Markov chain for sampling 2D-colourings.
7

Wilson, David Bruce. "Exact sampling with Markov chains." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38402.

8

Mestern, Mark Andrew. "Distributed analysis of Markov chains." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/9693.

Abstract:
Bibliography: leaves 88-91.
This thesis examines how parallel and distributed algorithms can increase the power of techniques for correctness and performance analysis of concurrent systems. The systems in question are state transition systems from which Markov chains can be derived. Both phases of the analysis pipeline are considered: state space generation from a state transition model to form the Markov chain, and finding performance information by solving the steady state equations of the Markov chain. The state transition models are specified in a general interface language which can describe any Markovian process. The models are not tied to a specific modelling formalism, but common formal description techniques such as generalised stochastic Petri nets and queuing networks can generate these models. Tools for Markov chain analysis face the problem of state spaces that are so large that they exceed the memory and processing power of a single workstation. This problem is attacked with methods to reduce memory usage, and by dividing the problem between several workstations. A distributed state space generation algorithm was designed and implemented for a local area network of workstations. The state space generation algorithm also includes a probabilistic dynamic hash compaction technique for storing state hash tables, which dramatically reduces memory consumption. Numerical solution methods for Markov chains are surveyed and two iterative methods, BiCG and BiCGSTAB, were chosen for a parallel implementation to show that this stage of analysis also benefits from a distributed approach. The results from the distributed generation algorithm show a good speed up of the state space generation phase and that the method makes the generation of larger state spaces possible. The distributed methods for the steady state solution also allow larger models to be analysed, but the heavy communications load on the network prevents improved execution time.
9

Salzman, Julia. "Spectral analysis with Markov chains." May be available electronically, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

10

Dorff, Rebecca. "Modelling Infertility with Markov Chains." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4070.

Abstract:
Infertility affects approximately 15% of couples. Testing and interventions are costly, in time, money, and emotional energy. This paper will discuss using Markov decision and multi-armed bandit processes to identify a systematic approach of interventions that will lead to the desired baby while minimizing costs.

Books on the topic "Markov chains"

1

Gagniuc, Paul A. Markov Chains. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2017. http://dx.doi.org/10.1002/9781119387596.

2

Brémaud, Pierre. Markov Chains. New York, NY: Springer New York, 1999. http://dx.doi.org/10.1007/978-1-4757-3124-8.

3

Sericola, Bruno. Markov Chains. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118731543.

4

Douc, Randal, Eric Moulines, Pierre Priouret, and Philippe Soulier. Markov Chains. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97704-1.

5

Graham, Carl. Markov Chains. Chichester, UK: John Wiley & Sons, Ltd, 2014. http://dx.doi.org/10.1002/9781118881866.

6

Brémaud, Pierre. Markov Chains. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45982-6.

7

Ching, Wai-Ki, Ximin Huang, Michael K. Ng, and Tak-Kuen Siu. Markov Chains. Boston, MA: Springer US, 2013. http://dx.doi.org/10.1007/978-1-4614-6312-2.

8

Hermanns, Holger. Interactive Markov Chains. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45804-2.

9

Privault, Nicolas. Understanding Markov Chains. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0659-4.

10

Hartfiel, Darald J. Markov Set-Chains. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0094586.


Book chapters on the topic "Markov chains"

1

Carlton, Matthew A., and Jay L. Devore. "Markov Chains." In Probability with Applications in Engineering, Science, and Technology, 423–87. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-52401-6_6.

2

Lindsey, James K. "Markov Chains." In The Analysis of Stochastic Processes using GLIM, 21–42. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4612-2888-2_2.

3

Gebali, Fayez. "Markov Chains." In Analysis of Computer and Communication Networks, 1–57. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-74437-7_3.

4

Lakatos, László, László Szeidl, and Miklós Telek. "Markov Chains." In Introduction to Queueing Systems with Telecommunication Applications, 93–177. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15142-3_3.

5

Gordon, Hugh. "Markov Chains." In Discrete Probability, 209–49. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1966-8_9.

6

Hermanns, Holger. "Markov Chains." In Interactive Markov Chains, 35–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45804-2_3.

7

Serfozo, Richard. "Markov Chains." In Probability and Its Applications, 1–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-89332-5_1.

8

Ökten, Giray. "Markov Chains." In Probability and Simulation, 81–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-56070-6_4.

9

Robert, Christian P., and George Casella. "Markov Chains." In Springer Texts in Statistics, 139–91. New York, NY: Springer New York, 1999. http://dx.doi.org/10.1007/978-1-4757-3071-5_4.

10

Harris, Carl M. "Markov chains." In Encyclopedia of Operations Research and Management Science, 481–84. New York, NY: Springer US, 2001. http://dx.doi.org/10.1007/1-4020-0611-x_579.


Conference papers on the topic "Markov chains"

1

Hunter, Jeffrey J. "Perturbed Markov Chains." In Proceedings of the International Statistics Workshop. WORLD SCIENTIFIC, 2006. http://dx.doi.org/10.1142/9789812772466_0008.

2

Saglam, Cenk Oguz, and Katie Byl. "Metastable Markov chains." In 2014 IEEE 53rd Annual Conference on Decision and Control (CDC). IEEE, 2014. http://dx.doi.org/10.1109/cdc.2014.7039847.

3

Bini, D. A., B. Meini, S. Steffé, and B. Van Houdt. "Structured Markov chains solver." In Proceeding from the 2006 workshop. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1190366.1190378.

4

Bini, D. A., B. Meini, S. Steffé, and B. Van Houdt. "Structured Markov chains solver." In Proceeding from the 2006 workshop. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1190366.1190379.

5

Kiefer, Stefan, and A. Prasad Sistla. "Distinguishing Hidden Markov Chains." In LICS '16: 31st Annual ACM/IEEE Symposium on Logic in Computer Science. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2933575.2933608.

6

García, Jesús E. "Combining multivariate Markov chains." In PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON NUMERICAL ANALYSIS AND APPLIED MATHEMATICS 2014 (ICNAAM-2014). AIP Publishing LLC, 2015. http://dx.doi.org/10.1063/1.4912373.

7

Mitzenmacher, Michael. "Session details: Markov chains." In STOC '09: Symposium on Theory of Computing. New York, NY, USA: ACM, 2009. http://dx.doi.org/10.1145/3257428.

8

Zhang, Yu-Fen, Qun-Feng Zhang, and Rui-Hua Yu. "Markov property of Markov chains and its test." In 2010 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2010. http://dx.doi.org/10.1109/icmlc.2010.5580952.

9

Heidergott, Bernd. "Perturbation analysis of Markov chains." In 2008 9th International Workshop on Discrete Event Systems. IEEE, 2008. http://dx.doi.org/10.1109/wodes.2008.4605929.

10

Villacorta, Pablo, Jose Luis Verdegay, and David Pelta. "Towards fuzzy linguistic Markov chains." In The 8th conference of the European Society for Fuzzy Logic and Technology. Paris, France: Atlantis Press, 2013. http://dx.doi.org/10.2991/eusflat.2013.106.


Reports on the topic "Markov chains"

1

Ramezani, Vahid R., and Steven I. Marcus. Risk-Sensitive Probability for Markov Chains. Fort Belvoir, VA: Defense Technical Information Center, September 2002. http://dx.doi.org/10.21236/ada438509.

2

Marie, Raymond, Andrew Reibman, and Kishor Trivedi. Transient Solution of Acyclic Markov Chains. Fort Belvoir, VA: Defense Technical Information Center, August 1985. http://dx.doi.org/10.21236/ada162314.

3

Doerschuk, Peter C., Robert R. Tenney, and Alan S. Willsky. Modeling Electrocardiograms Using Interacting Markov Chains. Fort Belvoir, VA: Defense Technical Information Center, July 1985. http://dx.doi.org/10.21236/ada162758.

4

Ma, D.-J., A. M. Makowski, and A. Shwartz. Stochastic Approximations for Finite-State Markov Chains. Fort Belvoir, VA: Defense Technical Information Center, January 1987. http://dx.doi.org/10.21236/ada452264.

5

Krakowski, Martin. Models of Coin-Tossing for Markov Chains. Revision. Fort Belvoir, VA: Defense Technical Information Center, December 1987. http://dx.doi.org/10.21236/ada196572.

6

Dupuis, Paul, and Hui Wang. Adaptive Importance Sampling for Uniformly Recurrent Markov Chains. Fort Belvoir, VA: Defense Technical Information Center, January 2003. http://dx.doi.org/10.21236/ada461913.

7

Tsitsiklis, John N. Markov Chains with Rare Transitions and Simulated Annealing. Fort Belvoir, VA: Defense Technical Information Center, September 1985. http://dx.doi.org/10.21236/ada161598.

8

Соловйов, Володимир Миколайович, V. Saptsin, and D. Chabanenko. Markov chains applications to the financial-economic time series predictions. Transport and Telecommunication Institute, 2011. http://dx.doi.org/10.31812/0564/1189.

Abstract:
In this research, the technology of complex Markov chains is applied to predict financial time series. The main distinction between complex, or high-order, Markov chains and simple first-order ones is the existence of an after-effect, or memory. The technology proposes prediction with a hierarchy of time discretization intervals and a splicing procedure that combines the prediction results at the different frequency levels into a single output time series. The hierarchy of time discretizations makes it possible to use fractal properties of the given time series to make predictions at the different frequencies of the series. Prediction results for the world's stock market indices are presented.
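
To illustrate the "after-effect, or memory" that separates high-order chains from first-order ones, here is a minimal sketch assuming a toy up/down symbol series rather than the authors' data, discretization hierarchy, or splicing procedure: a second-order chain is estimated by counting transitions conditioned on the previous two symbols.

```python
# Second-order Markov chain estimated from a symbol sequence: the predicted
# distribution of the next symbol depends on the last *two* symbols (memory).
from collections import Counter, defaultdict

series = "uuduudududuuddudu"               # assumed toy up/down move series
counts = defaultdict(Counter)

for a, b, c in zip(series, series[1:], series[2:]):
    counts[a + b][c] += 1                  # length-2 context -> next symbol

def predict(context):
    c = counts.get(context, Counter())
    total = sum(c.values())
    return {s: n / total for s, n in c.items()} if total else {}

print(predict("du"))                       # empirical next-move frequencies
```
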
9

Harris, Carl M. Rootfinding for Markov Chains with Quasi-Triangular Transition Matrices. Fort Belvoir, VA: Defense Technical Information Center, October 1988. http://dx.doi.org/10.21236/ada202468.

10

Thompson, Theodore J., James P. Boyle, and Douglas J. Hentschel. Markov Chains for Random Urinalysis 1: Age-Test Model. Fort Belvoir, VA: Defense Technical Information Center, March 1993. http://dx.doi.org/10.21236/ada263274.
