Academic literature on the topic 'Invariant distribution of Markov processes'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Invariant distribution of Markov processes.'
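The common object running through the listings below is the invariant (stationary) distribution: a probability vector π left unchanged by the transition kernel, π = πP. As a quick orientation, here is a minimal sketch with a hypothetical 3-state chain (the matrix is invented and not drawn from any listed work):

```python
import numpy as np

# Hypothetical 3-state ergodic chain; its invariant distribution pi
# satisfies pi = pi @ P.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

pi = np.full(3, 1.0 / 3.0)   # arbitrary starting distribution
for _ in range(500):         # power iteration converges for ergodic chains
    pi = pi @ P
print(pi)                    # the invariant distribution
```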


Journal articles on the topic "Invariant distribution of Markov processes"

1

Arnold, Barry C., and C. A. Robertson. "Autoregressive logistic processes." Journal of Applied Probability 26, no. 3 (September 1989): 524–31. http://dx.doi.org/10.2307/3214410.

Abstract:
A stochastic model is presented which yields a stationary Markov process whose invariant distribution is logistic. The model is autoregressive in character and is closely related to the autoregressive Pareto processes introduced earlier by Yeh et al. (1988). The model may be constructed to have absolutely continuous joint distributions. Analogous higher-order autoregressive and moving average processes may be constructed.
2

Arnold, Barry C., and C. A. Robertson. "Autoregressive logistic processes." Journal of Applied Probability 26, no. 03 (September 1989): 524–31. http://dx.doi.org/10.1017/s0021900200038122.

Abstract:
A stochastic model is presented which yields a stationary Markov process whose invariant distribution is logistic. The model is autoregressive in character and is closely related to the autoregressive Pareto processes introduced earlier by Yeh et al. (1988). The model may be constructed to have absolutely continuous joint distributions. Analogous higher-order autoregressive and moving average processes may be constructed.
3

McDonald, D. "An invariance principle for semi-Markov processes." Advances in Applied Probability 17, no. 1 (March 1985): 100–126. http://dx.doi.org/10.2307/1427055.

Abstract:
Let (I(t))t≥0 be a semi-Markov process with state space Π and recurrent probability transition kernel P. Subject to certain mixing conditions, a limit result holds in which Δ is an invariant probability measure for P and μb is the expected sojourn time in state b ∈ Π. We show that this limit is robust; that is, for each state b ∈ Π the sojourn-time distribution may change for each transition, but, as long as the expected sojourn time in b is μb on the average, the above limit still holds. The kernel P may also vary for each transition as long as Δ is invariant.
4

McDonald, D. "An invariance principle for semi-Markov processes." Advances in Applied Probability 17, no. 01 (March 1985): 100–126. http://dx.doi.org/10.1017/s0001867800014683.

Abstract:
Let (I(t))t≥0 be a semi-Markov process with state space Π and recurrent probability transition kernel P. Subject to certain mixing conditions, a limit result holds in which Δ is an invariant probability measure for P and μb is the expected sojourn time in state b ∈ Π. We show that this limit is robust; that is, for each state b ∈ Π the sojourn-time distribution may change for each transition, but, as long as the expected sojourn time in b is μb on the average, the above limit still holds. The kernel P may also vary for each transition as long as Δ is invariant.
5

Barnsley, Michael F., and John H. Elton. "A new class of markov processes for image encoding." Advances in Applied Probability 20, no. 1 (March 1988): 14–32. http://dx.doi.org/10.2307/1427268.

Abstract:
A new class of iterated function systems is introduced, which allows for the computation of non-compactly supported invariant measures, which may represent, for example, greytone images of infinite extent. Conditions for the existence and attractiveness of invariant measures for this new class of randomly iterated maps, which are not necessarily contractions, in metric spaces such as ℝn, are established. Estimates for moments of these measures are obtained. Special conditions are given for the existence of the invariant measure in the interesting case of affine maps on ℝ. For non-singular affine maps on ℝ, the support of the measure is shown to be an infinite interval, but Fourier transform analysis shows that the measure can be purely singular even though its distribution function is strictly increasing.
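The random-iteration idea behind this entry can be sketched with a toy iterated function system of two affine maps on the real line; the maps and their equal selection probabilities are invented for illustration, and the empirical distribution of the orbit approximates the invariant measure:

```python
import random
import statistics

# Toy IFS: two contractive affine maps on the real line, chosen
# uniformly at random at each step (hypothetical parameters).
maps = [lambda x: 0.5 * x + 1.0,
        lambda x: 0.5 * x - 1.0]

rng = random.Random(1)
x, orbit = 0.0, []
for n in range(100_000):
    x = rng.choice(maps)(x)  # one random iteration
    if n > 1000:             # discard burn-in before sampling
        orbit.append(x)

# By symmetry of the map choice, the invariant measure has mean 0.
print(round(statistics.fmean(orbit), 2))
```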
6

Barnsley, Michael F., and John H. Elton. "A new class of markov processes for image encoding." Advances in Applied Probability 20, no. 01 (March 1988): 14–32. http://dx.doi.org/10.1017/s0001867800017924.

Abstract:
A new class of iterated function systems is introduced, which allows for the computation of non-compactly supported invariant measures, which may represent, for example, greytone images of infinite extent. Conditions for the existence and attractiveness of invariant measures for this new class of randomly iterated maps, which are not necessarily contractions, in metric spaces such as ℝn, are established. Estimates for moments of these measures are obtained. Special conditions are given for the existence of the invariant measure in the interesting case of affine maps on ℝ. For non-singular affine maps on ℝ, the support of the measure is shown to be an infinite interval, but Fourier transform analysis shows that the measure can be purely singular even though its distribution function is strictly increasing.
7

Kalpazidou, S. "On Levy's theorem concerning positiveness of transition probabilities of Markov processes: the circuit processes case." Journal of Applied Probability 30, no. 1 (March 1993): 28–39. http://dx.doi.org/10.2307/3214619.

Abstract:
We prove Lévy's theorem concerning positiveness of transition probabilities of Markov processes when the state space is countable and an invariant probability distribution exists. Our approach relies on the representation of transition probabilities in terms of the directed circuits that occur along the sample paths.
8

Kalpazidou, S. "On Levy's theorem concerning positiveness of transition probabilities of Markov processes: the circuit processes case." Journal of Applied Probability 30, no. 01 (March 1993): 28–39. http://dx.doi.org/10.1017/s0021900200043977.

Abstract:
We prove Lévy's theorem concerning positiveness of transition probabilities of Markov processes when the state space is countable and an invariant probability distribution exists. Our approach relies on the representation of transition probabilities in terms of the directed circuits that occur along the sample paths.
9

Avrachenkov, Konstantin, Alexey Piunovskiy, and Yi Zhang. "Markov Processes with Restart." Journal of Applied Probability 50, no. 4 (December 2013): 960–68. http://dx.doi.org/10.1239/jap/1389370093.

Abstract:
We consider a general homogeneous continuous-time Markov process with restarts. The process is forced to restart from a given distribution at time moments generated by an independent Poisson process. The motivation to study such processes comes from modeling human and animal mobility patterns, restart processes in communication protocols, and from application of restarting random walks in information retrieval. We provide a connection between the transition probability functions of the original Markov process and the modified process with restarts. We give closed-form expressions for the invariant probability measure of the modified process. When the process evolves on the Euclidean space, there is also a closed-form expression for the moments of the modified process. We show that the modified process is always positive Harris recurrent and exponentially ergodic with the index equal to (or greater than) the rate of restarts. Finally, we illustrate the general results by the standard and geometric Brownian motions.
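The closed-form invariant measure described in this abstract has a simple discrete-time analogue: restart to a distribution v with probability c at each step (as in PageRank). The sketch below, with a made-up 3-state chain, checks the corresponding closed form π = c·v(I − (1 − c)P)⁻¹; the paper itself treats the continuous-time case with Poisson restarts:

```python
import numpy as np

# Discrete-time analogue of restart: with probability c the chain jumps
# to the restart distribution v, otherwise it follows P. The 3-state
# chain, c, and v are all invented for illustration.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
v = np.array([1.0, 0.0, 0.0])   # restart distribution
c = 0.15                        # restart probability

# Modified kernel and its invariant distribution in closed form:
#   Q = (1 - c) P + c 1 v^T,   pi = c v (I - (1 - c) P)^(-1)
Q = (1 - c) * P + c * np.outer(np.ones(3), v)
pi = c * v @ np.linalg.inv(np.eye(3) - (1 - c) * P)
print(pi)                       # sums to 1 and satisfies pi = pi Q
```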
10

Avrachenkov, Konstantin, Alexey Piunovskiy, and Yi Zhang. "Markov Processes with Restart." Journal of Applied Probability 50, no. 04 (December 2013): 960–68. http://dx.doi.org/10.1017/s0021900200013735.

Abstract:
We consider a general homogeneous continuous-time Markov process with restarts. The process is forced to restart from a given distribution at time moments generated by an independent Poisson process. The motivation to study such processes comes from modeling human and animal mobility patterns, restart processes in communication protocols, and from application of restarting random walks in information retrieval. We provide a connection between the transition probability functions of the original Markov process and the modified process with restarts. We give closed-form expressions for the invariant probability measure of the modified process. When the process evolves on the Euclidean space, there is also a closed-form expression for the moments of the modified process. We show that the modified process is always positive Harris recurrent and exponentially ergodic with the index equal to (or greater than) the rate of restarts. Finally, we illustrate the general results by the standard and geometric Brownian motions.

Dissertations / Theses on the topic "Invariant distribution of Markov processes"

1

Hahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes : invariant distribution and convergence." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.

Abstract:
1. Simulating active and metastable systems with piecewise deterministic Markov processes (PDMPs): Which dynamics should be chosen to efficiently simulate metastable states? How can the non-equilibrium nature of PDMPs be exploited directly to study the modeled physical systems? 2. Modeling active systems with PDMPs: What conditions must a system meet to be modeled by a PDMP? In which cases does the system have a stationary distribution? How can dynamic quantities (e.g., transition rates) be calculated in this framework? 3. Improving simulation techniques for equilibrium systems: Can results obtained in the context of non-equilibrium systems be used to accelerate the simulation of equilibrium systems? How can topological information be used to adapt the dynamics in real time?
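As a loose illustration of the objects this thesis studies, here is a minimal simulation of a single run-and-tumble particle as a piecewise deterministic Markov process; the confinement, speed, and tumbling rate are all invented, and the thesis treats interacting particles and convergence rigorously rather than by simulation:

```python
import random

# One run-and-tumble particle as a PDMP: deterministic motion at speed v,
# velocity reversals at Poisson rate gamma (Euler discretisation), and
# reflecting walls at +/- 1. All parameters are hypothetical.
v, gamma, dt = 1.0, 1.0, 1e-3
rng = random.Random(0)
x, s = 0.0, 1                      # position and velocity sign
samples = []
for n in range(1_000_000):
    x += s * v * dt                # deterministic flow between events
    if abs(x) >= 1.0:              # reflect at the walls
        x = max(-1.0, min(1.0, x))
        s = -s
    if rng.random() < gamma * dt:  # tumble event: flip the velocity
        s = -s
    if n % 100 == 0:
        samples.append(x)

mean = sum(samples) / len(samples)
print(round(mean, 2))              # the invariant law is symmetric, so ~0
```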
2

Casse, Jérôme. "Automates cellulaires probabilistes et processus itérés ad libitum." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0248/document.

Abstract:
The first part of this thesis is about probabilistic cellular automata (PCA) on the line with two neighbors. For a given PCA, we look for the set of its invariant distributions. For reasons explained in detail in the thesis, it is currently unrealistic to obtain all of them, and we concentrate on the invariant Markovian distributions. We first establish an algebraic theorem that gives a necessary and sufficient condition for a PCA to have one or more invariant Markovian distributions when the alphabet E is finite. We then generalize this result to the case of a Polish alphabet E, after clarifying the topological difficulties encountered. Finally, we calculate the correlation function of the 8-vertex model for some parameter values using part of the preceding results. The second part of this thesis is about infinite iterations of stochastic processes. We establish the convergence of the finite-dimensional distributions of α-stable processes iterated n times, as n tends to infinity, as a function of the stability parameter and the drift r, and we describe the limit distributions. In the iterated Brownian motion case, we show that the limit distributions are linked with iterated function systems.
3

陳冠全 and Koon-chuen Chen. "Invariant limiting shape distributions for some sequential rectangularmodels." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31238233.

4

Chen, Koon-chuen. "Invariant limiting shape distributions for some sequential rectangular models /." Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20998934.

5

Hammer, Matthias [Verfasser]. "Ergodicity and regularity of invariant measure for branching Markov processes with immigration / Matthias Hammer." Mainz : Universitätsbibliothek Mainz, 2012. http://d-nb.info/1029390975/34.

6

Hurth, Tobias. "Invariant densities for dynamical systems with random switching." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52274.

Abstract:
We studied invariant measures and invariant densities for dynamical systems with random switching (switching systems, in short). These switching systems can be described by a two-component Markov process whose first component is a stochastic process on a finite-dimensional smooth manifold and whose second component is a stochastic process on a finite collection of smooth vector fields that are defined on the manifold. We identified sufficient conditions for uniqueness and absolute continuity of the invariant measure associated to this Markov process. These conditions consist of a Hörmander-type hypoellipticity condition and a recurrence condition. In the case where the manifold is the real line or a subset of the real line, we studied regularity properties of the invariant densities of absolutely continuous invariant measures. We showed that invariant densities are smooth away from critical points of the vector fields. Assuming in addition that the vector fields are analytic, we derived the asymptotically dominant term for invariant densities at critical points.
7

Kaijser, Thomas. "Convergence in distribution for filtering processes associated to Hidden Markov Models with densities." Linköpings universitet, Matematik och tillämpad matematik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-92590.

Abstract:
A Hidden Markov Model generates two basic stochastic processes: a Markov chain, which is hidden, and an observation sequence. The filtering process of a Hidden Markov Model is, roughly speaking, the sequence of conditional distributions of the hidden Markov chain that is obtained as new observations are received. It is well known that the filtering process itself is also a Markov chain. A classical, theoretical problem is to find conditions which imply that the distributions of the filtering process converge towards a unique limit measure. This problem goes back to a paper of D. Blackwell for the case when the Markov chain takes its values in a finite set, and to a paper of H. Kunita for the case when the state space of the Markov chain is a compact Hausdorff space. Recently, due to work by F. Kochmann, J. Reeds, P. Chigansky and R. van Handel, a necessary and sufficient condition for the convergence of the distributions of the filtering process has been found for the case when the state space is finite. This condition has since been generalised to the case when the state space is denumerable. In this paper we generalise some of the previous results on convergence in distribution to the case when the Markov chain and the observation sequence of a Hidden Markov Model take their values in complete, separable, metric spaces; it has, though, been necessary to assume that both the transition probability function of the Markov chain and the transition probability function that generates the observation sequence have densities.
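The filtering process described here, the sequence of conditional distributions of the hidden chain, can be sketched for a finite-state HMM; the transition and observation matrices below are hypothetical:

```python
import numpy as np

# Minimal filtering recursion for a 2-state HMM with binary observations
# (all matrices invented for illustration).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # hidden-chain transition matrix
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])        # B[i, y] = P(observe y | hidden state i)

def filter_step(pi, y):
    """One update of the filtering process: predict, then condition on y."""
    pred = pi @ A                  # one-step prediction
    post = pred * B[:, y]          # multiply by observation likelihood
    return post / post.sum()       # normalise to a probability vector

pi = np.array([0.5, 0.5])
for y in [0, 0, 1, 1, 1]:          # a short observation sequence
    pi = filter_step(pi, y)
print(pi)  # current conditional distribution of the hidden state
```

The sequence of vectors `pi` produced this way is itself a Markov chain on the probability simplex, which is the process whose convergence in distribution the paper studies.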
8

Talwar, Gaurav. "HMM-based non-intrusive speech quality and implementation of Viterbi score distribution and hiddenness based measures to improve the performance of speech recognition." Laramie, Wyo. : University of Wyoming, 2006. http://proquest.umi.com/pqdweb?did=1288654981&sid=7&Fmt=2&clientId=18949&RQT=309&VName=PQD.

9

Green, David Anthony. "Departure processes from MAP/PH/1 queues." Title page, contents and abstract only, 1999. http://thesis.library.adelaide.edu.au/public/adt-SUA20020815.092144.

Abstract:
A MAP/PH/1 queue is a queue having a Markov arrival process (MAP) and a single server with phase-type (PH-type) distributed service time. This thesis considers the departure process of these types of queues, using matrix analytic methods, the Jordan canonical form of matrices, non-linear filtering, and approximation techniques.
10

Drton, Mathias. "Maximum likelihood estimation in Gaussian AMP chain graph models and Gaussian ancestral graph models /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/8952.


Books on the topic "Invariant distribution of Markov processes"

1

Hernández-Lerma, O. Markov Chains and Invariant Probabilities. Basel: Birkhäuser Basel, 2003.

2

Liao, Ming. Invariant Markov Processes Under Lie Group Actions. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6.

3

Carlsson, Niclas. Markov chains on metric spaces: Invariant measures and asymptotic behaviour. Åbo: Åbo Akademi University Press, 2005.

4

Banjevic, Dragan. Recurrent relations for distribution of waiting time in Markov chain. [Toronto]: University of Toronto, Department of Statistics, 1994.

5

SpringerLink (Online service), ed. Measure-Valued Branching Markov Processes. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2011.

6

Costa, Oswaldo Luiz do Valle. Continuous Average Control of Piecewise Deterministic Markov Processes. New York, NY: Springer New York, 2013.

7

Feinberg, Eugene A. Handbook of Markov Decision Processes: Methods and Applications. Boston, MA: Springer US, 2002.

8

Rieder, Ulrich, and SpringerLink (Online service), eds. Markov Decision Processes with Applications to Finance. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2011.

9

Taira, Kazuaki. Semigroups, Boundary Value Problems and Markov Processes. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004.

10

Milch, Paul R. FORECASTER, a Markovian model to analyze the distribution of Naval Officers. Monterey, Calif: Naval Postgraduate School, 1990.


Book chapters on the topic "Invariant distribution of Markov processes"

1

Liao, Ming. "Decomposition of Markov Processes." In Invariant Markov Processes Under Lie Group Actions, 305–29. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_9.

2

Pollett, P. K. "Identifying Q-Processes with a Given Finite µ-Invariant Measure." In Markov Processes and Controlled Markov Chains, 41–55. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_3.

3

Dubins, Lester E., Ashok P. Maitra, and William D. Sudderth. "Invariant Gambling Problems and Markov Decision Processes." In International Series in Operations Research & Management Science, 409–28. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4615-0805-2_13.

4

Dudley, R. M. "A note on Lorentz-invariant Markov processes." In Selected Works of R.M. Dudley, 109–15. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/978-1-4419-5821-1_8.

5

Cocozza-Thivent, Christiane. "Hitting Time Distribution." In Markov Renewal and Piecewise Deterministic Processes, 63–77. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70447-6_4.

6

Liao, Ming. "Lévy Processes in Lie Groups." In Invariant Markov Processes Under Lie Group Actions, 35–71. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_2.

7

Liao, Ming. "Lévy Processes in Homogeneous Spaces." In Invariant Markov Processes Under Lie Group Actions, 73–101. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_3.

8

Rong, Wu. "Some Properties of Invariant Functions of Markov Processes." In Seminar on Stochastic Processes, 1988, 239–44. Boston, MA: Birkhäuser Boston, 1989. http://dx.doi.org/10.1007/978-1-4612-3698-6_16.

9

Liao, Ming. "Lévy Processes in Compact Lie Groups." In Invariant Markov Processes Under Lie Group Actions, 103–33. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_4.

10

Liao, Ming. "Inhomogeneous Lévy Processes in Lie Groups." In Invariant Markov Processes Under Lie Group Actions, 169–237. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_6.


Conference papers on the topic "Invariant distribution of Markov processes"

1

Rajendiran, Shenbageshwaran, Francisco Galdos, Carissa Anne Lee, Sidra Xu, Justin Harvell, Shireen Singh, Sean M. Wu, Elizabeth A. Lipke, and Selen Cremaschi. "Modeling hiPSC-to-Early Cardiomyocyte Differentiation Process using Microsimulation and Markov Chain Models." In Foundations of Computer-Aided Process Design, 344–50. Hamilton, Canada: PSE Press, 2024. http://dx.doi.org/10.69997/sct.152564.

Abstract:
Cardiomyocytes (CMs) are the contractile heart cells that can be derived from human induced pluripotent stem cells (hiPSCs). These hiPSC-derived CMs can be used for cardiovascular disease drug testing and regeneration therapies, and they have therapeutic potential. Currently, hiPSC-CM differentiation cannot yet be controlled to yield specific heart cell subtypes consistently. Designing differentiation processes to consistently direct differentiation to specific heart cells is important to realize the full therapeutic potential of hiPSC-CMs. A model that accurately represents the dynamic changes in cell populations from hiPSCs to CMs over the differentiation timeline is a first step towards designing processes for directing differentiation. This paper introduces a microsimulation model for studying temporal changes in the hiPSC-to-early-CM differentiation. The differentiation process for each cell in the microsimulation model is represented by a Markov chain model (MCM). The MCM includes cell subtypes representing key developmental stages in hiPSC differentiation to early CMs: pluripotent stem cells, early primitive streak, late primitive streak, mesodermal progenitors, early cardiac progenitors, late cardiac progenitors, and early CMs. The time taken by a cell to transit from one state to the next is assumed to be exponentially distributed. The transition probabilities of the Markov chain and the mean duration parameter of the exponential distribution were estimated using Bayesian optimization. The results predicted by the MCM agree with the data.
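The model structure described in the abstract, a chain of developmental states with exponentially distributed sojourn times, can be sketched as follows; the single 24-hour mean sojourn per stage is an invented placeholder, not the paper's fitted parameters, and the sketch omits the paper's branching transition probabilities:

```python
import random

# Chain of differentiation states with exponential holding times
# (state names from the abstract; mean duration is hypothetical).
states = ["hiPSC", "early_PS", "late_PS", "mesoderm",
          "early_CP", "late_CP", "early_CM"]

def simulate_cell(rng, mean_sojourn=24.0):
    """Simulate one cell's trajectory; returns (total hours, state path)."""
    t, path = 0.0, [states[0]]
    for i in range(len(states) - 1):
        t += rng.expovariate(1.0 / mean_sojourn)  # exponential sojourn
        path.append(states[i + 1])                # move to the next stage
    return t, path

rng = random.Random(0)
total_time, path = simulate_cell(rng)
print(path[-1])  # every simulated cell ends in "early_CM"
```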
2

Akshay, S., Blaise Genest, and Nikhil Vyas. "Distribution-based objectives for Markov Decision Processes." In LICS '18: 33rd Annual ACM/IEEE Symposium on Logic in Computer Science. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3209108.3209185.

3

Budgett, Stephanie, Azam Asanjarani, and Heti Afimeimounga. "Visualizing Markov Processes." In Bridging the Gap: Empowering and Educating Today’s Learners in Statistics. International Association for Statistical Education, 2022. http://dx.doi.org/10.52041/iase.icots11.t10f3.

Abstract:
Researchers and educators have long been aware of the misconceptions prevalent in people’s probabilistic reasoning processes. Calls to reform the teaching of probability from a traditional and predominantly mathematical approach to include an emphasis on modelling using technology have been heeded by many. The purpose of this paper is to present our experiences of including an activity based on an interactive visualisation tool in the Markov processes module of a first-year probability course. Initial feedback suggests that the tool may support students’ understanding of the equilibrium distribution and points to certain aspects of the tool that may be beneficial. A targeted survey, to be administered in Semester 1, 2022, aims to provide more insight.
4

Fracasso, Paulo Thiago, Frank Stephenson Barnes, and Anna Helena Reali Costa. "Energy cost optimization in water distribution systems using Markov Decision Processes." In 2013 International Green Computing Conference (IGCC). IEEE, 2013. http://dx.doi.org/10.1109/igcc.2013.6604516.

5

Ismail, Muhammad Ali. "Multi-core processor based parallel implementation for finding distribution vectors in Markov processes." In 2013 18th International Conference on Digital Signal Processing (DSP). IEEE, 2013. http://dx.doi.org/10.1109/siecpc.2013.6550997.

6

Tsukamoto, Hiroki, Song Bian, and Takashi Sato. "Statistical Device Modeling with Arbitrary Model-Parameter Distribution via Markov Chain Monte Carlo." In 2021 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD). IEEE, 2021. http://dx.doi.org/10.1109/sispad54002.2021.9592558.

7

Lee, Seungchul, Lin Li, and Jun Ni. "Modeling of Degradation Processes to Obtain an Optimal Solution for Maintenance and Performance." In ASME 2009 International Manufacturing Science and Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/msec2009-84166.

Abstract:
This paper presents an approach to represent equipment degradation and various maintenance decision processes based on Markov processes. Non-exponential holding-time distributions are approximated by inserting multiple intermediate states between two degradation states, based on a phase-type distribution. Overall system availability is then numerically calculated by recursively solving the balance equations of the Markov process. Preliminary simulation results show that optimal preventive maintenance intervals for a system of two repairable components can be achieved by means of the proposed method. With an adequate model representing both the deterioration and maintenance processes, it is also possible to obtain different optimal maintenance policies that maximize availability or productivity for different configurations of components.
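The phase-type idea in this abstract, approximating a non-exponential holding time by a series of intermediate exponential states, can be illustrated with an Erlang-k sojourn: the overall mean is preserved while the coefficient of variation drops to 1/√k. The values of k and the mean below are made up:

```python
import random
import statistics

# Erlang-k holding time: k exponential phases in series with the same
# overall mean, giving a less variable (non-exponential) sojourn.
def erlang_sample(k, mean, rng):
    rate = k / mean                     # each phase has mean mean/k
    return sum(rng.expovariate(rate) for _ in range(k))

rng = random.Random(42)
samples = [erlang_sample(10, 5.0, rng) for _ in range(20_000)]

m = statistics.fmean(samples)
cv = statistics.pstdev(samples) / m     # close to 1/sqrt(10) ~ 0.316
print(round(m, 2), round(cv, 3))
```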
8

Sathe, Sumedh, Chinmay Samak, Tanmay Samak, Ajinkya Joglekar, Shyam Ranganathan, and Venkat N. Krovi. "Data Driven Vehicle Dynamics System Identification Using Gaussian Processes." In WCX SAE World Congress Experience. Warrendale, PA: SAE International, 2024. http://dx.doi.org/10.4271/2024-01-2022.

Full text
Abstract:
Modeling uncertainties pose a significant challenge in the development and deployment of model-based vehicle control systems. Most model-based automotive control systems require the use of a well-estimated vehicle dynamics prediction model. The ability of first-principles-based models to represent vehicle behavior becomes limited under complex scenarios due to underlying rigid physical assumptions. Additionally, the increasing complexity of these models to meet ever-increasing fidelity requirements presents challenges for obtaining analytical solutions as well as for control design. Alternatively, deterministic data-driven techniques, including but not limited to deep neural networks, polynomial regression, and Sparse Identification of Nonlinear Dynamics (SINDy), have been deployed for vehicle dynamics system identification and prediction. However, under real-world conditions which are often uncertain or time-varying, including, but not limited to, changing terrain and/or physical, a single time-invariant physics-based or parametric model may not accurately represent vehicle behavior, resulting in sub-optimal controller performance. The previously mentioned data-driven system identification techniques, by virtue of being deterministic, cannot express these uncertainties, leading to a need for multiple models, or a distribution of models, to describe vehicle behavior. Gaussian Process Regression constitutes a cogent approach for capturing and expressing modeling uncertainties through a probability distribution. In this paper, we demonstrate Gaussian Process Regression as an able technique for modeling uncertain vehicle dynamics using a real-world vehicle dataset, acquired by performing benchmark maneuvers using a scaled vehicle observed by a motion-capture system. Using Gaussian Process Regression, we develop single-step as well as multi-step prediction models that are usable for reactive as well as predictive model-based control techniques.
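As a minimal sketch of the Gaussian Process Regression idea described in this abstract (not the authors' implementation), the following fits a GP with a squared-exponential kernel to toy single-step "dynamics" data and returns a predictive mean together with a variance, i.e., the probability distribution over models the abstract alludes to. The kernel, hyperparameters, and data are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise_var=1e-2):
    """GP posterior mean and variance (standard Cholesky formulation)."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = rbf_kernel(X_test, X_test).diagonal() - np.sum(v**2, axis=0)
    return mean, var

# Toy data: next state as a noisy function of the current state.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)
mu, var = gp_predict(X, y, np.array([[0.5]]))  # mean prediction plus uncertainty
```

Unlike a deterministic regression, `var` quantifies how much the prediction should be trusted, which is what makes GPs attractive for the uncertainty-aware control setting the paper targets.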
APA, Harvard, Vancouver, ISO, and other styles
9

Velasquez, Alvaro. "Steady-State Policy Synthesis for Verifiable Control." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/784.

Full text
Abstract:
In this paper, we introduce the Steady-State Policy Synthesis (SSPS) problem which consists of finding a stochastic decision-making policy that maximizes expected rewards while satisfying a set of asymptotic behavioral specifications. These specifications are determined by the steady-state probability distribution resulting from the Markov chain induced by a given policy. Since such distributions necessitate recurrence, we propose a solution which finds policies that induce recurrent Markov chains within possibly non-recurrent Markov Decision Processes (MDPs). The SSPS problem functions as a generalization of steady-state control, which has been shown to be in PSPACE. We improve upon this result by showing that SSPS is in P via linear programming. Our results are validated using CPLEX simulations on MDPs with over 10000 states. We also prove that the deterministic variant of SSPS is NP-hard.
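The steady-state distributions that drive such specifications can be sketched numerically. Assuming a fixed policy has already induced a recurrent Markov chain with transition matrix P (the numbers below are illustrative, not taken from the paper), the stationary distribution solves the linear system pi P = pi with the entries of pi summing to one:

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi of a recurrent Markov chain.

    Solves pi (P - I) = 0 together with the normalization sum(pi) = 1
    as one overdetermined least-squares system.
    """
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state chain induced by some fixed policy (hypothetical numbers).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = steady_state(P)  # -> approximately [5/6, 1/6]
```

The paper's contribution goes further: it optimizes over policies subject to constraints on this distribution via linear programming, but the sketch shows the object being constrained.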
APA, Harvard, Vancouver, ISO, and other styles
10

Haschka, Markus, and Volker Krebs. "A Direct Approximation of Cole-Cole-Systems for Time-Domain Analysis." In ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/detc2005-84579.

Full text
Abstract:
Cole-Cole systems are important in electrochemistry for representing impedances of galvanic elements such as fuel cells. Fractional calculus has to be applied for time-domain system analysis of Cole-Cole systems. The representation of fractional differential equations of Cole-Cole systems is addressed in this contribution. Usually, the fractional derivative is approximated to ensure that the fractional system can be represented by conventional differential equations of integer order. This article presents a new, opposite approach, which results from direct approximation of the Cole-Cole systems by conventional linear time-invariant systems. The method considered is based on the distribution density function of relaxation times of first-order Debye processes. This distribution density is an alternative representation of the transfer behavior of such a system. Several approximation methods, based on an analysis of the distribution density, are presented in this work. The feasibility of these methods is demonstrated by comparing simulated data of the approximation models to ideal data and reference values, respectively.
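The approximation idea can be sketched under the common Cole-Cole convention H(jw) = 1/(1 + (jw*tau0)^beta), whose known distribution density of relaxation times weights a continuum of first-order Debye terms. The grid size and parameter values below are illustrative assumptions, not the authors' method; the sketch simply replaces the fractional system by a finite weighted sum of first-order terms and checks the frequency-domain error:

```python
import numpy as np

def cole_cole_exact(w, tau0=1.0, beta=0.7):
    """Exact Cole-Cole frequency response 1 / (1 + (jw*tau0)^beta)."""
    return 1.0 / (1.0 + (1j * w * tau0) ** beta)

def relaxation_density(s, beta=0.7):
    """Cole-Cole distribution density over s = ln(tau/tau0)."""
    return np.sin(beta * np.pi) / (2 * np.pi * (np.cosh(beta * s) + np.cos(beta * np.pi)))

def debye_sum(w, tau0=1.0, beta=0.7, n=801, s_max=20.0):
    """Approximate the Cole-Cole response by n first-order Debye terms."""
    s = np.linspace(-s_max, s_max, n)
    ds = s[1] - s[0]
    weights = relaxation_density(s, beta) * ds   # discretized density
    taus = tau0 * np.exp(s)
    # Weighted sum of first-order (Debye) responses 1 / (1 + jw*tau).
    return np.sum(weights[None, :] / (1.0 + 1j * np.outer(w, taus)), axis=1)

w = np.logspace(-2, 2, 50)
err = np.max(np.abs(debye_sum(w) - cole_cole_exact(w)))  # small approximation error
```

Each Debye term is an ordinary first-order LTI system, so the finite sum is directly usable for conventional time-domain simulation, which is the point of the approach described in the abstract.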
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Invariant distribution of Markov processes"

1

Stettner, Lukasz. On the Existence and Uniqueness of Invariant Measure for Continuous Time Markov Processes. Fort Belvoir, VA: Defense Technical Information Center, April 1986. http://dx.doi.org/10.21236/ada174758.

Full text
APA, Harvard, Vancouver, ISO, and other styles