Selected scientific literature on the topic "Invariant distribution of Markov processes"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Invariant distribution of Markov processes".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.

Journal articles on the topic "Invariant distribution of Markov processes"

1

Arnold, Barry C., and C. A. Robertson. "Autoregressive logistic processes". Journal of Applied Probability 26, no. 3 (September 1989): 524–31. http://dx.doi.org/10.2307/3214410.

Abstract:
A stochastic model is presented which yields a stationary Markov process whose invariant distribution is logistic. The model is autoregressive in character and is closely related to the autoregressive Pareto processes introduced earlier by Yeh et al. (1988). The model may be constructed to have absolutely continuous joint distributions. Analogous higher-order autoregressive and moving-average processes may be constructed.
2

Arnold, Barry C., and C. A. Robertson. "Autoregressive logistic processes". Journal of Applied Probability 26, no. 3 (September 1989): 524–31. http://dx.doi.org/10.1017/s0021900200038122.

Abstract:
A stochastic model is presented which yields a stationary Markov process whose invariant distribution is logistic. The model is autoregressive in character and is closely related to the autoregressive Pareto processes introduced earlier by Yeh et al. (1988). The model may be constructed to have absolutely continuous joint distributions. Analogous higher-order autoregressive and moving-average processes may be constructed.
3

McDonald, D. "An invariance principle for semi-Markov processes". Advances in Applied Probability 17, no. 1 (March 1985): 100–126. http://dx.doi.org/10.2307/1427055.

Abstract:
Let (I(t))t≥0 be a semi-Markov process with state space Π and recurrent probability transition kernel P. Subject to certain mixing conditions, lim t→∞ P(I(t) = b) = Δ(b)μb / Σa∈Π Δ(a)μa, where Δ is an invariant probability measure for P and μb is the expected sojourn time in state b ∈ Π. We show that this limit is robust; that is, for each state b ∈ Π the sojourn-time distribution may change for each transition but, as long as the expected sojourn time in b is μb on average, the above limit still holds. The kernel P may also vary for each transition as long as Δ is invariant.
4

McDonald, D. "An invariance principle for semi-Markov processes". Advances in Applied Probability 17, no. 1 (March 1985): 100–126. http://dx.doi.org/10.1017/s0001867800014683.

Abstract:
Let (I(t))t≥0 be a semi-Markov process with state space Π and recurrent probability transition kernel P. Subject to certain mixing conditions, lim t→∞ P(I(t) = b) = Δ(b)μb / Σa∈Π Δ(a)μa, where Δ is an invariant probability measure for P and μb is the expected sojourn time in state b ∈ Π. We show that this limit is robust; that is, for each state b ∈ Π the sojourn-time distribution may change for each transition but, as long as the expected sojourn time in b is μb on average, the above limit still holds. The kernel P may also vary for each transition as long as Δ is invariant.
5

Barnsley, Michael F., and John H. Elton. "A new class of Markov processes for image encoding". Advances in Applied Probability 20, no. 1 (March 1988): 14–32. http://dx.doi.org/10.2307/1427268.

Abstract:
A new class of iterated function systems is introduced, which allows for the computation of non-compactly supported invariant measures, which may represent, for example, greytone images of infinite extent. Conditions for the existence and attractiveness of invariant measures for this new class of randomly iterated maps, which are not necessarily contractions, in metric spaces such as ℝn, are established. Estimates for moments of these measures are obtained. Special conditions are given for the existence of the invariant measure in the interesting case of affine maps on ℝ. For non-singular affine maps on ℝ, the support of the measure is shown to be an infinite interval, but Fourier transform analysis shows that the measure can be purely singular even though its distribution function is strictly increasing.
6

Barnsley, Michael F., and John H. Elton. "A new class of Markov processes for image encoding". Advances in Applied Probability 20, no. 1 (March 1988): 14–32. http://dx.doi.org/10.1017/s0001867800017924.

Abstract:
A new class of iterated function systems is introduced, which allows for the computation of non-compactly supported invariant measures, which may represent, for example, greytone images of infinite extent. Conditions for the existence and attractiveness of invariant measures for this new class of randomly iterated maps, which are not necessarily contractions, in metric spaces such as ℝn, are established. Estimates for moments of these measures are obtained. Special conditions are given for the existence of the invariant measure in the interesting case of affine maps on ℝ. For non-singular affine maps on ℝ, the support of the measure is shown to be an infinite interval, but Fourier transform analysis shows that the measure can be purely singular even though its distribution function is strictly increasing.
7

Kalpazidou, S. "On Levy's theorem concerning positiveness of transition probabilities of Markov processes: the circuit processes case". Journal of Applied Probability 30, no. 1 (March 1993): 28–39. http://dx.doi.org/10.2307/3214619.

Abstract:
We prove Lévy's theorem concerning positiveness of transition probabilities of Markov processes when the state space is countable and an invariant probability distribution exists. Our approach relies on the representation of transition probabilities in terms of the directed circuits that occur along the sample paths.
8

Kalpazidou, S. "On Levy's theorem concerning positiveness of transition probabilities of Markov processes: the circuit processes case". Journal of Applied Probability 30, no. 1 (March 1993): 28–39. http://dx.doi.org/10.1017/s0021900200043977.

Abstract:
We prove Lévy's theorem concerning positiveness of transition probabilities of Markov processes when the state space is countable and an invariant probability distribution exists. Our approach relies on the representation of transition probabilities in terms of the directed circuits that occur along the sample paths.
9

Avrachenkov, Konstantin, Alexey Piunovskiy, and Yi Zhang. "Markov Processes with Restart". Journal of Applied Probability 50, no. 4 (December 2013): 960–68. http://dx.doi.org/10.1239/jap/1389370093.

Abstract:
We consider a general homogeneous continuous-time Markov process with restarts. The process is forced to restart from a given distribution at time moments generated by an independent Poisson process. The motivation to study such processes comes from modeling human and animal mobility patterns, restart processes in communication protocols, and from application of restarting random walks in information retrieval. We provide a connection between the transition probability functions of the original Markov process and the modified process with restarts. We give closed-form expressions for the invariant probability measure of the modified process. When the process evolves on the Euclidean space, there is also a closed-form expression for the moments of the modified process. We show that the modified process is always positive Harris recurrent and exponentially ergodic with the index equal to (or greater than) the rate of restarts. Finally, we illustrate the general results by the standard and geometric Brownian motions.
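
The restart construction in this abstract is easy to check numerically in the finite-state case. The following is a minimal sketch with a made-up 3-state generator and restart distribution, not code from the paper: restarting at rate r to a distribution ν amounts to replacing the generator Q with Q + r(1ν⊤ − I), and the invariant distribution of the modified process can then be solved for directly.

```python
import numpy as np

# Hypothetical 3-state generator matrix (rows sum to zero).
Q = np.array([[-1.0, 0.6, 0.4],
              [0.3, -0.8, 0.5],
              [0.2, 0.2, -0.4]])
r = 0.5                           # restart rate of the Poisson clock
nu = np.array([1.0, 0.0, 0.0])    # restart distribution

# Modified generator: at rate r, jump to a state drawn from nu.
Q_restart = Q + r * (np.outer(np.ones(3), nu) - np.eye(3))

# Invariant distribution: solve pi @ Q_restart = 0 with sum(pi) = 1.
A = np.vstack([Q_restart.T, np.ones(3)])
b = np.concatenate([np.zeros(3), [1.0]])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)  # invariant distribution of the process with restarts
```

The same recipe works for any finite-state chain; the paper's closed-form expressions cover the general (not necessarily finite) setting.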
10

Avrachenkov, Konstantin, Alexey Piunovskiy, and Yi Zhang. "Markov Processes with Restart". Journal of Applied Probability 50, no. 4 (December 2013): 960–68. http://dx.doi.org/10.1017/s0021900200013735.

Abstract:
We consider a general homogeneous continuous-time Markov process with restarts. The process is forced to restart from a given distribution at time moments generated by an independent Poisson process. The motivation to study such processes comes from modeling human and animal mobility patterns, restart processes in communication protocols, and from application of restarting random walks in information retrieval. We provide a connection between the transition probability functions of the original Markov process and the modified process with restarts. We give closed-form expressions for the invariant probability measure of the modified process. When the process evolves on the Euclidean space, there is also a closed-form expression for the moments of the modified process. We show that the modified process is always positive Harris recurrent and exponentially ergodic with the index equal to (or greater than) the rate of restarts. Finally, we illustrate the general results by the standard and geometric Brownian motions.

Theses / dissertations on the topic "Invariant distribution of Markov processes"

1

Hahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes: invariant distribution and convergence". Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.

Abstract:
1. Simulating active and metastable systems with piecewise deterministic Markov processes (PDMPs): which dynamics should one choose to simulate metastable states efficiently? How can the non-equilibrium nature of PDMPs be exploited directly to study the modeled physical systems? 2. Modeling active systems with PDMPs: what conditions must a system meet to be modeled by a PDMP? In which cases does the system have a stationary distribution? How can dynamic quantities (e.g., transition rates) be computed in this framework? 3. Improving simulation techniques for equilibrium systems: can results obtained in the context of non-equilibrium systems be used to accelerate the simulation of equilibrium systems? How can topological information be used to adapt the dynamics in real time?
2

Casse, Jérôme. "Automates cellulaires probabilistes et processus itérés ad libitum". Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0248/document.

Abstract:
The first part of this thesis is about probabilistic cellular automata (PCA) on the line and with two neighbors. For a given PCA, we look for the set of its invariant distributions. For reasons explained in detail in the thesis, obtaining all of them is currently out of reach, and we concentrate on the invariant Markovian distributions. We first establish an algebraic theorem that gives a necessary and sufficient condition for a PCA to have one or more invariant Markovian distributions when the alphabet E is finite. We then generalize this result to the case of a Polish alphabet E, once the topological difficulties encountered have been clarified. Finally, we compute the correlation function of the 8-vertex model for some parameter values using part of the previous results. The second part of this thesis is about infinite iterations of stochastic processes. We establish the convergence of the finite-dimensional distributions of α-stable processes iterated n times, as n goes to infinity, depending on the stability parameter and the drift r, and we describe the limit distributions. In the iterated Brownian motion case, we show that the limit distributions are linked to iterated function systems.
3

陳冠全, and Koon-chuen Chen. "Invariant limiting shape distributions for some sequential rectangular models". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31238233.

4

Chen, Koon-chuen. "Invariant limiting shape distributions for some sequential rectangular models". Hong Kong: University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20998934.

5

Hammer, Matthias. "Ergodicity and regularity of invariant measure for branching Markov processes with immigration". Mainz: Universitätsbibliothek Mainz, 2012. http://d-nb.info/1029390975/34.

6

Hurth, Tobias. "Invariant densities for dynamical systems with random switching". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52274.

Abstract:
We studied invariant measures and invariant densities for dynamical systems with random switching (switching systems, in short). These switching systems can be described by a two-component Markov process whose first component is a stochastic process on a finite-dimensional smooth manifold and whose second component is a stochastic process on a finite collection of smooth vector fields that are defined on the manifold. We identified sufficient conditions for uniqueness and absolute continuity of the invariant measure associated to this Markov process. These conditions consist of a Hörmander-type hypoellipticity condition and a recurrence condition. In the case where the manifold is the real line or a subset of the real line, we studied regularity properties of the invariant densities of absolutely continuous invariant measures. We showed that invariant densities are smooth away from critical points of the vector fields. Assuming in addition that the vector fields are analytic, we derived the asymptotically dominant term for invariant densities at critical points.
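
A switching system of the kind studied in this thesis can be explored numerically. The sketch below is illustrative only, with assumed toy dynamics: two contracting vector fields on the real line, f0(x) = −(x − 1) and f1(x) = −(x + 1), switched at an assumed rate λ = 1 and integrated with Euler steps; a histogram of the trajectory approximates the invariant density, which here lives on [−1, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
lam, dt, n_steps = 1.0, 1e-2, 200_000
x, mode = 0.0, 0
samples = np.empty(n_steps)
for i in range(n_steps):
    target = 1.0 if mode == 0 else -1.0
    x += -(x - target) * dt          # Euler step of the active vector field
    if rng.random() < lam * dt:      # exponential switching clock
        mode = 1 - mode
    samples[i] = x

# A histogram of the trajectory approximates the invariant density on [-1, 1].
hist, edges = np.histogram(samples[10_000:], bins=50, density=True)
```

The thesis's regularity results concern exactly such densities: smooth away from the critical points of the vector fields (here x = ±1), with explicit asymptotics at those points.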
7

Kaijser, Thomas. "Convergence in distribution for filtering processes associated to Hidden Markov Models with densities". Linköpings universitet, Matematik och tillämpad matematik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-92590.

Abstract:
A Hidden Markov Model generates two basic stochastic processes: a Markov chain, which is hidden, and an observation sequence. The filtering process of a Hidden Markov Model is, roughly speaking, the sequence of conditional distributions of the hidden Markov chain that is obtained as new observations are received. It is well known that the filtering process itself is also a Markov chain. A classical, theoretical problem is to find conditions which imply that the distributions of the filtering process converge towards a unique limit measure. This problem goes back to a paper of D. Blackwell for the case when the Markov chain takes its values in a finite set, and to a paper of H. Kunita for the case when the state space of the Markov chain is a compact Hausdorff space. Recently, due to work by F. Kochman, J. Reeds, P. Chigansky and R. van Handel, a necessary and sufficient condition for the convergence of the distributions of the filtering process has been found for the case when the state space is finite. This condition has since been generalised to the case when the state space is denumerable. In this paper we generalise some of the previous results on convergence in distribution to the case when the Markov chain and the observation sequence of a Hidden Markov Model take their values in complete, separable metric spaces; it has, though, been necessary to assume that both the transition probability function of the Markov chain and the transition probability function that generates the observation sequence have densities.
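
For a finite-state model, the filtering process described in this abstract is a simple predict-and-condition recursion. A toy sketch (the matrices are made up, not from the paper) that maps one conditional distribution of the hidden chain to the next:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # hidden-chain transition matrix
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])      # B[s, o] = P(observation o | state s)

def filter_step(pi, obs):
    """One step of the filtering process: predict, then condition."""
    pred = pi @ P                # push the current belief through the chain
    post = pred * B[:, obs]      # weight by the observation likelihood
    return post / post.sum()    # renormalize to a probability vector

pi = np.array([0.5, 0.5])        # initial belief about the hidden state
for obs in [0, 0, 1, 1, 1]:      # a hypothetical observation sequence
    pi = filter_step(pi, obs)
```

The sequence of `pi` vectors is itself a Markov chain on the simplex; the convergence questions in the paper concern the distribution of this chain.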
8

Talwar, Gaurav. "HMM-based non-intrusive speech quality and implementation of Viterbi score distribution and hiddenness based measures to improve the performance of speech recognition". Laramie, Wyo.: University of Wyoming, 2006. http://proquest.umi.com/pqdweb?did=1288654981&sid=7&Fmt=2&clientId=18949&RQT=309&VName=PQD.

9

Green, David Anthony. "Departure processes from MAP/PH/1 queues". Title page, contents and abstract only, 1999. http://thesis.library.adelaide.edu.au/public/adt-SUA20020815.092144.

Abstract:
A MAP/PH/1 queue is a queue having a Markov arrival process (MAP) and a single server with phase-type (PH-type) distributed service time. This thesis considers the departure process of these types of queues, using matrix analytic methods, the Jordan canonical form of matrices, non-linear filtering and approximation techniques.
10

Drton, Mathias. "Maximum likelihood estimation in Gaussian AMP chain graph models and Gaussian ancestral graph models". Thesis, UW restricted, 2004. http://hdl.handle.net/1773/8952.


Books on the topic "Invariant distribution of Markov processes"

1

Hernández-Lerma, O., and J. B. Lasserre. Markov Chains and Invariant Probabilities. Basel: Birkhäuser Basel, 2003.

2

Liao, Ming. Invariant Markov Processes Under Lie Group Actions. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6.

3

Carlsson, Niclas. Markov chains on metric spaces: Invariant measures and asymptotic behaviour. Åbo: Åbo Akademi University Press, 2005.

4

Banjevic, Dragan. Recurrent relations for distribution of waiting time in Markov chain. [Toronto]: University of Toronto, Department of Statistics, 1994.

5

SpringerLink (Online service), ed. Measure-Valued Branching Markov Processes. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2011.

6

Costa, Oswaldo Luiz do Valle. Continuous Average Control of Piecewise Deterministic Markov Processes. New York, NY: Springer New York, 2013.

7

Feinberg, Eugene A. Handbook of Markov Decision Processes: Methods and Applications. Boston, MA: Springer US, 2002.

8

Rieder, Ulrich, and SpringerLink (Online service), eds. Markov Decision Processes with Applications to Finance. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2011.

9

Taira, Kazuaki. Semigroups, Boundary Value Problems and Markov Processes. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004.

10

Milch, Paul R. FORECASTER, a Markovian model to analyze the distribution of Naval Officers. Monterey, Calif: Naval Postgraduate School, 1990.


Book chapters on the topic "Invariant distribution of Markov processes"

1

Liao, Ming. "Decomposition of Markov Processes". In Invariant Markov Processes Under Lie Group Actions, 305–29. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_9.

2

Pollett, P. K. "Identifying Q-Processes with a Given Finite µ-Invariant Measure". In Markov Processes and Controlled Markov Chains, 41–55. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_3.

3

Dubins, Lester E., Ashok P. Maitra, and William D. Sudderth. "Invariant Gambling Problems and Markov Decision Processes". In International Series in Operations Research & Management Science, 409–28. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4615-0805-2_13.

4

Dudley, R. M. "A note on Lorentz-invariant Markov processes". In Selected Works of R.M. Dudley, 109–15. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/978-1-4419-5821-1_8.

5

Cocozza-Thivent, Christiane. "Hitting Time Distribution". In Markov Renewal and Piecewise Deterministic Processes, 63–77. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70447-6_4.

6

Liao, Ming. "Lévy Processes in Lie Groups". In Invariant Markov Processes Under Lie Group Actions, 35–71. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_2.

7

Liao, Ming. "Lévy Processes in Homogeneous Spaces". In Invariant Markov Processes Under Lie Group Actions, 73–101. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_3.

8

Rong, Wu. "Some Properties of Invariant Functions of Markov Processes". In Seminar on Stochastic Processes, 1988, 239–44. Boston, MA: Birkhäuser Boston, 1989. http://dx.doi.org/10.1007/978-1-4612-3698-6_16.

9

Liao, Ming. "Lévy Processes in Compact Lie Groups". In Invariant Markov Processes Under Lie Group Actions, 103–33. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_4.

10

Liao, Ming. "Inhomogeneous Lévy Processes in Lie Groups". In Invariant Markov Processes Under Lie Group Actions, 169–237. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_6.


Conference papers on the topic "Invariant distribution of Markov processes"

1

Rajendiran, Shenbageshwaran, Francisco Galdos, Carissa Anne Lee, Sidra Xu, Justin Harvell, Shireen Singh, Sean M. Wu, Elizabeth A. Lipke, and Selen Cremaschi. "Modeling hiPSC-to-Early Cardiomyocyte Differentiation Process using Microsimulation and Markov Chain Models". In Foundations of Computer-Aided Process Design, 344–50. Hamilton, Canada: PSE Press, 2024. http://dx.doi.org/10.69997/sct.152564.

Abstract:
Cardiomyocytes (CMs) are contractile heart cells that can be derived from human induced pluripotent stem cells (hiPSCs). These hiPSC-derived CMs can be used for cardiovascular disease drug testing and regeneration therapies, and they have therapeutic potential. Currently, hiPSC-CM differentiation cannot yet be controlled to yield specific heart cell subtypes consistently. Designing differentiation processes to consistently direct differentiation to specific heart cells is important to realize the full therapeutic potential of hiPSC-CMs. A model that accurately represents the dynamic changes in cell populations from hiPSCs to CMs over the differentiation timeline is a first step towards designing processes for directing differentiation. This paper introduces a microsimulation model for studying temporal changes in hiPSC-to-early-CM differentiation. The differentiation process for each cell in the microsimulation model is represented by a Markov chain model (MCM). The MCM includes cell subtypes representing key developmental stages in hiPSC differentiation to early CMs: pluripotent stem cells, early primitive streak, late primitive streak, mesodermal progenitors, early cardiac progenitors, late cardiac progenitors, and early CMs. The time taken by a cell to transit from one state to the next is assumed to be exponentially distributed. The transition probabilities of the Markov chain and the mean duration parameter of the exponential distribution were estimated using Bayesian optimization. The results predicted by the MCM agree with the data.
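
The microsimulation described in this abstract (per-cell Markov chain with exponential sojourn times) can be sketched in a few lines. The stage names, sojourn means, and advance probabilities below are made-up placeholders, not the paper's fitted values, and the sketch keeps four stages rather than the paper's seven:

```python
import random

stages = ["hiPSC", "primitive_streak", "cardiac_progenitor", "early_CM"]
mean_sojourn = [1.0, 2.0, 3.0]   # mean days spent in each non-terminal stage
p_advance = [0.9, 0.8, 0.7]      # probability of advancing to the next stage

def simulate_cell(rng):
    """Return (final_stage, total_time) for one simulated cell."""
    t, s = 0.0, 0
    while s < len(stages) - 1:
        t += rng.expovariate(1.0 / mean_sojourn[s])  # exponential sojourn
        if rng.random() < p_advance[s]:
            s += 1               # cell advances to the next stage
        else:
            break                # cell arrests at its current stage
    return stages[s], t

rng = random.Random(42)
results = [simulate_cell(rng) for _ in range(1000)]
frac_cm = sum(1 for s, _ in results if s == "early_CM") / len(results)
```

Running many cells yields population-level stage fractions over time, which is what the paper calibrates against data via Bayesian optimization.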
2

Akshay, S., Blaise Genest, and Nikhil Vyas. "Distribution-based objectives for Markov Decision Processes". In LICS '18: 33rd Annual ACM/IEEE Symposium on Logic in Computer Science. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3209108.3209185.
3

Budgett, Stephanie, Azam Asanjarani, and Heti Afimeimounga. "Visualizing Markov Processes". In Bridging the Gap: Empowering and Educating Today's Learners in Statistics. International Association for Statistical Education, 2022. http://dx.doi.org/10.52041/iase.icots11.t10f3.

Abstract:
Researchers and educators have long been aware of the misconceptions prevalent in people's probabilistic reasoning processes. Calls to reform the teaching of probability from a traditional and predominantly mathematical approach to include an emphasis on modelling using technology have been heeded by many. The purpose of this paper is to present our experiences of including an activity based on an interactive visualisation tool in the Markov processes module of a first-year probability course. Initial feedback suggests that the tool may support students' understanding of the equilibrium distribution and points to certain aspects of the tool that may be beneficial. A targeted survey, to be administered in Semester 1, 2022, aims to provide more insight.
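
The equilibrium distribution that such a visualisation tool illustrates can be computed directly for a small chain. A sketch with an assumed 3-state transition matrix (not from the paper): for an irreducible, aperiodic chain, repeatedly pushing any starting distribution through the chain converges to the equilibrium distribution π satisfying π = πP.

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])   # row-stochastic transition matrix

pi = np.array([1.0, 0.0, 0.0])    # arbitrary starting distribution
for _ in range(200):
    pi = pi @ P                   # one step of the distribution's evolution

# pi is now (numerically) the equilibrium distribution: pi = pi @ P.
```

For this particular matrix the equilibrium distribution is (0.25, 0.5, 0.25), which the iteration reaches quickly because the chain mixes fast.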
4

Fracasso, Paulo Thiago, Frank Stephenson Barnes, and Anna Helena Reali Costa. "Energy cost optimization in water distribution systems using Markov Decision Processes". In 2013 International Green Computing Conference (IGCC). IEEE, 2013. http://dx.doi.org/10.1109/igcc.2013.6604516.
5

Ismail, Muhammad Ali. "Multi-core processor based parallel implementation for finding distribution vectors in Markov processes". In 2013 18th International Conference on Digital Signal Processing (DSP). IEEE, 2013. http://dx.doi.org/10.1109/siecpc.2013.6550997.

6

Tsukamoto, Hiroki, Song Bian, and Takashi Sato. "Statistical Device Modeling with Arbitrary Model-Parameter Distribution via Markov Chain Monte Carlo". In 2021 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD). IEEE, 2021. http://dx.doi.org/10.1109/sispad54002.2021.9592558.
7

Lee, Seungchul, Lin Li, and Jun Ni. "Modeling of Degradation Processes to Obtain an Optimal Solution for Maintenance and Performance". In ASME 2009 International Manufacturing Science and Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/msec2009-84166.

Abstract:
This paper presents an approach to represent equipment degradation and various maintenance decision processes based on Markov processes. Non-exponential holding-time distributions are approximated by inserting multiple intermediate states between two degradation states, based on a phase-type distribution. Overall system availability is then computed numerically by recursively solving the balance equations of the Markov process. Preliminary simulation results show that optimal preventive maintenance intervals for a system of two repairable components can be achieved by means of the proposed method. With an adequate model representing both deterioration and maintenance processes, it is also possible to obtain different optimal maintenance policies that maximize availability or productivity for different configurations of components.
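
The intermediate-state construction in this abstract is the standard phase-type (Erlang) device: a holding time with mean m realized as k exponential phases of rate k/m keeps the mean but shrinks the variance by a factor of k, so the holding time becomes less exponential and more deterministic as k grows. A toy sketch (parameters are illustrative, not the paper's):

```python
import random

def erlang_sample(k, mean, rng):
    """Holding time built from k exponential phases with total mean `mean`."""
    rate = k / mean                      # per-phase rate
    return sum(rng.expovariate(rate) for _ in range(k))

rng = random.Random(1)
n = 20_000
one_phase = [erlang_sample(1, 5.0, rng) for _ in range(n)]   # plain exponential
ten_phase = [erlang_sample(10, 5.0, rng) for _ in range(n)]  # 10 phases inserted

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
```

Both samples have mean 5, but the ten-phase version has roughly one tenth of the variance, which is what makes the inserted intermediate states useful for approximating non-exponential holding times inside an otherwise Markovian model.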
8

Sathe, Sumedh, Chinmay Samak, Tanmay Samak, Ajinkya Joglekar, Shyam Ranganathan e Venkat N. Krovi. "Data Driven Vehicle Dynamics System Identification Using Gaussian Processes". In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2024. http://dx.doi.org/10.4271/2024-01-2022.

Texto completo da fonte
Resumo:
Modeling uncertainties pose a significant challenge in the development and deployment of model-based vehicle control systems. Most model-based automotive control systems require a well-estimated vehicle dynamics prediction model. The ability of first-principles models to represent vehicle behavior becomes limited under complex scenarios because of their rigid underlying physical assumptions. Additionally, the increasing complexity of these models, driven by ever-increasing fidelity requirements, makes both analytical solutions and control design harder to obtain. Alternatively, deterministic data-driven techniques, including but not limited to deep neural networks, polynomial regression, and Sparse Identification of Nonlinear Dynamics (SINDy), have been deployed for vehicle dynamics system identification and prediction. However, under real-world conditions that are often uncertain or time-varying, such as changing terrain or physical parameters, a single time-invariant physics-based or parametric model may not accurately represent vehicle behavior, resulting in sub-optimal controller performance. The data-driven system identification techniques mentioned above, being deterministic, cannot express these uncertainties, leading to a need for multiple models, or a distribution of models, to describe vehicle behavior. Gaussian Process Regression constitutes a cogent approach for capturing and expressing modeling uncertainties through a probability distribution. In this paper, we demonstrate Gaussian Process Regression as an able technique for modeling uncertain vehicle dynamics using a real-world vehicle dataset, acquired by performing benchmark maneuvers with a scaled vehicle observed by a motion-capture system. Using Gaussian Process Regression, we develop single-step as well as multi-step prediction models that are usable for reactive as well as predictive model-based control techniques.
ABNT, Harvard, Vancouver, APA, and other citation styles
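As a rough illustration of the single-step prediction idea described in the abstract above, here is a from-scratch Gaussian Process regression sketch in numpy. The kernel choice, toy data, and hyperparameters are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel between the rows of A and B.
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise_var=1e-4):
    # Standard GP regression posterior mean and variance.
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    v = np.linalg.solve(L, Ks.T)
    mean = Ks @ alpha
    var = np.diag(rbf_kernel(X_test, X_test) - v.T @ v)
    return mean, var

# Toy one-dimensional "dynamics": next state f(x) = sin(x), observed noisily.
rng = np.random.default_rng(0)
X = np.linspace(-3.0, 3.0, 25)[:, None]
y = np.sin(X).ravel() + 0.01 * rng.normal(size=25)

mean, var = gp_predict(X, y, np.array([[0.5]]))
```

The posterior variance is what makes the approach probabilistic: unlike a deterministic regressor, the model reports how uncertain each one-step prediction is.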
9

Velasquez, Alvaro. "Steady-State Policy Synthesis for Verifiable Control". In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/784.

Full text of the source
Abstract:
In this paper, we introduce the Steady-State Policy Synthesis (SSPS) problem which consists of finding a stochastic decision-making policy that maximizes expected rewards while satisfying a set of asymptotic behavioral specifications. These specifications are determined by the steady-state probability distribution resulting from the Markov chain induced by a given policy. Since such distributions necessitate recurrence, we propose a solution which finds policies that induce recurrent Markov chains within possibly non-recurrent Markov Decision Processes (MDPs). The SSPS problem functions as a generalization of steady-state control, which has been shown to be in PSPACE. We improve upon this result by showing that SSPS is in P via linear programming. Our results are validated using CPLEX simulations on MDPs with over 10000 states. We also prove that the deterministic variant of SSPS is NP-hard.
ABNT, Harvard, Vancouver, APA, and other citation styles
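The central object of the abstract above, the steady-state distribution of the Markov chain induced by a stochastic policy, can be sketched for a toy two-state MDP. The transition and policy matrices below are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP: P[a][s, t] is the probability of
# moving from state s to state t under action a.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.6, 0.4]],   # action 1
])
policy = np.array([[0.7, 0.3], [0.4, 0.6]])  # policy[s, a]

# Transition matrix of the Markov chain induced by the policy.
P_pi = np.einsum("sa,ast->st", policy, P)

# Stationary distribution: solve pi @ P_pi = pi subject to sum(pi) = 1.
n = P_pi.shape[0]
A = np.vstack([P_pi.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The paper's contribution is to optimize over such policies under constraints on this distribution via linear programming; the snippet only evaluates the steady state of one fixed policy.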
10

Haschka, Markus, e Volker Krebs. "A Direct Approximation of Cole-Cole-Systems for Time-Domain Analysis". In ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/detc2005-84579.

Full text of the source
Abstract:
Cole-Cole systems are important in electrochemistry for representing impedances of galvanic elements such as fuel cells. Fractional calculus must be applied for time-domain system analysis of Cole-Cole systems. This contribution addresses the representation of the fractional differential equations of Cole-Cole systems. Usually, the fractional derivative is approximated so that the fractional system can be represented by conventional differential equations of integer order. This article presents the opposite approach: a direct approximation of Cole-Cole systems by conventional linear time-invariant systems. The method is based on the distribution density function of relaxation times of first-order Debye processes; this density is an alternative representation of the transfer behavior of such a system. Several approximation methods based on an analysis of the distribution density are presented. Their feasibility is demonstrated by comparing simulated data from the approximation models to ideal data and reference values.
ABNT, Harvard, Vancouver, APA, and other citation styles
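The relaxation-time-distribution idea in the abstract above can be illustrated numerically: the classical Cole-Cole density over log relaxation times weights a sum of first-order Debye terms. The exponent, time constant, and discretization below are illustrative assumptions, not the paper's specific method:

```python
import numpy as np

# Cole-Cole response 1/(1 + (i*w*tau0)**a), approximated by a weighted sum
# of first-order Debye terms 1/(1 + i*w*tau_k). The weights follow the
# classical distribution of relaxation times over s = ln(tau/tau0):
#   g(s) = sin(pi*a) / (2*pi*(cosh(a*s) + cos(pi*a)))
a, tau0 = 0.7, 1.0
s = np.linspace(-30.0, 30.0, 4001)
g = np.sin(np.pi * a) / (2 * np.pi * (np.cosh(a * s) + np.cos(np.pi * a)))
w_k = g * (s[1] - s[0])          # quadrature weights (rectangle rule)
tau = tau0 * np.exp(s)

omega = np.logspace(-2, 2, 50)
approx = (w_k / (1.0 + 1j * np.outer(omega, tau))).sum(axis=1)
exact = 1.0 / (1.0 + (1j * omega * tau0) ** a)
```

Each Debye term is an ordinary first-order linear time-invariant system, so the weighted sum is directly usable for time-domain simulation without fractional calculus.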

Reports of organizations on the topic "Invariant distribution of Markov processes"

1

Stettner, Lukasz. On the Existence and Uniqueness of Invariant Measure for Continuous Time Markov Processes. Fort Belvoir, VA: Defense Technical Information Center, April 1986. http://dx.doi.org/10.21236/ada174758.

Full text of the source
ABNT, Harvard, Vancouver, APA, and other citation styles
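For the finite-state special case of the continuous-time question studied in this report, the invariant distribution solves pi @ Q = 0, where Q is the chain's generator. A minimal numpy sketch, with a hypothetical generator chosen for illustration:

```python
import numpy as np

# Hypothetical generator matrix Q of a 3-state continuous-time Markov chain:
# off-diagonal entries are jump rates, and each row sums to zero.
Q = np.array([
    [-2.0,  1.0,  1.0],
    [ 3.0, -4.0,  1.0],
    [ 1.0,  1.0, -2.0],
])

# The invariant distribution solves pi @ Q = 0 with sum(pi) = 1.
n = Q.shape[0]
A = np.vstack([Q.T, np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

For an irreducible finite generator this solution exists and is unique; the report treats the much harder general-state-space setting, where existence and uniqueness require additional conditions.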
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

Go to the bibliography