Ready-made bibliography on "Invariant distribution of Markov processes"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

See lists of current articles, books, dissertations, abstracts, and other scholarly sources on "Invariant distribution of Markov processes".

An "Add to bibliography" button appears next to each work in the list. Use it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever the relevant details are provided in the work's metadata.

Journal articles on "Invariant distribution of Markov processes"

1

Arnold, Barry C., and C. A. Robertson. "Autoregressive logistic processes". Journal of Applied Probability 26, no. 3 (September 1989): 524–31. http://dx.doi.org/10.2307/3214410.

Abstract:
A stochastic model is presented which yields a stationary Markov process whose invariant distribution is logistic. The model is autoregressive in character and is closely related to the autoregressive Pareto processes introduced earlier by Yeh et al. (1988). The model may be constructed to have absolutely continuous joint distributions. Analogous higher-order autoregressive and moving average processes may be constructed.
2

Arnold, Barry C., and C. A. Robertson. "Autoregressive logistic processes". Journal of Applied Probability 26, no. 03 (September 1989): 524–31. http://dx.doi.org/10.1017/s0021900200038122.

Abstract:
A stochastic model is presented which yields a stationary Markov process whose invariant distribution is logistic. The model is autoregressive in character and is closely related to the autoregressive Pareto processes introduced earlier by Yeh et al. (1988). The model may be constructed to have absolutely continuous joint distributions. Analogous higher-order autoregressive and moving average processes may be constructed.
3

McDonald, D. "An invariance principle for semi-Markov processes". Advances in Applied Probability 17, no. 1 (March 1985): 100–126. http://dx.doi.org/10.2307/1427055.

Abstract:
Let (I(t)), t ≥ 0, be a semi-Markov process with state space Π and recurrent probability transition kernel P. Subject to certain mixing conditions, a limit theorem holds in which Δ is an invariant probability measure for P and μ_b is the expected sojourn time in state b ∈ Π. We show that this limit is robust; that is, for each state b ∈ Π the sojourn-time distribution may change for each transition, but, as long as the expected sojourn time in b is μ_b on the average, the above limit still holds. The kernel P may also vary for each transition as long as Δ is invariant.
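McDonald's invariance principle concerns the long-run fraction of time a semi-Markov process spends in each state, which is proportional to Δ(b)·μ_b, where Δ is the invariant measure of the embedded transition kernel P and μ_b is the mean sojourn time in b. A minimal numerical sketch of that relationship (the kernel and sojourn means below are illustrative, not taken from the paper):

```python
# Long-run occupancy of a semi-Markov process: the fraction of time spent
# in state b is delta[b]*mu[b] / sum_a delta[a]*mu[a], where delta is
# invariant for the embedded kernel P and mu[b] is the mean sojourn time.

P = [
    [0.0, 0.5, 0.5],
    [0.3, 0.0, 0.7],
    [0.6, 0.4, 0.0],
]
mu = [1.0, 2.0, 0.5]  # mean sojourn times (hypothetical)

# Find delta by fixed-point iteration: delta = delta @ P.
delta = [1 / 3] * 3
for _ in range(10_000):
    delta = [sum(delta[a] * P[a][b] for a in range(3)) for b in range(3)]

# Time-stationary occupancy weights.
z = sum(d * m for d, m in zip(delta, mu))
occupancy = [d * m / z for d, m in zip(delta, mu)]
```

State 1 dominates the occupancy here despite a moderate Δ weight, because its mean sojourn time is the longest.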
4

McDonald, D. "An invariance principle for semi-Markov processes". Advances in Applied Probability 17, no. 01 (March 1985): 100–126. http://dx.doi.org/10.1017/s0001867800014683.

Abstract:
Let (I(t)), t ≥ 0, be a semi-Markov process with state space Π and recurrent probability transition kernel P. Subject to certain mixing conditions, a limit theorem holds in which Δ is an invariant probability measure for P and μ_b is the expected sojourn time in state b ∈ Π. We show that this limit is robust; that is, for each state b ∈ Π the sojourn-time distribution may change for each transition, but, as long as the expected sojourn time in b is μ_b on the average, the above limit still holds. The kernel P may also vary for each transition as long as Δ is invariant.
5

Barnsley, Michael F., and John H. Elton. "A new class of Markov processes for image encoding". Advances in Applied Probability 20, no. 1 (March 1988): 14–32. http://dx.doi.org/10.2307/1427268.

Abstract:
A new class of iterated function systems is introduced, which allows for the computation of non-compactly supported invariant measures, which may represent, for example, greytone images of infinite extent. Conditions for the existence and attractiveness of invariant measures for this new class of randomly iterated maps, which are not necessarily contractions, in metric spaces such as , are established. Estimates for moments of these measures are obtained. Special conditions are given for existence of the invariant measure in the interesting case of affine maps on . For non-singular affine maps on , the support of the measure is shown to be an infinite interval, but Fourier transform analysis shows that the measure can be purely singular even though its distribution function is strictly increasing.
6

Barnsley, Michael F., and John H. Elton. "A new class of Markov processes for image encoding". Advances in Applied Probability 20, no. 01 (March 1988): 14–32. http://dx.doi.org/10.1017/s0001867800017924.

Abstract:
A new class of iterated function systems is introduced, which allows for the computation of non-compactly supported invariant measures, which may represent, for example, greytone images of infinite extent. Conditions for the existence and attractiveness of invariant measures for this new class of randomly iterated maps, which are not necessarily contractions, in metric spaces such as , are established. Estimates for moments of these measures are obtained. Special conditions are given for existence of the invariant measure in the interesting case of affine maps on . For non-singular affine maps on , the support of the measure is shown to be an infinite interval, but Fourier transform analysis shows that the measure can be purely singular even though its distribution function is strictly increasing.
7

Kalpazidou, S. "On Levy's theorem concerning positiveness of transition probabilities of Markov processes: the circuit processes case". Journal of Applied Probability 30, no. 1 (March 1993): 28–39. http://dx.doi.org/10.2307/3214619.

Abstract:
We prove Lévy's theorem concerning positiveness of transition probabilities of Markov processes when the state space is countable and an invariant probability distribution exists. Our approach relies on the representation of transition probabilities in terms of the directed circuits that occur along the sample paths.
8

Kalpazidou, S. "On Levy's theorem concerning positiveness of transition probabilities of Markov processes: the circuit processes case". Journal of Applied Probability 30, no. 01 (March 1993): 28–39. http://dx.doi.org/10.1017/s0021900200043977.

Abstract:
We prove Lévy's theorem concerning positiveness of transition probabilities of Markov processes when the state space is countable and an invariant probability distribution exists. Our approach relies on the representation of transition probabilities in terms of the directed circuits that occur along the sample paths.
9

Avrachenkov, Konstantin, Alexey Piunovskiy and Yi Zhang. "Markov Processes with Restart". Journal of Applied Probability 50, no. 4 (December 2013): 960–68. http://dx.doi.org/10.1239/jap/1389370093.

Abstract:
We consider a general homogeneous continuous-time Markov process with restarts. The process is forced to restart from a given distribution at time moments generated by an independent Poisson process. The motivation to study such processes comes from modeling human and animal mobility patterns, restart processes in communication protocols, and from application of restarting random walks in information retrieval. We provide a connection between the transition probability functions of the original Markov process and the modified process with restarts. We give closed-form expressions for the invariant probability measure of the modified process. When the process evolves on the Euclidean space, there is also a closed-form expression for the moments of the modified process. We show that the modified process is always positive Harris recurrent and exponentially ergodic with the index equal to (or greater than) the rate of restarts. Finally, we illustrate the general results by the standard and geometric Brownian motions.
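A discrete-time analogue of the restart mechanism studied in this paper is a chain that, at every step, restarts from a distribution ν with probability 1 − c and otherwise moves according to its kernel P; the invariant distribution is then the unique fixed point of μ = (1 − c)ν + c·μP. A sketch with hypothetical numbers:

```python
# Markov chain with restart: with probability 1 - c the chain restarts
# from nu, otherwise it takes a step of P. The invariant distribution
# solves  mu = (1 - c) * nu + c * (mu @ P),  found here by iteration.

c = 0.85
nu = [1.0, 0.0, 0.0]  # restart distribution (hypothetical)
P = [
    [0.1, 0.9, 0.0],
    [0.0, 0.1, 0.9],
    [0.9, 0.0, 0.1],
]

mu = [1 / 3] * 3
for _ in range(1_000):
    step = [sum(mu[a] * P[a][b] for a in range(3)) for b in range(3)]
    mu = [(1 - c) * nu[b] + c * step[b] for b in range(3)]
```

The map is a contraction with factor c, so the iteration converges geometrically regardless of the starting distribution, mirroring the exponential ergodicity at the restart rate noted in the abstract.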
10

Avrachenkov, Konstantin, Alexey Piunovskiy and Yi Zhang. "Markov Processes with Restart". Journal of Applied Probability 50, no. 04 (December 2013): 960–68. http://dx.doi.org/10.1017/s0021900200013735.

Abstract:
We consider a general homogeneous continuous-time Markov process with restarts. The process is forced to restart from a given distribution at time moments generated by an independent Poisson process. The motivation to study such processes comes from modeling human and animal mobility patterns, restart processes in communication protocols, and from application of restarting random walks in information retrieval. We provide a connection between the transition probability functions of the original Markov process and the modified process with restarts. We give closed-form expressions for the invariant probability measure of the modified process. When the process evolves on the Euclidean space, there is also a closed-form expression for the moments of the modified process. We show that the modified process is always positive Harris recurrent and exponentially ergodic with the index equal to (or greater than) the rate of restarts. Finally, we illustrate the general results by the standard and geometric Brownian motions.
More sources

Doctoral dissertations on "Invariant distribution of Markov processes"

1

Hahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes : invariant distribution and convergence". Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.

Abstract:
This thesis investigates the long-time behavior of run-and-tumble particles (RTPs), a model for bacterial motion and interaction in out-of-equilibrium statistical mechanics, using piecewise deterministic Markov processes (PDMPs). The motivation is to improve the particle-level understanding of active phenomena, in particular motility-induced phase separation (MIPS). The invariant measure for two jamming RTPs on a 1D torus is determined for general tumbling and jamming mechanisms, revealing two out-of-equilibrium universality classes. Furthermore, the dependence of the mixing time on model parameters is established using coupling techniques, and the continuous PDMP model is rigorously linked to a known on-lattice model. In the case of two jamming RTPs on the real line interacting through an attractive potential, the invariant measure displays qualitative differences based on model parameters, reminiscent of shape transitions and universality classes. Sharp quantitative convergence bounds are again obtained through coupling techniques. Additionally, the explicit invariant measure of three jamming RTPs on the 1D torus is computed. Finally, hypocoercive convergence results are extended to RTPs, achieving sharp \( L^2 \) convergence rates in a general setting that also covers kinetic Langevin dynamics and sampling PDMPs.
2

Casse, Jérôme. "Automates cellulaires probabilistes et processus itérés ad libitum". Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0248/document.

Abstract:
The first part of this thesis is about probabilistic cellular automata (PCA) on the line and with two neighbors. For a given PCA, we look for the set of its invariant distributions. For reasons explained in detail in this thesis, it is currently infeasible to obtain all of them, and we concentrate on the invariant Markovian distributions. We first establish an algebraic theorem that gives a necessary and sufficient condition for a PCA to have one or more invariant Markovian distributions when the alphabet E is finite. Then we generalize this result to the case of a Polish alphabet E, once the topological difficulties encountered have been clarified. Finally, we calculate the correlation function of the 8-vertex model for some parameter values using part of the previous results. The second part of this thesis is about infinite iterations of stochastic processes. We establish the convergence of the finite-dimensional distributions of α-stable processes iterated n times, as n goes to infinity, depending on the stability parameter and the drift r. Then we describe the limit distributions. In the iterated Brownian motion case, we show that the limit distributions are linked with iterated function systems.
3

陳冠全 and Koon-chuen Chen. "Invariant limiting shape distributions for some sequential rectangular models". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31238233.

4

Chen, Koon-chuen. "Invariant limiting shape distributions for some sequential rectangular models /". Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20998934.

5

Hammer, Matthias [Verfasser]. "Ergodicity and regularity of invariant measure for branching Markov processes with immigration / Matthias Hammer". Mainz : Universitätsbibliothek Mainz, 2012. http://d-nb.info/1029390975/34.

6

Hurth, Tobias. "Invariant densities for dynamical systems with random switching". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52274.

Abstract:
We studied invariant measures and invariant densities for dynamical systems with random switching (switching systems, in short). These switching systems can be described by a two-component Markov process whose first component is a stochastic process on a finite-dimensional smooth manifold and whose second component is a stochastic process on a finite collection of smooth vector fields that are defined on the manifold. We identified sufficient conditions for uniqueness and absolute continuity of the invariant measure associated to this Markov process. These conditions consist of a Hoermander-type hypoellipticity condition and a recurrence condition. In the case where the manifold is the real line or a subset of the real line, we studied regularity properties of the invariant densities of absolutely continuous invariant measures. We showed that invariant densities are smooth away from critical points of the vector fields. Assuming in addition that the vector fields are analytic, we derived the asymptotically dominant term for invariant densities at critical points.
7

Kaijser, Thomas. "Convergence in distribution for filtering processes associated to Hidden Markov Models with densities". Linköpings universitet, Matematik och tillämpad matematik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-92590.

Abstract:
A Hidden Markov Model generates two basic stochastic processes: a Markov chain, which is hidden, and an observation sequence. The filtering process of a Hidden Markov Model is, roughly speaking, the sequence of conditional distributions of the hidden Markov chain that is obtained as new observations are received. It is well known that the filtering process itself is also a Markov chain. A classical theoretical problem is to find conditions which imply that the distributions of the filtering process converge towards a unique limit measure. This problem goes back to a paper of D. Blackwell for the case when the Markov chain takes its values in a finite set, and to a paper of H. Kunita for the case when the state space of the Markov chain is a compact Hausdorff space. Recently, due to work by F. Kochman, J. Reeds, P. Chigansky and R. van Handel, a necessary and sufficient condition for the convergence of the distributions of the filtering process has been found for the case when the state space is finite. This condition has since been generalised to the case when the state space is denumerable. In this paper we generalise some of the previous results on convergence in distribution to the case when the Markov chain and the observation sequence of a Hidden Markov Model take their values in complete, separable metric spaces; it has, though, been necessary to assume that both the transition probability function of the Markov chain and the transition probability function that generates the observation sequence have densities.
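The filtering process described in this abstract updates the conditional distribution of the hidden state by a predict-then-reweight recursion. A minimal two-state sketch (transition matrix, emission probabilities, and observation labels are all hypothetical):

```python
# One step of the HMM filtering recursion: propagate the current
# conditional distribution through the hidden-chain transitions,
# reweight by the observation likelihoods, and renormalise.

P = [[0.9, 0.1],
     [0.2, 0.8]]  # hidden-chain transition matrix (hypothetical)
emit = {"wet": [0.8, 0.3], "dry": [0.2, 0.7]}  # P(obs | state)

def filter_step(pi, obs):
    pred = [pi[0] * P[0][j] + pi[1] * P[1][j] for j in range(2)]
    w = [pred[j] * emit[obs][j] for j in range(2)]
    z = sum(w)  # likelihood of the observation; normalising constant
    return [x / z for x in w]

pi = [0.5, 0.5]
for obs in ["wet", "wet", "dry"]:
    pi = filter_step(pi, obs)
```

The sequence of vectors produced by `filter_step` is itself a Markov chain on the simplex, which is exactly the object whose convergence in distribution the thesis studies.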
8

Talwar, Gaurav. "HMM-based non-intrusive speech quality and implementation of Viterbi score distribution and hiddenness based measures to improve the performance of speech recognition". Laramie, Wyo. : University of Wyoming, 2006. http://proquest.umi.com/pqdweb?did=1288654981&sid=7&Fmt=2&clientId=18949&RQT=309&VName=PQD.

9

Green, David Anthony. "Departure processes from MAP/PH/1 queues". Title page, contents and abstract only, 1999. http://thesis.library.adelaide.edu.au/public/adt-SUA20020815.092144.

Abstract:
Bibliography: leaves 145–150. Electronic publication; full text available in PDF format; abstract in HTML format. A MAP/PH/1 queue is a queue having a Markov arrival process (MAP) and a single server with phase-type (PH-type) distributed service time. This thesis considers the departure process of these types of queues, using matrix analytic methods, the Jordan canonical form of matrices, non-linear filtering and approximation techniques. Electronic reproduction. [Australia]: Australian Digital Theses Program, 2001.
10

Drton, Mathias. "Maximum likelihood estimation in Gaussian AMP chain graph models and Gaussian ancestral graph models /". Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/8952.

More sources

Books on "Invariant distribution of Markov processes"

1

Hernández-Lerma, O. Markov Chains and Invariant Probabilities. Basel: Birkhäuser Basel, 2003.

2

Liao, Ming. Invariant Markov Processes Under Lie Group Actions. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6.

3

Carlsson, Niclas. Markov chains on metric spaces: Invariant measures and asymptotic behaviour. Åbo: Åbo Akademi University Press, 2005.

4

Banjevic, Dragan. Recurrent relations for distribution of waiting time in Markov chain. [Toronto]: University of Toronto, Department of Statistics, 1994.

5

SpringerLink (Online service), ed. Measure-Valued Branching Markov Processes. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2011.

6

Oswaldo Luiz do Valle Costa. Continuous Average Control of Piecewise Deterministic Markov Processes. New York, NY: Springer New York, 2013.

7

Feinberg, Eugene A. Handbook of Markov Decision Processes: Methods and Applications. Boston, MA: Springer US, 2002.

8

Rieder, Ulrich, and SpringerLink (Online service), eds. Markov Decision Processes with Applications to Finance. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2011.

9

Taira, Kazuaki. Semigroups, Boundary Value Problems and Markov Processes. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004.

10

Milch, Paul R. FORECASTER, a Markovian model to analyze the distribution of Naval Officers. Monterey, Calif: Naval Postgraduate School, 1990.

More sources

Book chapters on "Invariant distribution of Markov processes"

1

Liao, Ming. "Decomposition of Markov Processes". In Invariant Markov Processes Under Lie Group Actions, 305–29. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_9.

2

Pollett, P. K. "Identifying Q-Processes with a Given Finite µ-Invariant Measure". In Markov Processes and Controlled Markov Chains, 41–55. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_3.

3

Dubins, Lester E., Ashok P. Maitra and William D. Sudderth. "Invariant Gambling Problems and Markov Decision Processes". In International Series in Operations Research & Management Science, 409–28. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4615-0805-2_13.

4

Dudley, R. M. "A note on Lorentz-invariant Markov processes". In Selected Works of R.M. Dudley, 109–15. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/978-1-4419-5821-1_8.

5

Cocozza-Thivent, Christiane. "Hitting Time Distribution". In Markov Renewal and Piecewise Deterministic Processes, 63–77. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70447-6_4.

6

Liao, Ming. "Lévy Processes in Lie Groups". In Invariant Markov Processes Under Lie Group Actions, 35–71. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_2.

7

Liao, Ming. "Lévy Processes in Homogeneous Spaces". In Invariant Markov Processes Under Lie Group Actions, 73–101. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_3.

8

Rong, Wu. "Some Properties of Invariant Functions of Markov Processes". In Seminar on Stochastic Processes, 1988, 239–44. Boston, MA: Birkhäuser Boston, 1989. http://dx.doi.org/10.1007/978-1-4612-3698-6_16.

9

Liao, Ming. "Lévy Processes in Compact Lie Groups". In Invariant Markov Processes Under Lie Group Actions, 103–33. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_4.

10

Liao, Ming. "Inhomogeneous Lévy Processes in Lie Groups". In Invariant Markov Processes Under Lie Group Actions, 169–237. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92324-6_6.


Conference abstracts on "Invariant distribution of Markov processes"

1

Rajendiran, Shenbageshwaran, Francisco Galdos, Carissa Anne Lee, Sidra Xu, Justin Harvell, Shireen Singh, Sean M. Wu, Elizabeth A. Lipke and Selen Cremaschi. "Modeling hiPSC-to-Early Cardiomyocyte Differentiation Process using Microsimulation and Markov Chain Models". In Foundations of Computer-Aided Process Design, 344–50. Hamilton, Canada: PSE Press, 2024. http://dx.doi.org/10.69997/sct.152564.

Abstract:
Cardiomyocytes (CMs) are the contractile heart cells that can be derived from human induced pluripotent stem cells (hiPSCs). These hiPSC-derived CMs can be used for cardiovascular disease drug testing and regeneration therapies, and they have therapeutic potential. Currently, hiPSC-CM differentiation cannot yet be controlled to yield specific heart cell subtypes consistently. Designing differentiation processes to consistently direct differentiation to specific heart cells is important to realize the full therapeutic potential of hiPSC-CMs. A model that accurately represents the dynamic changes in cell populations from hiPSCs to CMs over the differentiation timeline is a first step towards designing processes for directing differentiation. This paper introduces a microsimulation model for studying temporal changes in hiPSC-to-early-CM differentiation. The differentiation process for each cell in the microsimulation model is represented by a Markov chain model (MCM). The MCM includes cell subtypes representing key developmental stages in hiPSC differentiation to early CMs: pluripotent stem cells, early primitive streak, late primitive streak, mesodermal progenitors, early cardiac progenitors, late cardiac progenitors, and early CMs. The time taken by a cell to transit from one state to the next is assumed to be exponentially distributed. The transition probabilities of the Markov chain and the mean duration parameter of the exponential distribution were estimated using Bayesian optimization. The results predicted by the MCM agree with the data.
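The microsimulation described in this abstract treats each cell as a Markov chain with exponentially distributed holding times in successive developmental stages. A stripped-down sketch of one cell traversing a purely linear stage chain (stage names and mean durations are illustrative, not the paper's fitted parameters; branching into alternative fates is omitted):

```python
import random

# One cell per simulation run: it holds in each stage for an
# exponentially distributed time, then advances to the next stage.
stages = ["hiPSC", "primitive streak", "mesoderm",
          "cardiac progenitor", "early CM"]
mean_hold = [1.0, 0.5, 0.8, 0.7]  # hypothetical mean days per non-final stage

def differentiation_time(rng):
    """Total time for one simulated cell to reach the final stage."""
    return sum(rng.expovariate(1.0 / m) for m in mean_hold)

rng = random.Random(0)
times = [differentiation_time(rng) for _ in range(4000)]
avg = sum(times) / len(times)  # should be near 1.0 + 0.5 + 0.8 + 0.7 = 3.0
```

Averaging over many simulated cells recovers the population-level stage dynamics that the microsimulation model tracks over the differentiation timeline.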
2

Akshay, S., Blaise Genest and Nikhil Vyas. "Distribution-based objectives for Markov Decision Processes". In LICS '18: 33rd Annual ACM/IEEE Symposium on Logic in Computer Science. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3209108.3209185.

3

Budgett, Stephanie, Azam Asanjarani and Heti Afimeimounga. "Visualizing Markov Processes". In Bridging the Gap: Empowering and Educating Today’s Learners in Statistics. International Association for Statistical Education, 2022. http://dx.doi.org/10.52041/iase.icots11.t10f3.

Abstract:
Researchers and educators have long been aware of the misconceptions prevalent in people’s probabilistic reasoning processes. Calls to reform the teaching of probability from a traditional and predominantly mathematical approach to include an emphasis on modelling using technology have been heeded by many. The purpose of this paper is to present our experiences of including an activity based on an interactive visualisation tool in the Markov processes module of a first-year probability course. Initial feedback suggests that the tool may support students’ understanding of the equilibrium distribution and points to certain aspects of the tool that may be beneficial. A targeted survey, to be administered in Semester 1, 2022, aims to provide more insight.
4

Fracasso, Paulo Thiago, Frank Stephenson Barnes and Anna Helena Reali Costa. "Energy cost optimization in water distribution systems using Markov Decision Processes". In 2013 International Green Computing Conference (IGCC). IEEE, 2013. http://dx.doi.org/10.1109/igcc.2013.6604516.

5

Ismail, Muhammad Ali. "Multi-core processor based parallel implementation for finding distribution vectors in Markov processes". In 2013 18th International Conference on Digital Signal Processing (DSP). IEEE, 2013. http://dx.doi.org/10.1109/siecpc.2013.6550997.

6

Tsukamoto, Hiroki, Song Bian and Takashi Sato. "Statistical Device Modeling with Arbitrary Model-Parameter Distribution via Markov Chain Monte Carlo". In 2021 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD). IEEE, 2021. http://dx.doi.org/10.1109/sispad54002.2021.9592558.

7

Lee, Seungchul, Lin Li and Jun Ni. "Modeling of Degradation Processes to Obtain an Optimal Solution for Maintenance and Performance". In ASME 2009 International Manufacturing Science and Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/msec2009-84166.

Abstract:
This paper presents an approach to represent equipment degradation and various maintenance decision processes based on Markov processes. Non-exponential holding-time distributions are approximated by inserting multiple intermediate states between two different degradation states, based on a phase-type distribution. Overall system availability is then numerically calculated by recursively solving the balance equations of the Markov process. Preliminary simulation results show that optimal preventive maintenance intervals for a system of two repairable components can be achieved by means of the proposed method. By having an adequate model representing both deterioration and maintenance processes, it is also possible to obtain different optimal maintenance policies to maximize availability or productivity for different configurations of components.
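The phase-type approximation mentioned in this abstract replaces a non-exponential holding time by a chain of k exponential stages (an Erlang fit): the mean is preserved while the variance shrinks to mean²/k, so stacking stages approximates increasingly deterministic sojourns. A quick simulation check of that effect (parameters illustrative):

```python
import random

def erlang_sample(k, mean_total, rng):
    """Phase-type (Erlang-k) holding time: k exponential stages
    traversed in sequence, each with mean mean_total / k."""
    return sum(rng.expovariate(k / mean_total) for _ in range(k))

rng = random.Random(1)
n = 5000
draws = [erlang_sample(4, 2.0, rng) for _ in range(n)]

mean = sum(draws) / n
var = sum((x - mean) ** 2 for x in draws) / n
# Erlang-4 with total mean 2.0 has variance 2.0**2 / 4 = 1.0, far less
# dispersed than a single exponential with the same mean (variance 4.0).
```

Inserting these intermediate exponential stages keeps the overall model Markovian, which is what lets the paper solve balance equations for availability despite non-exponential degradation times.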
8

Sathe, Sumedh, Chinmay Samak, Tanmay Samak, Ajinkya Joglekar, Shyam Ranganathan and Venkat N. Krovi. "Data Driven Vehicle Dynamics System Identification Using Gaussian Processes". In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2024. http://dx.doi.org/10.4271/2024-01-2022.

Abstract:
Modeling uncertainties pose a significant challenge in the development and deployment of model-based vehicle control systems. Most model-based automotive control systems require a well-estimated vehicle dynamics prediction model. The ability of first-principles models to represent vehicle behavior becomes limited under complex scenarios due to their underlying rigid physical assumptions. Additionally, the increasing complexity of these models to meet ever-increasing fidelity requirements makes both analytical solutions and control design harder to obtain. Alternatively, deterministic data-driven techniques, including but not limited to deep neural networks, polynomial regression, and Sparse Identification of Nonlinear Dynamics (SINDy), have been deployed for vehicle dynamics system identification and prediction. However, under real-world conditions that are often uncertain or time-varying, such as changing terrain and/or physical conditions, a single time-invariant physics-based or parametric model may not accurately represent vehicle behavior, resulting in sub-optimal controller performance. The aforementioned data-driven system identification techniques, being deterministic, cannot express these uncertainties, leading to a need for multiple models, or a distribution of models, to describe vehicle behavior. Gaussian Process Regression constitutes a cogent approach for capturing and expressing modeling uncertainties through a probability distribution. In this paper, we demonstrate Gaussian Process Regression as an able technique for modeling uncertain vehicle dynamics using a real-world vehicle dataset, acquired by performing benchmark maneuvers with a scaled vehicle observed by a motion-capture system. Using Gaussian Process Regression, we develop single-step as well as multi-step prediction models that are usable for reactive as well as predictive model-based control techniques.
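Gaussian Process Regression, the technique this abstract builds on, can be sketched generically in a few lines of numpy. This is a toy single-step regression on synthetic one-dimensional data; the kernel, hyperparameters, and sine "dynamics" are assumptions for illustration, not the paper's vehicle model:

```python
import numpy as np

def rbf(X1, X2, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between two input sets."""
    d = X1[:, None, :] - X2[None, :, :]
    return var * np.exp(-0.5 * np.sum(d**2, axis=-1) / length**2)

def gp_predict(X, y, Xs, noise=1e-2):
    """GP posterior mean and variance at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha                        # posterior mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)  # posterior variance
    return mean, var

# Toy "single-step dynamics" data: next state = f(state) + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(30)
Xs = np.array([[0.0], [1.0]])
mean, var = gp_predict(X, y, Xs)
```

The posterior variance is what distinguishes GPs from the deterministic techniques mentioned in the abstract: it quantifies model uncertainty at each test input rather than returning a single point prediction.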
9

Velasquez, Alvaro. "Steady-State Policy Synthesis for Verifiable Control". In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/784.

Abstract:
In this paper, we introduce the Steady-State Policy Synthesis (SSPS) problem which consists of finding a stochastic decision-making policy that maximizes expected rewards while satisfying a set of asymptotic behavioral specifications. These specifications are determined by the steady-state probability distribution resulting from the Markov chain induced by a given policy. Since such distributions necessitate recurrence, we propose a solution which finds policies that induce recurrent Markov chains within possibly non-recurrent Markov Decision Processes (MDPs). The SSPS problem functions as a generalization of steady-state control, which has been shown to be in PSPACE. We improve upon this result by showing that SSPS is in P via linear programming. Our results are validated using CPLEX simulations on MDPs with over 10000 states. We also prove that the deterministic variant of SSPS is NP-hard.
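The linear-programming formulation behind this result, with occupation-measure (state-action frequency) variables subject to balance constraints, can be sketched with scipy on a toy MDP. The transition matrices, rewards, and the 30% steady-state constraint below are illustrative assumptions, not instances from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Small 2-state, 2-action MDP: transition tensor P[a, s, s'] and rewards r[s, a].
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])
nS, nA = 2, 2

# Variables x[s, a]: long-run state-action frequencies.
# Balance: for each s', sum_a x[s', a] = sum_{s, a} x[s, a] * P[a, s, s'].
A_eq = np.zeros((nS + 1, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = P[a, s, sp] - (1.0 if s == sp else 0.0)
A_eq[nS, :] = 1.0                        # frequencies sum to one
b_eq = np.zeros(nS + 1)
b_eq[nS] = 1.0

# Example behavioral specification: visit state 0 at least 30% of the time.
A_ub = np.zeros((1, nS * nA))
A_ub[0, 0:nA] = -1.0
b_ub = np.array([-0.3])

res = linprog(c=-r.flatten(), A_eq=A_eq, b_eq=b_eq,
              A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (nS * nA))
x = res.x.reshape(nS, nA)
steady = x.sum(axis=1)                   # induced steady-state distribution
```

The optimal x encodes a stochastic policy via pi(a|s) = x[s, a] / sum_a x[s, a] wherever the state frequency is positive, which is the standard reading of occupation-measure LPs.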
10

Haschka, Markus, and Volker Krebs. "A Direct Approximation of Cole-Cole-Systems for Time-Domain Analysis". In ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/detc2005-84579.

Abstract:
Cole-Cole systems are important in electrochemistry for representing impedances of galvanic elements such as fuel cells. Fractional calculus must be applied for time-domain system analysis of Cole-Cole systems. This contribution addresses the representation of the fractional differential equations of Cole-Cole systems. Usually, the fractional derivative is approximated so that the fractional system can be represented by conventional differential equations of integer order. This article presents the opposite approach, which directly approximates Cole-Cole systems by conventional linear time-invariant systems. The method is based on the distribution density function of the relaxation times of first-order Debye processes; this distribution density is an alternative representation of the transfer behavior of such a system. Several approximation methods based on an analysis of the distribution density are presented. Their feasibility is demonstrated by comparing simulated data from the approximation models with ideal data and reference values, respectively.
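The core idea of replacing a Cole-Cole (fractional-order) element by a weighted sum of first-order Debye relaxations can be illustrated numerically. Rather than using the analytic relaxation-time density discussed in the paper, this sketch simply fits the Debye weights by least squares on the frequency response; R, tau0, alpha, and the frequency/time-constant grids are illustrative assumptions:

```python
import numpy as np

def cole_cole(omega, R=1.0, tau0=1.0, alpha=0.8):
    """Cole-Cole impedance Z(w) = R / (1 + (j*w*tau0)**alpha)."""
    return R / (1.0 + (1j * omega * tau0) ** alpha)

# Approximate Z by a weighted sum of first-order Debye terms:
# Z(w) ~= sum_k w_k / (1 + j*w*tau_k), with log-spaced time constants
# tau_k and weights obtained from a least-squares fit.
omega = np.logspace(-3, 3, 200)
taus = np.logspace(-3, 3, 15)
A = 1.0 / (1.0 + 1j * np.outer(omega, taus))   # Debye basis, one column per tau_k
Z = cole_cole(omega)
M = np.vstack([A.real, A.imag])                # fit real and imaginary parts jointly
b = np.concatenate([Z.real, Z.imag])
w, *_ = np.linalg.lstsq(M, b, rcond=None)
Z_fit = A @ w
err = np.max(np.abs(Z_fit - Z))                # worst-case approximation error
```

Fifteen log-spaced time constants already reproduce the fractional response closely over six decades of frequency, which is the practical appeal of such integer-order approximations.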

Organizational reports on the topic "Invariant distribution of Markov processes"

1

Stettner, Lukasz. On the Existence and Uniqueness of Invariant Measure for Continuous Time Markov Processes. Fort Belvoir, VA: Defense Technical Information Center, April 1986. http://dx.doi.org/10.21236/ada174758.
