Academic literature on the topic 'Branching Markov chains'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Branching Markov chains.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Branching Markov chains"

1

Müller, Sebastian. "Recurrence for branching Markov chains." Electronic Communications in Probability 13 (2008): 576–605. http://dx.doi.org/10.1214/ecp.v13-1424.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Baier, Christel, Joost-Pieter Katoen, Holger Hermanns, and Verena Wolf. "Comparative branching-time semantics for Markov chains." Information and Computation 200, no. 2 (August 2005): 149–214. http://dx.doi.org/10.1016/j.ic.2005.03.001.

Full text
3

Schinazi, Rinaldo. "On multiple phase transitions for branching Markov chains." Journal of Statistical Physics 71, no. 3-4 (May 1993): 507–11. http://dx.doi.org/10.1007/bf01058434.

Full text
4

Athreya, Krishna B., and Hye-Jeong Kang. "Some limit theorems for positive recurrent branching Markov chains: I." Advances in Applied Probability 30, no. 3 (September 1998): 693–710. http://dx.doi.org/10.1239/aap/1035228124.

Full text
Abstract:
In this paper we consider a Galton-Watson process whose particles move according to a Markov chain with discrete state space. The Markov chain is assumed to be positive recurrent. We prove a law of large numbers for the empirical position distribution and also discuss the large deviation aspects of this convergence.
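The model described in this abstract can be sketched in a few lines: a Galton-Watson process whose particles each move one step of a positive recurrent Markov chain per generation, with the empirical position distribution approaching the chain's stationary distribution. All parameters below (the two-state kernel `P`, the 1-or-2 offspring law) are hypothetical, chosen only so the example runs:

```python
import random

# Hypothetical parameters: a two-state positive recurrent kernel P and a
# 1-or-2 offspring law (so the population never dies out and grows).
P = {0: [(0, 0.5), (1, 0.5)], 1: [(0, 0.2), (1, 0.8)]}

def step(positions, rng):
    """One generation: each particle branches, and every child then
    makes one move of the Markov chain from its parent's position."""
    nxt = []
    for x in positions:
        states, probs = zip(*P[x])
        for _ in range(rng.choice([1, 2])):  # number of children
            nxt.append(rng.choices(states, weights=probs)[0])
    return nxt

rng = random.Random(0)
pop = [0]
for _ in range(12):
    pop = step(pop, rng)

# Law of large numbers: the empirical position distribution should be
# close to the stationary distribution pi = (2/7, 5/7) of P.
freq1 = sum(pop) / len(pop)
print(len(pop), round(freq1, 2))
```

With this kernel, the stationary distribution solving pi P = pi is pi = (2/7, 5/7) ≈ (0.29, 0.71), so the empirical frequency of state 1 hovers around 0.71 as the population grows.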
5

Athreya, Krishna B., and Hye-Jeong Kang. "Some limit theorems for positive recurrent branching Markov chains: I." Advances in Applied Probability 30, no. 3 (September 1998): 693–710. http://dx.doi.org/10.1017/s0001867800008557.

Full text
Abstract:
In this paper we consider a Galton-Watson process whose particles move according to a Markov chain with discrete state space. The Markov chain is assumed to be positive recurrent. We prove a law of large numbers for the empirical position distribution and also discuss the large deviation aspects of this convergence.
6

Liu, Yuanyuan, Hanjun Zhang, and Yiqiang Zhao. "Computable Strongly Ergodic Rates of Convergence for Continuous-Time Markov Chains." ANZIAM Journal 49, no. 4 (April 2008): 463–78. http://dx.doi.org/10.1017/s1446181108000114.

Full text
Abstract:
In this paper, we investigate computable lower bounds for the best strongly ergodic rate of convergence of the transient probability distribution to the stationary distribution for stochastically monotone continuous-time Markov chains and reversible continuous-time Markov chains, using a drift function and the expectation of the first hitting time on some state. We apply these results to birth–death processes, branching processes and population processes.
7

Bacci, Giorgio, Giovanni Bacci, Kim G. Larsen, and Radu Mardare. "Converging from branching to linear metrics on Markov chains." Mathematical Structures in Computer Science 29, no. 1 (July 25, 2017): 3–37. http://dx.doi.org/10.1017/s0960129517000160.

Full text
Abstract:
We study two well-known linear-time metrics on Markov chains (MCs), namely, the strong and stutter trace distances. Our interest in these metrics is motivated by their relation to the probabilistic linear temporal logic (LTL) model-checking problem: we prove that they correspond to the maximal differences in the probability of satisfying the same LTL and LTL−X (LTL without the next operator) formulas, respectively. The threshold problem for these distances (whether their value exceeds a given threshold) is NP-hard and not known to be decidable. Nevertheless, we provide an approximation schema where each lower and upper approximant is computable in polynomial time in the size of the MC. The upper approximants are bisimilarity-like pseudometrics (hence, branching-time distances) that converge point-wise to the linear-time metrics. This convergence is interesting in itself, because it reveals a non-trivial relation between branching and linear-time metric-based semantics that does not hold in equivalence-based semantics.
8

Huang, Ying, and Arthur F. Veinott. "Markov Branching Decision Chains with Interest-Rate-Dependent Rewards." Probability in the Engineering and Informational Sciences 9, no. 1 (January 1995): 99–121. http://dx.doi.org/10.1017/s0269964800003715.

Full text
Abstract:
Finite-state-and-action Markov branching decision chains are studied with bounded endogenous expected population sizes and interest-rate-dependent one-period rewards that are analytic in the interest rate at zero. The existence of a stationary strong-maximum-present-value policy is established. Miller and Veinott's [1969] strong policy-improvement method is generalized to find in finite time a stationary n-present-value optimal policy and, when the one-period rewards are rational in the interest rate, a stationary strong-maximum-present-value policy. This extends previous studies of Blackwell [1962], Miller and Veinott [1969], Veinott [1974], and Rothblum [1974, 1975], in which the one-period rewards are independent of the interest rate, and Denardo [1971] in which semi-Markov decision chains with small interest rates are studied. The problem of finding a stationary n-present-value optimal policy is also formulated as a staircase linear program in which the objective function and right-hand sides, but not the constraint matrix, depend on the interest rate, and solutions for all small enough positive interest rates are sought. The optimal solutions of the primal and dual are polynomials in the reciprocal of the interest rate. A constructive rule is given for finding a stationary n-present-value optimal policy from an optimal solution of the asymptotic linear program. This generalizes the linear programming approaches for finding maximum-reward-rate and maximum-present-value policies for Markov decision chains studied by Manne [1960], d'Epenoux [1960, 1963], Balinski [1961], Derman [1962], Denardo and Fox [1968], Denardo [1970], Derman and Veinott [1972], Veinott [1973], and Hordijk and Kallenberg [1979, 1984].
9

Hu, Dihe. "Infinitely dimensional control Markov branching chains in random environments." Science in China Series A 49, no. 1 (January 2006): 27–53. http://dx.doi.org/10.1007/s11425-005-0024-2.

Full text
10

Cox, J. T. "On the ergodic theory of critical branching Markov chains." Stochastic Processes and their Applications 50, no. 1 (March 1994): 1–20. http://dx.doi.org/10.1016/0304-4149(94)90144-9.

Full text

Dissertations / Theses on the topic "Branching Markov chains"

1

Nordvall Lagerås, Andreas. "Markov Chains, Renewal, Branching and Coalescent Processes: Four Topics in Probability Theory." Doctoral thesis, Stockholm University, Department of Mathematics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-6637.

Full text
Abstract:

This thesis consists of four papers.

In paper 1, we prove central limit theorems for Markov chains under (local) contraction conditions. As a corollary we obtain a central limit theorem for Markov chains associated with iterated function systems with contractive maps and place-dependent Dini-continuous probabilities.

In paper 2, properties of inverse subordinators are investigated, in particular similarities with renewal processes. The main tool is a theorem on processes that are both renewal and Cox processes.

In paper 3, distributional properties of supercritical and especially immortal branching processes are derived. The marginal distributions of immortal branching processes are found to be compound geometric.

In paper 4, a description of a dynamic population model is presented, such that samples from the population have genealogies as given by a Lambda-coalescent with mutations. Depending on whether the sample is grouped according to litters or families, the sampling distribution is either regenerative or non-regenerative.

2

Nordvall Lagerås, Andreas. "Markov chains, renewal, branching and coalescent processes: four topics in probability theory." Stockholm: Department of Mathematics, Stockholm University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-6637.

Full text
3

Adam, Etienne. "Persistance et vitesse d'extinction pour des modèles de populations stochastiques multitypes en temps discret." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX019/document.

Full text
Abstract:
This thesis is devoted to the mathematical study of stochastic models of structured population dynamics. In the first chapter, we introduce a discrete-time stochastic process taking into account the various possible interactions between individuals, such as competition, migration, mutation, and predation. We first prove a "law of large numbers" result: if the initial population tends to infinity, then, on any finite time interval, the stochastic process converges in probability to an underlying deterministic process. We also quantify the discrepancy between these two processes by a "central limit theorem" result. Finally, we give a persistence/extinction criterion that determines the long-time behavior of the process. This criterion highlights a critical case, studied in more detail in the following chapters. In the second chapter, we give a criterion for possible unlimited growth for processes in the critical case mentioned above. We illustrate this criterion with the example of a metapopulation made of sink patches (patches whose population goes extinct if migration is ignored), for which we show that survival of the population is possible. In the third chapter, we focus on the behavior of the critical process when it grows to infinity. We prove convergence in distribution of the rescaled process to a gamma distribution and, in a more general framework where time is also rescaled, convergence in distribution of a function of the process to the solution of a stochastic differential equation known as a squared Bessel process. In the fourth and last chapter, we consider the case where the critical process does not tend to infinity and study the hitting times of certain compact sets, giving asymptotic upper and lower bounds on their tails. When the process goes extinct, these results yield bounds on the tail of the extinction time. When the process is a Markov chain, we deduce a criterion for null or positive recurrence, and in the latter case we obtain a subgeometric rate of convergence of the transition kernel of the chain to its invariant probability measure.
4

Pham, Thi Da Cam. "Théorèmes limite pour un processus de Galton-Watson multi-type en environnement aléatoire indépendant." Thesis, Tours, 2018. http://www.theses.fr/2018TOUR4005/document.

Full text
Abstract:
The theory of multi-type branching processes in i.i.d. random environment is considerably less developed than in the univariate case, and fundamental questions remain unsolved to date. Answers demand a solid understanding of the behavior of products of i.i.d. matrices with non-negative entries. Under fairly general assumptions, and when the probability generating functions of the reproduction laws are fractional linear, we show that the survival probability at time n of the multi-type branching process in random environment is proportional to 1/√n as n → ∞. The proof follows the approach developed for univariate branching processes in i.i.d. random environment and makes crucial use of recent results on the fluctuations of norms of products of i.i.d. random matrices.
5

Weibel, Julien. "Graphons de probabilités, limites de graphes pondérés aléatoires et chaînes de Markov branchantes cachées." Electronic Thesis or Diss., Orléans, 2024. http://www.theses.fr/2024ORLE1031.

Full text
Abstract:
Graphs are mathematical objects used to model all kinds of networks, such as electrical networks, communication networks, and social networks. Formally, a graph consists of a set of vertices and a set of edges connecting pairs of vertices. The vertices represent, for example, individuals, while the edges represent the interactions between these individuals. In the case of a weighted graph, each edge carries a weight or a decoration that can model a distance, an interaction intensity, or a resistance. Modeling real-world networks often involves large graphs with many vertices and edges. The first part of this thesis is dedicated to introducing and studying the properties of the limit objects of large weighted graphs: probability-graphons. These objects generalize the graphons introduced and studied by Lovász and his co-authors in the case of unweighted graphs. Starting from a distance that induces the weak topology on measures, we define a cut distance on probability-graphons. We exhibit a tightness criterion for probability-graphons related to relative compactness in the cut distance. Finally, we prove that this topology coincides with the topology induced by convergence in distribution of the sampled subgraphs. In the second part of this thesis, we focus on hidden Markov models indexed by trees. We show the strong consistency and asymptotic normality of the maximum likelihood estimator for these models under standard assumptions. We prove an ergodic theorem for branching Markov chains indexed by trees with general shapes. Finally, we show that for a stationary and reversible chain, the line graph is the tree shape that induces the minimal variance for the empirical mean estimator among trees with a given number of vertices.
6

Razetti, Agustina. "Modélisation et caractérisation de la croissance des axones à partir de données in vivo." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4016/document.

Full text
Abstract:
How the brain wires up during development remains an open question across scientific disciplines. Fruitful efforts have been made to elucidate the mechanisms of axonal growth, such as pathfinding and guidance molecules. However, recent evidence suggests that other actors are involved in neuron growth in vivo. Notably, axons develop in populations and are embedded in mechanically constrained environments. Thus, to fully understand this dynamic process, one must take into account collective mechanisms and mechanical interactions within axonal populations. However, techniques to measure this directly in living brains are currently lacking or difficult to implement. This thesis emerges from a multidisciplinary collaboration to shed light on axonal development in vivo and on how complex adult axonal morphologies are attained. Our work is inspired by and validated on images of single wild-type and mutated Drosophila y axons, which we segmented and normalized. We first proposed a mathematical framework for the morphological study and classification of axonal groups. From this analysis we hypothesized that axon growth derives from a stochastic process, and that the variability and complexity of axonal trees result from its intrinsic nature, as well as from elongation strategies developed to overcome the mechanical constraints of the developing brain. We designed a mathematical model of single-axon growth based on Gaussian Markov chains with two parameters, accounting for axon rigidity and attraction to the target field. We estimated the model parameters from data and simulated growing axons embedded in spatially constrained populations to test our hypothesis. We dealt with themes from applied mathematics as well as biology, and unveiled unexplored effects of collective growth on axonal development in vivo.
7

Ye, Yinna. "PROBABILITÉ DE SURVIE D'UN PROCESSUS DE BRANCHEMENT DANS UN ENVIRONNEMENT ALÉATOIRE MARKOVIEN." Phd thesis, Université François Rabelais - Tours, 2011. http://tel.archives-ouvertes.fr/tel-00605751.

Full text
Abstract:
The aim of this thesis is to study the survival probability of a branching process in a Markovian random environment and to extend, in this setting, results known for independent and identically distributed random environments. The core of the study relies on local limit theorems for a centered random walk (S_n)_{n≥0} on R with Markovian increments, and for (m_n)_{n≥0}, where m_n = min(0, S_1, …, S_n). To handle the Markovian random environment, we first develop local limit theorems for a real-valued semi-Markov chain, improving certain results already known and originally developed by E. L. Presman (see [22] and [23]). We then use these results to study the asymptotic behavior of the survival probability of a critical branching process in a Markovian random environment. The main results of this thesis were announced in the Comptes Rendus de l'Académie des Sciences ([21]). A more detailed article has been submitted for publication to the Journal of Theoretical Probability. In this thesis, we state these theorems precisely and give detailed proofs.
8

Olivier, Adelaïde. "Analyse statistique des modèles de croissance-fragmentation." Thesis, Paris 9, 2015. http://www.theses.fr/2015PA090047/document.

Full text
Abstract:
This theoretical study is closely tied to a field of application: modeling the growth of a population of cells that divide according to an unknown division rate, a function of a so-called structuring variable, cell age and cell size being the two paradigmatic examples studied. The relevant mathematical framework lies at the interface of statistics of processes, nonparametric estimation, and analysis of partial differential equations. The three objectives of this work are: to reconstruct the division rate (as a function of age or size) under different observation schemes (genealogical or continuous time); to study the transmission of a general biological feature from one cell to another and to study the feature of a typical cell; and to compare the growth of different cell populations through the Malthus parameter, which governs global growth (for instance after introducing variability in the growth rate among cells).

Book chapters on the topic "Branching Markov chains"

1

Krell, Nathalie. "Self-Similar Branching Markov Chains." In Lecture Notes in Mathematics, 261–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01763-6_10.

Full text
2

Dynkin, E. B. "Branching Exit Markov System and their Applications to Partial Differential Equations." In Markov Processes and Controlled Markov Chains, 3–13. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_1.

Full text
3

Qin, Guangping, and Jinzhao Wu. "Branching Time Equivalences for Interactive Markov Chains." In Lecture Notes in Computer Science, 156–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30233-9_12.

Full text
4

Baier, Christel, Holger Hermanns, Joost-Pieter Katoen, and Verena Wolf. "Comparative Branching-Time Semantics for Markov Chains." In CONCUR 2003 - Concurrency Theory, 492–507. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45187-7_32.

Full text
5

Bacci, Giorgio, Giovanni Bacci, Kim G. Larsen, and Radu Mardare. "Converging from Branching to Linear Metrics on Markov Chains." In Theoretical Aspects of Computing - ICTAC 2015, 349–67. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25150-9_21.

Full text
6

Arora, Shiraj, and M. V. Panduranga Rao. "Model Checking Branching Time Properties for Incomplete Markov Chains." In Model Checking Software, 20–37. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30923-7_2.

Full text
7

Hahn, Ernst Moritz, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh Trivedi, and Dominik Wojtczak. "Model-Free Reinforcement Learning for Branching Markov Decision Processes." In Computer Aided Verification, 651–73. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_30.

Full text
Abstract:
We study reinforcement learning for the optimal control of Branching Markov Decision Processes (BMDPs), a natural extension of (multitype) Branching Markov Chains (BMCs). The state of a (discrete-time) BMC is a collection of entities of various types that, while spawning other entities, generate a payoff. In comparison with BMCs, where the evolution of each entity of the same type follows the same probabilistic pattern, BMDPs allow an external controller to pick from a range of options. This permits us to study the best/worst behaviour of the system. We generalise model-free reinforcement learning techniques to compute an optimal control strategy of an unknown BMDP in the limit. We present results of an implementation that demonstrate the practicality of the approach.
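The branching dynamics underlying BMCs admit a classical computable quantity: the extinction probability of a single-type branching chain is the least fixed point of the offspring probability generating function f, reached by iterating q ← f(q) from q = 0. A minimal sketch with a made-up offspring law (this is textbook branching-process material, not the paper's reinforcement-learning method):

```python
# Toy offspring law (made-up numbers), mean 1.25 > 1: supercritical,
# so the extinction probability is strictly less than 1.
OFFSPRING_PMF = {0: 0.25, 1: 0.25, 2: 0.5}

def f(s):
    """Probability generating function of the offspring law."""
    return sum(p * s**k for k, p in OFFSPRING_PMF.items())

# Iterate q <- f(q) from 0; the sequence increases to the least fixed point.
q = 0.0
for _ in range(200):
    q = f(q)

# Here f(s) = 0.25 + 0.25 s + 0.5 s^2, whose least fixed point is 0.5.
print(round(q, 4))  # -> 0.5
```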
8

Grimmett, Geoffrey R., and David R. Stirzaker. "Markov chains." In Probability and Random Processes, 213–304. Oxford University PressOxford, 2001. http://dx.doi.org/10.1093/oso/9780198572237.003.0006.

Full text
Abstract:
A Markov chain is a random process with the property that, conditional on its present value, the future is independent of the past. The Chapman–Kolmogorov equations are derived, and used to explore the persistence and transience of states. Stationary distributions are studied at length, and the ergodic theorem for irreducible chains is proved using coupling. The reversibility of Markov chains is discussed. After a section devoted to branching processes, the theory of Poisson processes and birth–death processes is considered in depth, and the theory of continuous-time chains is sketched. The technique of imbedding a discrete-time chain inside a continuous-time chain is exploited in different settings. The basic properties of spatial Poisson processes are described, and the chapter ends with an account of the technique of Markov chain Monte Carlo.
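Two of the chapter's central facts, the Chapman–Kolmogorov identity P^(m+n) = P^m P^n and the convergence of the rows of P^n to the stationary distribution for an irreducible aperiodic chain, can be checked numerically. A toy sketch with a hypothetical 3-state kernel and plain-Python linear algebra:

```python
# Plain-Python matrix product and power (no external dependencies).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(P, n):
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = matmul(R, P)
    return R

# Hypothetical irreducible, aperiodic 3-state transition matrix.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

# Chapman-Kolmogorov with m = 2, n = 3: P^5 equals P^2 P^3 entrywise.
lhs, rhs = matpow(P, 5), matmul(matpow(P, 2), matpow(P, 3))
ck_ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
            for i in range(3) for j in range(3))

# Ergodicity: all rows of P^60 agree (to high precision), i.e. P^n has
# converged to a rank-one matrix whose rows are the stationary distribution.
rows = matpow(P, 60)
spread = max(abs(rows[0][j] - rows[2][j]) for j in range(3))
print(ck_ok, spread < 1e-9)  # -> True True
```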
9

Grimmett, Geoffrey R., and David R. Stirzaker. "Renewals." In Probability and Random Processes, 412–39. Oxford University PressOxford, 2001. http://dx.doi.org/10.1093/oso/9780198572237.003.0010.

Full text
Abstract:
A renewal process is a recurrent-event process with independent identically distributed interevent times. The asymptotic behaviour of a renewal process is described by the renewal theorem and the elementary renewal theorem, and the key renewal theorem is often useful. The waiting-time paradox leads to a discussion of excess and current lifetimes, and their asymptotic distributions are found. Other renewal-type processes are studied, including alternating and delayed renewal processes, and the use of renewal theory is illustrated in applications to Markov chains and age-dependent branching processes. The asymptotic behaviour of renewal–reward processes is studied, and Little's formula is proved.
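The elementary renewal theorem behind this chapter, N(t)/t → 1/μ where μ is the mean interevent time, is easy to see in simulation. A sketch with hypothetical Exp(rate=2) interevent times, so μ = 1/2 and the limiting rate is 2:

```python
import random

# Hypothetical renewal process: i.i.d. Exp(rate=2) interevent times,
# mean interevent time mu = 0.5.
rng = random.Random(42)
t_horizon = 50_000.0
t, count = 0.0, 0
while True:
    t += rng.expovariate(2.0)  # draw the next interevent time
    if t > t_horizon:
        break
    count += 1  # one more renewal before the horizon

# Elementary renewal theorem: N(t)/t -> 1/mu = 2 as t -> infinity.
rate = count / t_horizon
print(round(rate, 2))  # close to 2
```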
10

Grenander, Ulf, and Michael I. Miller. "Probabilistic Directed Acyclic Graphs and Their Entropies." In Pattern Theory. Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780198505709.003.0004.

Full text
Abstract:
Probabilistic structures on the representations allow for expressing the variation of natural patterns. In this chapter the structure imposed through probabilistic directed graphs is studied. The essential probabilistic structure enforced through the directedness of the graphs is that sites are conditionally independent of their nondescendants given their parents. The entropies and combinatorics of these processes are examined as well. Focus is given to the classical Markov chain and branching process examples to illustrate the fundamentals of variability descriptions through probability and entropy.

Conference papers on the topic "Branching Markov chains"

1

Xia, Ning, Aishuang Li, Guizhi Zhu, Xiaoguo Niu, Chunsheng Hou, and Yangying Gan. "Study of Branching Responses of One Year Old Branches of Apple Trees to Heading Using Hidden Semi-Markov Chains." In 2009 Third International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA). IEEE, 2009. http://dx.doi.org/10.1109/pma.2009.10.

Full text
2

Chen, Yan Hua, Qian Zhang, Bao Guo Li, and Bao Gui Zhang. "Characterizing Wheat Root Branching Using a Markov Chain Approach." In 2006 International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA). IEEE, 2006. http://dx.doi.org/10.1109/pma.2006.31.

Full text