Dissertations / Theses on the topic 'Markov'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Markov.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Di, Cecco Davide <1980>. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/1/Di_Cecco_Davide_Tesi.pdf.
Full text
Di, Cecco Davide <1980>. "Markov exchangeable data and mixtures of Markov Chains." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1547/.
Full text
Yildirak, Sahap Kasirga. "The Identification Of A Bivariate Markov Chain Market Model." PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1257898/index.pdf.
Full text
Tillman, Måns. "On-Line Market Microstructure Prediction Using Hidden Markov Models." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208312.
Full text
Over the last few decades, great advances have been made in financial theory for capital markets. The formulation of arbitrage theory made it possible to price financial instruments consistently. But in an age when high-frequency trading is the norm, information is incorporated into prices at an ever faster pace. Market microstructure theory has emerged to study these phenomena: price impact and the incorporation of information. In this thesis we study microstructure with the help of a dynamic model. Historically, microstructure theory has focused on static models, but with the help of non-linear hidden Markov models (HMMs) we extend this to the dynamic domain. HMMs come with a natural separation between observation and dynamics, and are designed in such a way that we can draw on domain-specific knowledge. By formulating suitable key assumptions based on traditional microstructure theory, we specify a model, with only a few parameters, that can describe the well-known seasonal behaviour that static models cannot capture. Thanks to recent breakthroughs in Monte Carlo methods, powerful tools are now available for performing optimal filtering with HMMs in real time. We apply a so-called bootstrap filter to sequentially filter the state of the model and predict future states. Together with the backward smoothing technique, we estimate the joint posterior distribution for each trading day. This is then used for statistical learning of our hyperparameters via a sequential Monte Carlo Expectation Maximization algorithm. To formulate a model that describes the incorporation of information, we start from volume imbalance, which is often used to study price impact.
We define the related observable quantity scaled volume imbalance, which aims to retain the connection to price impact while also admitting a dynamic model that fits into the HMM framework. We also show how HMMs in general can be evaluated within this framework, and then carry out this analysis for our model in particular. The model is tested against financial trading data for both futures contracts and equities, and shows good predictive performance in both cases.
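The bootstrap filter applied in the thesis above can be sketched in a few lines. The following is a minimal sequential importance resampling filter for a toy AR(1)-plus-Gaussian-noise model, not the thesis's microstructure model; all parameter values (`phi`, `sx`, `sy`, the particle count) are illustrative.

```python
import math
import random

def bootstrap_filter(ys, n_particles=500, phi=0.9, sx=0.5, sy=0.5, seed=0):
    """Sequentially estimate the posterior mean of the hidden AR(1) state."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # propagate: sample from the state dynamics (proposal = prior)
        parts = [phi * x + rng.gauss(0.0, sx) for x in parts]
        # weight: Gaussian observation likelihood of y given each particle
        w = [math.exp(-0.5 * ((y - x) / sy) ** 2) for x in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        # multinomial resampling back to equal weights
        parts = rng.choices(parts, weights=w, k=n_particles)
    return means

# Simulate a path from the same model and filter it.
rng = random.Random(42)
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.gauss(0.0, 0.5)
    ys.append(x + rng.gauss(0.0, 0.5))

est = bootstrap_filter(ys)
```

Resampling after every step is the defining feature of the bootstrap filter; production variants resample only when the effective sample size drops.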
Desharnais, Josée. "Labelled Markov processes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0031/NQ64546.pdf.
Full text
Balan, Raluca M. "Set-Markov processes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ66119.pdf.
Full text
Eltannir, Akram A. "Markov interactive processes." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/30745.
Full text
Werner, Ivan. "Contractive Markov systems." Thesis, University of St Andrews, 2004. http://hdl.handle.net/10023/15173.
Full text
Durrell, Fernando. "Constrained portfolio selection with Markov and non-Markov processes and insiders." Doctoral thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/4379.
Full text
Skorniakov, Viktor. "Asymptotically homogeneous Markov chains." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20101223_152954-43357.
Full text
The dissertation investigates a class of Markov chains whose iterations are governed by random asymptotically homogeneous functions, and solves two problems: 1) general conditions are found that guarantee the existence of a unique stationary distribution; 2) for one-dimensional chains, conditions are found under which the stationary distribution has heavy tails.
Mazali, Rogério. "Improving mutual fund market timing measures: a markov switching approach." reponame:Repositório Institucional do FGV, 2001. http://hdl.handle.net/10438/55.
Full text
Market timing performance of mutual funds is usually evaluated with linear models with dummy variables, which allow the beta coefficient of the CAPM to vary across two regimes: bullish and bearish market excess returns. Managers, however, use their predictions of the state of nature to define whether to carry low- or high-beta portfolios, rather than the observed ones. Our approach here is to take this into account and model market timing as a regime switch, in a way similar to Hamilton's Markov-switching GNP model. We then build a measure of market timing success and apply it to simulated and real-world data.
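The regimes in such a model follow a simple two-state Markov chain. A minimal simulation sketch, with hypothetical persistence probabilities rather than anything estimated in the thesis:

```python
import random

def simulate_regimes(n, p_stay=(0.95, 0.9), seed=0):
    """Simulate a two-state Markov chain with given self-transition probabilities."""
    rng = random.Random(seed)
    state, path = 0, []
    for _ in range(n):
        path.append(state)
        if rng.random() > p_stay[state]:   # leave the current regime
            state = 1 - state
    return path

path = simulate_regimes(1000)
# With switch rates q0 = 0.05 and q1 = 0.1, the long-run fraction of time
# spent in regime 0 is q1 / (q0 + q1) = 2/3.
print(path.count(0) / len(path))
```

A market-timing beta would then depend on the latent regime, `beta[path[t]]`, instead of on the observed sign of the excess return.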
Haag, Florian [Verfasser]. "Asymptotisches Verhalten von Quanten-Markov-Halbgruppen und Quanten-Markov-Prozessen / Florian Haag." Aachen : Shaker, 2006. http://d-nb.info/1170532683/34.
Full textYu, Huizhen Ph D. Massachusetts Institute of Technology. "Approximate solution methods for partially observable Markov and semi-Markov decision processes." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35299.
Full text
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 165-169).
We consider approximation methods for discrete-time infinite-horizon partially observable Markov and semi-Markov decision processes (POMDP and POSMDP). One of the main contributions of this thesis is a lower cost approximation method for finite-space POMDPs with the average cost criterion, and its extensions to semi-Markov partially observable problems and constrained POMDP problems, as well as to problems with the undiscounted total cost criterion. Our method is an extension of several lower cost approximation schemes, proposed individually by various authors, for discounted POMDP problems. We introduce a unified framework for viewing all of these schemes together with some new ones. In particular, we establish that due to the special structure of hidden states in a POMDP, there is a class of approximating processes, which are either POMDPs or belief MDPs, that provide lower bounds to the optimal cost function of the original POMDP problem. Theoretically, POMDPs with the long-run average cost criterion are still not fully understood.
The major difficulties relate to the structure of the optimal solutions, such as conditions for a constant optimal cost function, the existence of solutions to the optimality equations, and the existence of optimal policies that are stationary and deterministic. Thus, our lower bound result is useful not only in providing a computational method, but also in characterizing the optimal solution. We show that regardless of these theoretical difficulties, lower bounds of the optimal liminf average cost function can be computed efficiently by solving modified problems using multichain MDP algorithms, and the approximating cost functions can also be used to obtain suboptimal stationary control policies. We prove the asymptotic convergence of the lower bounds under certain assumptions. For semi-Markov problems and total cost problems, we show that the same method can be applied for computing lower bounds of the optimal cost function. For constrained average cost POMDPs, we show that lower bounds of the constrained optimal cost function can be computed by solving finite-dimensional LPs. We also consider reinforcement learning methods for POMDPs and MDPs. We propose an actor-critic type policy gradient algorithm that uses a structured policy known as a finite-state controller.
We thus provide an alternative to the earlier actor-only algorithm GPOMDP. Our work also clarifies the relationship between the reinforcement learning methods for POMDPs and those for MDPs. For average cost MDPs, we provide a convergence and convergence rate analysis for a least squares temporal difference (TD) algorithm, called LSPE, previously proposed for discounted problems. We use this algorithm in the critic portion of the policy gradient algorithm for POMDPs with finite-state controllers. Finally, we investigate the properties of the limsup and liminf average cost functions of various types of policies. We show various convexity and concavity properties of these cost functions, and we give a new necessary condition for the optimal liminf average cost to be constant. Based on this condition, we prove the near-optimality of the class of finite-state controllers under the assumption of a constant optimal liminf average cost. This result provides a theoretical guarantee for the finite-state controller approach.
by Huizhen Yu.
Ph.D.
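The belief-MDP view this thesis builds on rests on the standard belief-state update: weight each hidden state by its transition and observation probabilities, then renormalize. A minimal sketch with a hypothetical two-state "listen"-style action (state-preserving dynamics, an 85%-accurate observation), not numbers from the thesis:

```python
def belief_update(b, P, O, o):
    """Next belief: b'(j) is proportional to O[j][o] * sum_i b[i] * P[i][j]."""
    n = len(b)
    unnorm = [O[j][o] * sum(b[i] * P[i][j] for i in range(n)) for j in range(n)]
    z = sum(unnorm)                      # probability of observing o
    return [u / z for u in unnorm]

P = [[1.0, 0.0],                         # "listen": the hidden state persists
     [0.0, 1.0]]
O = [[0.85, 0.15],                       # state 0 mostly emits observation 0
     [0.15, 0.85]]

b = belief_update([0.5, 0.5], P, O, o=0)
print(b)                                 # [0.85, 0.15]
```

Running the POMDP on beliefs like `b` turns it into a fully observable MDP on the simplex, which is the object the thesis's lower-bound approximations target.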
Yeo, Sungchil. "On estimation for a combined Markov and semi-Markov model with censoring /." The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487586889187169.
Full text
Wächter, Matthias. "Markov-Analysen unebener Oberflächen." [S.l.] : [s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=97400460X.
Full text
Piskuric, Mojca. "Vector-Valued Markov Games." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2001. http://nbn-resolving.de/urn:nbn:de:swb:14-996482849703-81901.
Full text
The subject of this thesis is vector-valued Markov games. Chapter 1 presents the idea that led to the development of general stochastic games. The work of Lloyd S. Shapley is briefly described, and the most important authors and references are named. The motivation for studying vector-valued games is also explained. Chapter 2 develops a general mathematical model of vector-valued N-person Markov games. The relevant definitions are given, and the notation and the notion of a strategy are discussed. A probability measure governing the underlying stochastic process is then constructed on the corresponding probability space. As in all models of controlled stochastic processes, a payoff is specified, here the expected discounted total reward. Chapter 3 explains the principles of vector optimization. The notion of optimality with respect to given convex cones is developed. This notion is then used to extend the definition of Nash equilibria for scalar-valued games to our vector-valued model, yielding the so-called D-equilibria. Several examples show that this definition generalizes the existing definitions for scalar-valued games. Necessary and sufficient conditions on the optimization cone D for a strategy to be a D-equilibrium are then given. Finally, it is shown that, as with Markov decision processes and scalar-valued stochastic games, the search for D-equilibria can be restricted to stationary strategies.
The main result of this chapter generalizes a statement known for 2-person Markov games to N-person Markov games: a D-equilibrium in an N-person Markov game is a subgradient of specially constructed support functions of the players' total reward. To develop a solution concept for the simplest case of Markov games, two-person zero-sum games, Chapter 4 uses the method of dynamic programming. The Denardo formalism is adopted to develop an operator on the space of all p-dimensional vector-valued functions. The main results of this chapter are two theorems on optimal solutions and D-equilibria, respectively. The first theorem shows that, for a fixed stationary strategy, the expected discounted total reward is the fixed point of this operator. The second theorem then shows that this solution corresponds exactly to the vector-valued extension of Shapley's result. Based on these results, two algorithms are developed: successive approximation and the Hoffman-Karp algorithm. A numerical example is computed for both algorithms. Chapter 4 closes with a section on further results and directions for future research. The appendix presents the main results of static game theory, many of which are used in this thesis.
Santos, Raqueline Azevedo Medeiros. "Cadeias de Markov Quânticas." Laboratório Nacional de Computação Científica, 2010. http://www.lncc.br/tdmc/tde_busca/arquivo.php?codArquivo=199.
Full text
In computer science, random walks are used in randomized algorithms, especially in search algorithms, where we wish to find a marked state in a Markov chain. In this type of algorithm it is interesting to study the hitting time, which is associated with the algorithm's computational complexity. In this context, we describe the classical theory of Markov chains and random walks, as well as their quantum analogues. In this way, we define the hitting time in the setting of quantum Markov chains. Moreover, analytical expressions for the quantum hitting time and for the probability of finding a marked element on the complete graph are presented as the new results of this dissertation.
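For comparison with such quantum results, the classical hitting time on the complete graph is easy to estimate by simulation. In the sketch below, N = 16 and the uniform walk are illustrative choices; from any unmarked start the number of steps is geometric with success probability 1/(N-1), so the exact expectation is N - 1.

```python
import random

def simulated_hitting_time(N, marked=0, trials=100_000, seed=1):
    """Average number of steps for a uniform walk on K_N to reach `marked`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        v, steps = 1, 0                    # start at an unmarked vertex
        while v != marked:
            w = rng.randrange(N - 1)       # uniform over the other N-1 vertices
            v = w if w < v else w + 1
            steps += 1
        total += steps
    return total / trials

N = 16
est = simulated_hitting_time(N)
print(round(est, 2))   # the exact expectation is N - 1 = 15
```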
Cho, Eun Hea. "Computation for Markov Chains." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000303-164550.
Full text
A finite, homogeneous, irreducible Markov chain $\mathcal{C}$ with transition probability matrix possesses a unique stationary distribution vector. The questions one can pose in the area of computation of Markov chains include the following:
- How does one compute the stationary distributions?
- How accurate is the resulting answer?
In this thesis, we try to provide answers to these questions.
The thesis is divided into two parts. The first part deals with the perturbation theory of finite, homogeneous, irreducible Markov chains, which is related to the first question above. The purpose of this part is to analyze the sensitivity of the stationary distribution vector to perturbations in the transition probability matrix. The second part gives answers to the question of computing the stationary distributions of nearly uncoupled Markov chains (NUMC).
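The first of the questions above admits a very small sketch: compute the stationary distribution of a finite, homogeneous, irreducible chain by power iteration. The 3-state transition matrix is a made-up example, not one from the thesis.

```python
def stationary_distribution(P, tol=1e-12, max_iter=100_000):
    """Iterate pi <- pi P until successive iterates agree to within tol."""
    n = len(P)
    pi = [1.0 / n] * n                     # start from the uniform distribution
    for _ in range(max_iter):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(nxt, pi)) < tol:
            return nxt
        pi = nxt
    return pi

P = [[0.9, 0.1, 0.0],
     [0.2, 0.7, 0.1],
     [0.0, 0.3, 0.7]]

pi = stationary_distribution(P)
print([round(x, 4) for x in pi])   # [0.6, 0.3, 0.1]
```

Power iteration converges here because the chain is irreducible and aperiodic; the second question (accuracy) is exactly what perturbation bounds on this vector address.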
莊競誠 and King-sing Chong. "Explorations in Markov processes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31235682.
Full text
McKee, Bill Frederick. "Optimal hidden Markov models." Thesis, University of Plymouth, 1999. http://hdl.handle.net/10026.1/1698.
Full text
James, Huw William. "Transient Markov decision processes." Thesis, University of Bristol, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.430192.
Full text
Dessain, Thomas James. "Perturbations of Markov chains." Thesis, Durham University, 2014. http://etheses.dur.ac.uk/10619/.
Full text
Piskuric, Mojca. "Vector-Valued Markov Games." Doctoral thesis, Technische Universität Dresden, 2000. https://tud.qucosa.de/id/qucosa%3A24773.
Full text
Li, Junping. "Generalized Markov branching models." Thesis, University of Greenwich, 2005. http://gala.gre.ac.uk/6226/.
Full text
Ku, Ho Ming. "Interacting Markov branching processes." Thesis, University of Liverpool, 2014. http://livrepository.liverpool.ac.uk/2002759/.
Full text
Chong, King-sing. "Explorations in Markov processes /." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18736105.
Full textMedeiros, Sérgio da Silva. "Cadeias de Markov ocultas." reponame:Repositório Institucional da UFABC, 2017.
Find full text
Dissertation (master's) - Universidade Federal do ABC, Programa de Pós-Graduação em Mestrado Profissional em Matemática em Rede Nacional, 2017.
The main focus of this work is the study of Markov chains and hidden Markov chains, which puts probabilistic and matrix concepts into practice. We seek to apply, in a contextualized way, the product and powers of matrices with the aid of the GeoGebra software. In addition to the examples, learning questions are included, always with the goal of making them valuable allies in the learning of this topic.
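The matrix-power viewpoint that the dissertation builds in GeoGebra can be reproduced in a few lines of plain Python: row i of P^n is the distribution after n steps starting in state i. The 2-state chain below is a hypothetical example.

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(P, n):
    """n-th matrix power, starting from the identity."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        result = mat_mul(result, P)
    return result

P = [[0.8, 0.2],
     [0.5, 0.5]]

# Row i of P^n is the distribution after n steps from state i; every row
# approaches the stationary distribution (5/7, 2/7).
P50 = mat_pow(P, 50)
print([round(x, 4) for x in P50[0]])   # [0.7143, 0.2857]
```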
Karadag, Mehmet Ali. "Analysis Of Turkish Stock Market With Markov Regime Switching Volatility Models." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609787/index.pdf.
Full text
Hrabovska, Yevheniia <1994>. "A Markov-Switching Model for Bubble Detection in the Stock Market." Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10797.
Full text
Pötzelberger, Klaus. "On the Approximation of finite Markov-exchangeable processes by mixtures of Markov Processes." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/526/1/document.pdf.
Full text
Series: Forschungsberichte / Institut für Statistik
Bulla, Jan. "Application of Hidden Markov and Hidden Semi-Markov Models to Financial Time Series." Doctoral thesis, [S.l. : s.n.], 2006. http://swbplus.bsz-bw.de/bsz260867136inh.pdf.
Full text
Stien, Marita. "Sequential Markov random fields and Markov mesh random fields for modelling of geological structures." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9326.
Full text
We have been given a two-dimensional image of a geological structure. This structure is used to construct a three-dimensional statistical model, to be used as prior knowledge in the analysis of seismic data. We consider two classes of discrete lattice models for which efficient simulation is possible: sequential Markov random fields (sMRF) and Markov mesh random fields (MMRF). We first explore models from these two classes in two dimensions, using the maximum likelihood estimator (MLE). The results indicate that a larger neighbourhood should be considered for all the models. We also develop a second estimator, which is designed to match the model to the observation with respect to a set of specified functions. This estimator is only considered for the sMRF model, since that model proved to be flexible enough to give satisfying results. Due to the time limitations of this thesis, we could not wait for the optimization of the estimator to converge, and thus we cannot evaluate this estimator. Finally, we extract useful information from the two-dimensional models and specify a sMRF model in three dimensions. Parameter estimation for this model needs approximative techniques, since we only have observations in two dimensions. Such techniques have not been investigated in this report; however, we have adjusted the parameters manually and observed that the model is very flexible and might give very satisfying results.
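Single-site Gibbs sampling is the standard way to simulate discrete lattice models of this family. Below is a sketch for a binary (Ising-style) Markov random field with a 4-neighbourhood; the interaction strength beta = 0.8 and the 32x32 grid are illustrative choices, not parameters estimated in the thesis.

```python
import math
import random

def gibbs_sweep(field, beta, rng):
    """One in-place single-site Gibbs sweep over a +/-1 spin field."""
    h, w = len(field), len(field[0])
    for i in range(h):
        for j in range(w):
            # sum of the 4-neighbourhood spins (missing neighbours count as 0)
            s = sum(field[a][b]
                    for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= a < h and 0 <= b < w)
            # full conditional of site (i, j) given its neighbours
            p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
            field[i][j] = 1 if rng.random() < p_plus else -1

rng = random.Random(0)
field = [[rng.choice((-1, 1)) for _ in range(32)] for _ in range(32)]
for _ in range(50):
    gibbs_sweep(field, 0.8, rng)
```

After a few dozen sweeps at this interaction strength, neighbouring sites agree far more often than not, which is the kind of spatial structure such priors encode.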
Koserski, Jan. "Analyse der Ratingmigrationen interner Ratingsysteme mit Markov-Ketten, Hidden-Markov-Modellen und Neuronalen Netzen /." Aachen : Shaker, 2006. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=015604019&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Full text
Bakra, Eleni. "Aspects of population Markov chain Monte Carlo and reversible jump Markov chain Monte Carlo." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1247/.
Full text
Iannuzzi, Alessandra. "Catene di Markov reversibili e applicazioni al metodo Montecarlo basato sulle catene di Markov." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9010/.
Full text
Pan-Yu, Yiyan. "Spectres de processus de Markov." PhD thesis, Université Joseph Fourier (Grenoble), 1997. http://tel.archives-ouvertes.fr/tel-00004959.
Full text
Johr, Sven. "Model checking compositional Markov systems." Aachen : Shaker, 2007. http://d-nb.info/988568969/04.
Full text
Johr, Sven. "Model checking compositional Markov systems /." Aachen : Shaker, 2008. http://d-nb.info/988568969/04.
Full text
Holenstein, Roman. "Particle Markov chain Monte Carlo." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7319.
Full text
Ferns, Norman Francis. "Metrics for Markov decision processes." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=80263.
Full text
Chaput, Philippe. "Approximating Markov processes by averaging." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66654.
Full text
We revisit labelled Markov processes under a new approach, in a sense "dual" to the usual point of view. Instead of viewing transitions from state to state as a collection of sub-probability distributions on the state space, we view them as transformations of real-valued functions. By generalizing the operation of conditional expectation, we construct a category whose objects are labelled Markov processes viewed as collections of operators; the arrows of this category behave like projections onto a smaller state space. We define a notion of equivalence for such processes, called bisimulation, which is intimately linked to the usual definition for probabilistic processes. We show that we can construct, in a categorical way, the smallest process bisimilar to a given process, and that this smallest object is linked to a well-known modal logic. We develop an approximation method based on this logic, in which the approximating processes have finite state spaces; moreover, we show that these approximating processes converge, in a categorical sense, to the smallest bisimilar process.
Byrd, Jonathan Michael Robert. "Parallel Markov Chain Monte Carlo." Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3634/.
Full textMatthews, James. "Markov chains for sampling matchings." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3072.
Full textDil, Anton J. "Markov modelling of HVAC systems." Thesis, Loughborough University, 1993. https://dspace.lboro.ac.uk/2134/7301.
Full textQamber, I. S. H. "Markov modelling of equipment behaviour." Thesis, University of Bradford, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.381011.
Full textWilson, David Bruce. "Exact sampling with Markov chains." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38402.
Full textVan, Gael Jurgen. "Bayesian nonparametric hidden Markov models." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610196.
Full textVieira, Francisco Zuilton Gonçalves. "Cadeias de Markov homogêneas discretas." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306581.
Full text
Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: This dissertation deals with the study of discrete Markov chains with values in a countable state space. Markov chains are stochastic processes in the following sense: given the present, the future does not depend on the past, but only on the present. Our study is conducted on discrete homogeneous Markov chains (HMC). Initially, we introduce the definition and basic concepts of discrete HMC. These lead us to the topology of the transition matrices associated with an HMC, which is the tool needed for the study of recurrent and transient sets, both of great importance in this theory. Stationary states and the strong Markov property are also addressed; the latter serves to build the concept of a recurrent state, from which we work with the notions of positive and null recurrence. Finally, we study the important concept of absorption time, understood as the time at which a state is absorbed into a recurrent set.
Master's
Mathematics
Master in Mathematics
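The absorption time the abstract above ends with can be computed exactly for a finite chain: if Q is the transient-to-transient block of the transition matrix, the expected absorption times solve (I - Q) t = 1. A sketch on a hypothetical gambler's-ruin chain on {0, 1, 2, 3} with absorbing endpoints and fair up/down moves:

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Transient states 1 and 2; states 0 and 3 are absorbing.
Q = [[0.0, 0.5],
     [0.5, 0.0]]
n = len(Q)
I_minus_Q = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] for i in range(n)]
t = solve(I_minus_Q, [1.0] * n)
print(t)   # expected absorption times [2.0, 2.0]
```

This matches the classical formula k(N - k) for fair gambler's ruin on {0, ..., N} with N = 3.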
Mestern, Mark Andrew. "Distributed analysis of Markov chains." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/9693.
Full text
This thesis examines how parallel and distributed algorithms can increase the power of techniques for correctness and performance analysis of concurrent systems. The systems in question are state transition systems from which Markov chains can be derived. Both phases of the analysis pipeline are considered: state space generation from a state transition model to form the Markov chain, and finding performance information by solving the steady state equations of the Markov chain. The state transition models are specified in a general interface language which can describe any Markovian process. The models are not tied to a specific modelling formalism, but common formal description techniques such as generalised stochastic Petri nets and queueing networks can generate these models. Tools for Markov chain analysis face the problem of state spaces that are so large that they exceed the memory and processing power of a single workstation. This problem is attacked with methods to reduce memory usage, and by dividing the problem between several workstations. A distributed state space generation algorithm was designed and implemented for a local area network of workstations. The state space generation algorithm also includes a probabilistic dynamic hash compaction technique for storing state hash tables, which dramatically reduces memory consumption. Numerical solution methods for Markov chains are surveyed, and two iterative methods, BiCG and BiCGSTAB, were chosen for a parallel implementation to show that this stage of analysis also benefits from a distributed approach. The results from the distributed generation algorithm show a good speed-up of the state space generation phase and that the method makes the generation of larger state spaces possible. The distributed methods for the steady state solution also allow larger models to be analysed, but the heavy communications load on the network prevents improved execution time.
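The first pipeline phase, state space generation, is at heart a graph search over the transition relation. A sequential sketch follows (the thesis distributes this work over a network of workstations and compacts the hash table); the modulo-8 counter is a toy stand-in for a real model.

```python
from collections import deque

def generate_state_space(initial, successors):
    """Breadth-first exploration of all states reachable from `initial`."""
    seen = {initial}
    queue = deque([initial])
    edges = []
    while queue:
        s = queue.popleft()
        for t in successors(s):
            edges.append((s, t))
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen, edges

# Toy model: a counter modulo 8 that can either increment or reset to 0.
succ = lambda s: [(s + 1) % 8, 0]
states, edges = generate_state_space(0, succ)
print(len(states), len(edges))   # 8 states, 16 edges
```

Annotating each edge with a rate instead of just recording it would yield the generator matrix whose steady state equations the second phase solves.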
McGrath, Michael. "Markov random field image modelling." Master's thesis, University of Cape Town, 2003. http://hdl.handle.net/11427/5166.
Full text
This work investigated some of the consequences of using a priori information in image processing, using computed tomography (CT) as an example. Prior information is information about the solution that is known apart from measurement data. This information can be represented as a probability distribution. In order to define a probability density distribution in high-dimensional problems like those found in image processing, it becomes necessary to adopt some form of parametric model for the distribution. Markov random fields (MRFs) provide just such a vehicle for modelling the a priori distribution of labels found in images. In particular, this work investigated the suitability of MRF models for modelling a priori information about the distribution of attenuation coefficients found in CT scans.