Dissertations on the topic "Markov approximation"
Consult the top 50 dissertations for research on the topic "Markov approximation".
Szczegot, Kamil. "Sharp approximation for density dependent Markov chains." 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.
Pötzelberger, Klaus. "On the Approximation of finite Markov-exchangeable processes by mixtures of Markov Processes." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/526/1/document.pdf.
Series: Forschungsberichte / Institut für Statistik
Perrine, Serge. "Approximation diophantienne (théorie de Markoff)." Metz, 1988. http://docnum.univ-lorraine.fr/public/UPV-M/Theses/1988/Perrine.Serge_1.SMZ8826.pdf.
Around 1880, A. Markoff described the structure of the set of approximation constants greater than 1/3 for irrational numbers. This theory establishes links between the constants, the arithmetical minima of quadratic forms, and the solutions of the Diophantine equation x^2 + y^2 + z^2 = 3xyz. The present dissertation generalizes the original formalism built by Markoff. It introduces the notion of an (a, r, E)-theory of Markoff, among which the (2, 0, -1)-theory is the original Markoff theory. The corresponding Diophantine equation is given, with an interpretation of the whole calculus. From this follows the resolution of the Diophantine equation x^2 + y^2 + z^2 = (a + 1)xyz and some tree constructions. For the systematic search for gaps in the Markoff spectrum, the author confirms the results of Schecker and Freiman concerning Hall's ray, and he gives examples and confirms some results of Kinney and Pitcher.
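The Markoff equation x^2 + y^2 + z^2 = 3xyz mentioned in this abstract generates all of its positive integer solutions from (1, 1, 1) by repeatedly replacing one coordinate z with 3xy - z (Vieta jumping). A small illustrative sketch of that solution tree, not taken from the thesis:

```python
# Enumerate Markov triples (x, y, z) with x^2 + y^2 + z^2 = 3xyz, starting
# from (1, 1, 1) and applying the Vieta involution z -> 3xy - z to each
# coordinate in turn; triples whose largest entry exceeds the bound are pruned.

def markov_triples(bound):
    """Return all sorted Markov triples with max entry <= bound."""
    seen = set()
    stack = [(1, 1, 1)]
    while stack:
        t = tuple(sorted(stack.pop()))
        if t in seen or t[2] > bound:
            continue
        seen.add(t)
        x, y, z = t
        stack += [(3*y*z - x, y, z), (x, 3*x*z - y, z), (x, y, 3*x*y - z)]
    return sorted(seen, key=lambda t: t[2])

triples = markov_triples(1000)
assert all(x*x + y*y + z*z == 3*x*y*z for x, y, z in triples)
print(triples[:5])  # → [(1, 1, 1), (1, 1, 2), (1, 2, 5), (1, 5, 13), (2, 5, 29)]
```

The maxima of these triples (1, 2, 5, 13, 29, ...) are the Markov numbers whose gaps in the spectrum the thesis studies.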
Patrascu, Relu-Eugen. "Linear Approximations For Factored Markov Decision Processes." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1171.
Lei, Lei. "Markov Approximations: The Characterization of Undermodeling Errors." Diss., 2006. http://contentdm.lib.byu.edu/ETD/image/etd1371.pdf.
Kuntz, Nussio Juan. "Deterministic approximation schemes with computable errors for the distributions of Markov chains." Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/59103.
Tanaka, Takeyuki. "Studies on Application of a Markov Approximation Methods to Structural Reliability Analyses." Kyoto University, 1995. http://hdl.handle.net/2433/160776.
Kyoto University (京都大学)
0048
Doctorate by dissertation (new system)
Doctor of Engineering
乙第8872号
論工博第2981号
新制||工||997(附属図書館)
UT51-95-D465
(Chief examiner) Prof. 宗像 豊哲; Prof. 茨木 俊秀; Prof. 岩井 敏洋
Qualified under Article 4, Paragraph 2 of the Degree Regulations
Vergne, Nicolas. "Chaînes de Markov régulées et approximation de Poisson pour l'analyse de séquences biologiques." PhD thesis, Université d'Evry-Val d'Essonne, 2008. http://tel.archives-ouvertes.fr/tel-00322434.
Π_{t/n} = (1 - t/n) Π_0 + (t/n) Π_1.
This models a smooth evolution between two states. For example, it can capture the transition between two regimes of a hidden Markov chain, which might otherwise seem too abrupt. These models can thus be seen as an alternative to, but also as a complementary tool for, hidden Markov models. Throughout this work we considered polynomial drifts of arbitrary degree as well as drifts by polynomial splines, the aim being to make the models more flexible than plain polynomials. We estimated our models in several ways, then assessed the quality of these estimators before using them in applications such as the search for exceptional words. We implemented the software DRIMM (soon available at http://stat.genopole.cnrs.fr/sg/software/drimm/), dedicated to the estimation of our models. The program covers everything our models offer, such as computing the transition matrix at each position, the stationary laws, and the probability distributions at each position. The use of this program for the search for exceptional words is provided in auxiliary programs (available on request).
Several extensions of this work can be envisaged. So far the matrix varies only with position, but we could take covariates into account, such as the degree of hydrophobicity, the GC content, or an indicator of protein structure (α-helices, β-sheets...). We could also combine HMMs with continuous variation: on each region, instead of fitting an ordinary Markov model, we would fit a drifting Markov chain model.
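The interpolation formula above fully specifies the drifting model. The following sketch (with two invented 2-state regimes Π0 and Π1, not taken from the thesis or from DRIMM) shows how the position-dependent transition matrix can be computed and sampled:

```python
import numpy as np

# Drifting Markov chain: at position t of a sequence of length n, the
# transition matrix is the linear interpolation
#     Pi_{t/n} = (1 - t/n) * Pi0 + (t/n) * Pi1,
# so the dynamics move smoothly from regime Pi0 to regime Pi1.

rng = np.random.default_rng(0)
Pi0 = np.array([[0.9, 0.1], [0.2, 0.8]])   # regime at the start (invented)
Pi1 = np.array([[0.3, 0.7], [0.6, 0.4]])   # regime at the end (invented)

def transition(t, n):
    w = t / n
    return (1 - w) * Pi0 + w * Pi1

def simulate(n, start=0):
    state, path = start, [start]
    for t in range(1, n + 1):
        state = rng.choice(2, p=transition(t, n)[state])
        path.append(state)
    return path

P_mid = transition(50, 100)                  # halfway: average of Pi0 and Pi1
assert np.allclose(P_mid, (Pi0 + Pi1) / 2)
assert np.allclose(P_mid.sum(axis=1), 1.0)   # rows remain stochastic
print(simulate(10))
```

Any convex combination of stochastic matrices is stochastic, which is why each interpolated matrix remains a valid transition matrix.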
Gendre, Laurent. "Inégalités de Markov singulières et approximation des fonctions holomorphes de la classe M." PhD thesis, Université Paul Sabatier - Toulouse III, 2005. http://tel.archives-ouvertes.fr/tel-00010810.
Gendre, Laurent. "Inégalités de Markov singulières et approximation des fonctions holomorphes de la classe M." Toulouse 3, 2005. http://www.theses.fr/2005TOU30033.
In the first part, we prove that all singular algebraic curves of R^n admit tangential Markov inequalities, and we give a geometric meaning to the Markov exponent: we prove that this exponent is less than or equal to the multiplicity of the singularity of the complexified curve in C^n. We construct a Puiseux parameterization at the real singularity and extend it to a nowhere dense open subset of C. We thereby obtain the HCP property of the Green function with pole at infinity for the geodesic metric on the complexified curve. In the second part, we prove a Bernstein-type theorem for functions of classes intermediate between holomorphic functions and C^∞ functions, on subclasses of s-H convex compact subsets of C^n. To prove this result, we give a representative kernel on s-H convex compact sets for functions of A^∞(K), and we approximate this kernel by another kernel of Henkin-Ramirez type. We propose a new geometric property of the Green function with pole at infinity and give some examples.
Vergne, Nicolas Prum Bernard. "Chaînes de Markov régulées et approximation de Poisson pour l'analyse de séquences biologiques." S. l. : Evry-Val d'Essonne, 2008. http://www.biblio.univ-evry.fr/theses/2008/2008EVRY0006.pdf.
Sitaraman, Hariharakrishnan. "Approximation of a class of Markov-modulated Poisson processes with a large state-space." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184828.
Haro, Antonio. "Example Based Processing For Image And Video Synthesis." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5283.
Crudu, Alina. "Approximations hybrides de processus de Markov à sauts multi-échelles : applications aux modèles de réseaux de gènes en biologie moléculaire." PhD thesis, Université Rennes 1, 2009. http://tel.archives-ouvertes.fr/tel-00454886.
Tracol, Mathieu. "Vérification approchée de systèmes probabilistes." Paris 11, 2010. http://www.theses.fr/2010PA112055.
In this dissertation, we consider several types of probabilistic systems, which model the behaviors of real processes in a randomized environment. Such systems can be communication protocols, algorithms executed on computers, hybrid systems, and so on. Our goal is to study their evolution. We focus on the quantitative comparison of probabilistic systems and on the analysis of their properties. Our study relies on the following questions of evaluation and comparison: given the model of a real system, can we decide efficiently how the system behaves? Given two models, how can we compare them? We aim to show that approximation algorithms can be useful for the analysis of probabilistic systems. In particular, we show that problems whose exact solution is not efficiently computable can be efficiently approximated. In our work, we use four models of probabilistic systems: finite probabilistic automata, on finite and infinite words; labeled Markov processes; Markov decision processes (MDPs); and a model of networks of MDPs. Our main results concern quantitative comparison methods between systems and approximation methods for complex systems.
Cech, Markus. "Fahrspurschätzung aus monokularen Bildfolgen für innerstädtische Fahrerassistenzanwendungen." Karlsruhe Univ.-Verl. Karlsruhe, 2008. http://d-nb.info/994134843/04.
Fu, Shuting. "Bayesian Logistic Regression Model with Integrated Multivariate Normal Approximation for Big Data." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/451.
Bousfiha, Amina. "Approximation des systèmes semi-markoviens via les distributions de type phase et application en fiabilité." Compiègne, 1998. http://www.theses.fr/1998COMP1092.
Laroche, Pierre. "Processus décisionnels de Markov appliqués à la planification sous incertitude." Nancy 1, 2000. http://docnum.univ-lorraine.fr/public/SCD_T_2000_0012_LAROCHE.pdf.
Sen, Sanjoy Kumar. "Analysis of Memory Interference in Buffered Multi-processor Systems in Presence of Hot Spots and Favorite Memories." Thesis, University of North Texas, 1995. https://digital.library.unt.edu/ark:/67531/metadc278426/.
Cottrell, Marie. "Modélisation de réseaux de neurones par des chaines de Markov et autres applications." Paris 11, 1988. http://www.theses.fr/1988PA112232.
The first part of the thesis consists of a paper published in IEEE Trans. Aut. Control (vol. AC-28, no. 9, 1983), with J. C. Fort and G. Malgouyres. It gives two methods of calculating the exit time of a Markov chain from an attraction domain; this time is extremely long, so we use an exponential change of probability (that of large deviations theory) for a fast simulation, and a non-standard approximation by diffusion. The second part includes two papers published with J. C. Fort in the Annales de l'IHP, Probabilités et Statistiques (vol. 23, no. 1, 1987), and in Biological Cybernetics (no. 53, 1986). In the first one, we prove the convergence of Kohonen's self-organizing algorithm in dimension 1. In the second one, we define another self-organizing algorithm, a simplified variant of Kohonen's, and we prove its convergence in dimensions 1 and 2. In the third part, published in Biological Cybernetics (no. 58, 1988), we solve the problem of computing the connection matrix for a McCulloch or Hopfield neural network, so as to get the largest attractivity for the deterministic algorithm and non-orthogonal patterns. Then we calculate the attractivity of each memorized pattern for a given connection matrix. The last part is devoted to the study of the role of inhibition in a nearest-neighbours-connected neural network. The model closely resembles the biological reality of the young animal's cerebellar cortex. We prove that, when inhibition is smaller than a certain threshold, the network is ergodic and works in a stationary way. Conversely, when inhibition increases, striped or moiré responses appear, whose form and width depend on the considered neighbourhood size.
McDougall, Jeffrey Michael. "Low complexity channel models for approximating flat Rayleigh fading in network simulations." Texas A&M University, 2003. http://hdl.handle.net/1969/294.
Li, Jun. "Learning Average Reward Irreducible Stochastic Games: Analysis and Applications." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000136.
Langenau, Holger. "Best constants in Markov-type inequalities with mixed weights." Doctoral thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-200815.
Markov-type inequalities provide upper bounds on the norm of a (higher) derivative of an algebraic polynomial in terms of the norm of the polynomial itself. This thesis considers the case where the norms are of Laguerre, Gegenbauer, or Hermite type, with the corresponding weights chosen differently on the two sides. The smallest constant is determined such that the inequality holds for every polynomial of degree at most n. This smallest constant can be represented as the operator norm of the differentiation operator; it coincides with the spectral norm of the matrix representation in a pair of suitably chosen orthonormal bases and can therefore be handled well. Various methods, determined by the difference of the parameters appearing in the weights, are used to estimate these norms. Up to a small gap in the parameter range, the asymptotic behavior of the smallest constant is determined in each of the cases considered.
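As a toy illustration of the operator-norm viewpoint in this abstract (shown here only for the easy equal-weight Hermite case, not the mixed-weight setting the thesis actually studies): the orthonormal Hermite polynomials satisfy h_k' = sqrt(2k) h_{k-1}, so the matrix of d/dx has a single superdiagonal and the best constant for degree at most n is sqrt(2n).

```python
import numpy as np

# Best constant C(n) in ||p'|| <= C(n) ||p|| for the Hermite weight exp(-x^2),
# computed as the spectral norm of the differentiation operator in the
# orthonormal Hermite basis, where h_k' = sqrt(2k) * h_{k-1}.

def best_constant(n):
    D = np.zeros((n + 1, n + 1))
    for k in range(1, n + 1):
        D[k - 1, k] = np.sqrt(2 * k)   # coefficient of h_{k-1} in h_k'
    return np.linalg.norm(D, 2)        # spectral norm = largest singular value

for n in (1, 5, 20):
    assert np.isclose(best_constant(n), np.sqrt(2 * n))
print(best_constant(10))  # → sqrt(20) ≈ 4.472
```

With different weights on the two sides, the matrix is no longer a single superdiagonal, which is what makes the thesis's mixed-weight cases hard.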
Li, Jun 1974. "Learning average reward irreducible stochastic games [electronic resource] : analysis and applications / by Jun Li." University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000136.
Title from PDF of title page.
Document formatted into pages; contains 111 pages.
Thesis (Ph.D.)--University of South Florida, 2003.
Includes bibliographical references.
Text (Electronic thesis) in PDF format.
ABSTRACT: A large class of sequential decision making problems under uncertainty with multiple competing decision makers/agents can be modeled as stochastic games. Stochastic games having Markov properties are called Markov games or competitive Markov decision processes. This dissertation presents an approach to solve noncooperative stochastic games, in which each decision maker makes her/his own decision independently and each has an individual payoff function. In stochastic games, the environment is nonstationary and each agent's payoff is affected by the joint decisions of all agents, which results in conflicts of interest among the decision makers. In this research, the theory of Markov decision processes (MDPs) is combined with game theory to analyze the structure of Nash equilibria for stochastic games. In particular, the Laurent series expansion technique is used to extend the results of discounted reward stochastic games to average reward stochastic games.
As a result, auxiliary matrix games are developed that have equilibrium points and values equivalent to those of a class of stochastic games that are irreducible and have the average reward performance metric. R-learning is a well-known machine learning algorithm that deals with average reward MDPs. The R-learning algorithm is extended to develop a Nash-R reinforcement learning algorithm for obtaining the equivalent auxiliary matrices. A convergence analysis of the Nash-R algorithm is developed from the study of the asymptotic behavior of its two-time-scale stochastic approximation scheme and the stability of the associated ordinary differential equations (ODEs). The Nash-R learning algorithm is tested and then benchmarked with MDP-based learning methods using a well-known grid game. Subsequently, a real-life application of stochastic games in a deregulated power market is explored.
According to the current literature, Cournot, Bertrand, and Supply Function Equilibrium (SFE) are the three primary equilibrium models used to evaluate power market designs. SFE is more realistic for pool-type power markets. However, for a complicated power system, the convexity assumption for optimization problems is violated in most cases, which makes the problems more difficult to solve. The SFE concept is adopted in this research, and the generators' behaviors are modeled as a stochastic game instead of a one-shot game. The power market is considered to have features such as multi-settlement (bilateral, day-ahead, and spot markets, and transmission congestion contracts) and demand elasticity. Such a market, consisting of multiple competing suppliers (generators), is modeled as a competitive Markov decision process and studied using the Nash-R algorithm.
System requirements: World Wide Web browser and PDF reader.
Mode of access: World Wide Web.
Garcia, Pascal. "Exploration guidée et induction de comportements génériques en apprentissage par renforcement." Rennes, INSA, 2004. http://www.theses.fr/2004ISAR0010.
Reinforcement learning is a general framework in which an autonomous agent learns which actions to choose in particular situations (states) in order to optimize some reinforcements (rewards or punishments) in the long run. Even though many tasks can be formulated in this framework, there are two problems with standard reinforcement learning algorithms: 1. Due to their learning time, tasks with a moderately large state space are, in practice, not solvable in reasonable time. 2. Given several problems to solve in some domain, a standard reinforcement learning agent learns an optimal policy from scratch for each problem. It would be far more useful to have systems that can solve several problems over time, using the knowledge obtained from previous problem instances to guide learning on new problems. We propose some methods to address these issues: 1. We define two formalisms to introduce a priori knowledge to guide the agent on a given task. The agent has an initial behaviour which can be modified during the learning process. 2. We define a method to induce generic behaviours, based on the previously solved tasks and on basic building blocks. Those behaviours are added to the primitive actions of a new related task to help the agent solve it.
Langenau, Holger. "Best constants in Markov-type inequalities with mixed weights." Doctoral thesis, Universitätsverlag der Technischen Universität Chemnitz, 2015. https://monarch.qucosa.de/id/qucosa%3A20429.
Touyar, Narjiss. "Approximation de Poisson du nombre de répétitions dans des chaînes de Markov d'ordre m ≥1 : Application à l'étude de significativité dans des séquences d'ADN." Rouen, 2006. http://www.theses.fr/2006ROUES007.
Genomes are dynamic and redundant structures which are regularly subject to mutations, deletions, duplications, and inversions. In order to better understand the structure of genomes and their mechanisms of evolution, it is important to carry out statistical significance analyses of repeats. The goal of this thesis is to study the statistical significance of the number of repeats of a given length t observed in a given sequence, denoted Nobs(t). This statistical study relies on evaluating the distribution of the random count N(t) in relevant random sequences, which then allows the p-value to be calculated. We start by studying the first-order Markov chain model and then treat the general case of m-order Markov chain models, m ≥ 1. We use the Chen-Stein method to bound the approximation error when the number of repeats of length t is approximated by a Poisson variable, and we show that this error converges to 0. To validate the Poisson approximation, simulations were carried out. The calculation of the p-value has been implemented for several genomes.
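The Poisson approximation described above is easy to probe numerically. In this illustrative sketch (transition matrix and word invented for the example, and a plain Monte Carlo check rather than the thesis's Chen-Stein bound), the count of a non-self-overlapping word in a simulated order-1 Markov sequence has mean and variance close to lambda = (L - |w| + 1) * mu(w):

```python
import numpy as np

# Count occurrences of a fixed word w in order-1 Markov sequences and compare
# the empirical mean and variance of the count with the Poisson parameter
# lambda = (L - |w| + 1) * mu(w), where mu(w) is the stationary probability
# of seeing w at a fixed position.

rng = np.random.default_rng(1)
letters = "ACGT"
P = np.array([[0.40, 0.20, 0.20, 0.20],     # invented transition matrix
              [0.10, 0.50, 0.20, 0.20],
              [0.25, 0.25, 0.25, 0.25],
              [0.20, 0.20, 0.10, 0.50]])

# Stationary distribution: left Perron eigenvector of P.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

def mu(word):
    idx = [letters.index(c) for c in word]
    p = pi[idx[0]]
    for a, b in zip(idx, idx[1:]):
        p *= P[a, b]
    return p

def count(word, L):
    idx = [letters.index(c) for c in word]
    s = [rng.choice(4, p=pi)]
    for _ in range(L - 1):
        s.append(rng.choice(4, p=P[s[-1]]))
    return sum(s[i:i + len(idx)] == idx for i in range(L - len(idx) + 1))

word, L, reps = "ACG", 200, 500
lam = (L - len(word) + 1) * mu(word)
counts = [count(word, L) for _ in range(reps)]
print(lam, np.mean(counts), np.var(counts))  # Poisson: mean ≈ variance ≈ lam
```

The word "ACG" cannot overlap itself, which is the favourable case for the Poisson approximation; self-overlapping words clump and need a compound Poisson correction.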
Derode, Anne-Sophie. "Approximations de la fiabilité : cas des systèmes à réparations différées ou à composants vieillissants." Lille 1, 2007. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2007/50376-2007-75.pdf.
Simpson, Daniel Peter. "Krylov subspace methods for approximating functions of symmetric positive definite matrices with applications to applied statistics and anomalous diffusion." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/29751/1/Simpson_Final_Thesis.pdf.
Simpson, Daniel Peter. "Krylov subspace methods for approximating functions of symmetric positive definite matrices with applications to applied statistics and anomalous diffusion." Queensland University of Technology, 2008. http://eprints.qut.edu.au/29751/.
Šnipas, Mindaugas. "Stochastinių sistemų aproksimavimas Markovo modeliais." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080902_100259-01228.
Applying numerical methods with approximation extends the class of systems represented by Markov processes that can be investigated, compared with analytical methods. In this work we approximated positive distribution functions using phase-type distributions (mixtures of Erlang distributions and the Coxian distribution), using both 2- and 3-moment matching algorithms. Analysis of M/G/1 and G/M/1 queueing systems showed that moment-based queueing approximation gives high accuracy. Algorithms and software were created to compute the characteristics of M/G/1 and G/M/1 systems described in an event-based language. Comparison with simulation results shows that the event-based language yields more precise results. Analysis of G/G/1 systems showed that moment-based approximation can be used to analyse difficult queueing systems.
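One reason two-moment matching works well for M/G/1, as the abstract reports, is that by the Pollaczek-Khinchine formula the mean waiting time depends on the service distribution only through its first two moments. A hedged sketch (lognormal service with invented parameters, simulated via the Lindley recursion; not the thesis's software):

```python
import numpy as np

# M/G/1 mean waiting time: Pollaczek-Khinchine gives
#     E[W] = lam * E[S^2] / (2 * (1 - rho)),   rho = lam * E[S],
# i.e. only the first two service-time moments matter. We check this against
# a direct simulation of the waiting times via the Lindley recursion.

rng = np.random.default_rng(2)
lam = 0.5                                # Poisson arrival rate (invented)
m, s = -0.7, 0.6                         # lognormal service parameters (invented)
ES = np.exp(m + s**2 / 2)                # E[S]
ES2 = np.exp(2 * m + 2 * s**2)           # E[S^2]
rho = lam * ES
EW = lam * ES2 / (2 * (1 - rho))         # Pollaczek-Khinchine mean wait

n = 200_000
A = rng.exponential(1 / lam, n)          # interarrival times
S = rng.lognormal(m, s, n)               # service times
W = np.zeros(n)
for i in range(1, n):                    # Lindley: W_i = max(0, W_{i-1} + S_{i-1} - A_i)
    W[i] = max(0.0, W[i - 1] + S[i - 1] - A[i])
print(EW, W[100_000:].mean())            # simulated mean wait ≈ EW
```

A phase-type fit matching E[S] and E[S^2] therefore reproduces the M/G/1 mean wait exactly; higher moments only affect finer characteristics.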
Fralix, Brian Haskel. "Stability and Non-stationary Characteristics of Queues." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14569.
Blanchet, Juliette. "Modèles markoviens et extensions pour la classification de données complexes." PhD thesis, Université Joseph Fourier (Grenoble), 2007. http://tel.archives-ouvertes.fr/tel-00195271.
The first concerns the classification of high-dimensional data. For such a problem we adopt a non-diagonal Gaussian Markov model, exploiting the fact that most high-dimensional observations actually live in class-specific subspaces of low intrinsic dimension. The number of free parameters of the model therefore remains reasonable.
The second point addressed is relaxing the simplifying assumption of unimodal independent noise, in particular Gaussian noise. For this we consider the recent triplet Markov field model and propose a new family of triplet Markov fields suited to supervised classification. We illustrate the flexibility and performance of our models in an application to the recognition of real texture images.
Finally, we consider the problem of classifying so-called incomplete observations, that is, observations for which some values are missing. We develop a Markovian method that does not require prior imputation of the missing values, and we present an application of this methodology to a real problem of gene classification.
Quagliaro, Laurence. "Une nouvelle méthode pour l'analyse quantitative de la sûreté de fonctionnement des systèmes : la méthode des graphes fictifs." Compiègne, 1993. http://www.theses.fr/1993COMPD657.
Dandoush, Abdulhalim. "Analysis and optimization of peer-to-peer storage-backup systems." Nice, 2010. http://www.theses.fr/2010NICE4003.
This thesis characterizes the performance of peer-to-peer storage systems in terms of the delivered data lifetime and data availability. Two schemes for recovering lost data are modeled and analyzed: the first is centralized and relies on a server that recovers multiple losses at once, whereas the second is distributed and recovers one loss at a time. For each scheme, we propose several Markovian models that apply equally to many distributed environments, as shown through numerical computations. These allow us to assess the impact of each system parameter on the performance. In particular, we provide guidelines on how to tune the system parameters in order to provide the desired lifetime and/or availability of data. The key assumptions made in the models are validated through intensive packet-level simulations and real traces collected from different distributed environments. In fact, we propose a realistic simulation model implemented in the Network Simulator (NS-2) for both the download and recovery processes. Although this simulator can accurately predict the behaviour of these processes while accounting for several constraints, such as the heterogeneity of peers and the underlying network topologies, it requires relatively long running times. To overcome this scalability limitation, we propose and analyze an algorithm that is efficient in time and quite simple, using the concept of "progressive filling" (max-min fairness). The validation of this algorithm consists in characterizing the distribution of the response time of parallel downloads in a distributed storage system, through simulations.
Cantarutti, Nicola. "Option pricing in exponential Lévy models with transaction costs." Doctoral thesis, Instituto Superior de Economia e Gestão, 2020. http://hdl.handle.net/10400.5/20786.
In this thesis we present a new model for pricing European call options in the presence of proportional transaction costs, when the stock price follows a general exponential Lévy process. The model is a generalization of the celebrated work of Davis, Panas and Zariphopoulou, where the value of the option is defined as the utility indifference price. This approach requires the solution of a stochastic singular control problem in finite time. We introduce the general formulation of the problem and derive the associated Hamilton-Jacobi-Bellman (HJB) equation, a nonlinear partial integro-differential equation with the form of a variational inequality. We prove that the value function of the problem is a solution of the HJB equation in the viscosity sense. The original problem is then simplified for the specific case of the exponential utility function, under the assumption of absence of default for the investor's portfolio. We solve the optimization problems numerically using the Markov chain approximation method. We also apply the multinomial method to the Variance Gamma process, which is an alternative and more efficient approach to discretize the continuous-time process. We provide a numerical scheme, prove that it is monotone, stable, and consistent, and show that the solution converges to the viscosity solution of the original HJB equation. Several numerical solutions are presented for both the original problem and the simplified problem. Numerical results are obtained for the cases of diffusion, Merton, and Variance Gamma processes. We provide convergence and time complexity analysis and comparisons with option prices computed using standard martingale pricing theory.
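The simplest instance of the Markov chain approximation idea mentioned in the abstract above is the binomial tree, sketched below for a plain European call without transaction costs or jumps (so this covers only the diffusion special case, not the thesis's utility-indifference model); the tree price converges to the Black-Scholes value as the number of steps grows:

```python
import math

# Approximate geometric Brownian motion by a two-state Markov chain per step
# (Cox-Ross-Rubinstein tree) and price a European call by backward induction;
# compare with the closed-form Black-Scholes price.

def binomial_call(S0, K, r, sigma, T, n):
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    q = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal payoffs, then backward induction through the chain
    v = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):
        v = [disc * (q * v[j + 1] + (1 - q) * v[j]) for j in range(step)]
    return v[0]

def black_scholes_call(S0, K, r, sigma, T):
    d1 = (math.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

exact = black_scholes_call(100, 100, 0.05, 0.2, 1.0)
approx = binomial_call(100, 100, 0.05, 0.2, 1.0, 500)
print(exact, approx)  # the two prices agree to within a few cents
```

The thesis's multinomial method extends the same discretization idea to Lévy processes with more than two successor states per step.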
Ben, Henda Noomene. "Infinite-state Stochastic and Parameterized Systems." Doctoral thesis, Uppsala University, Department of Information Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-8915.
A major current challenge consists in extending formal methods in order to handle infinite-state systems. Infiniteness stems from the fact that the system operates on unbounded data structures such as stacks, queues, clocks, and integers, as well as from parameterization.
Systems with unbounded data structures are natural models for reasoning about communication protocols, concurrent programs, real-time systems, etc., while parameterized systems are more suitable if the system consists of an arbitrary number of identical processes, as is the case for cache coherence protocols, distributed algorithms, and so forth.
In this thesis, we consider model checking problems for certain fundamental classes of probabilistic infinite-state systems, as well as the verification of safety properties in parameterized systems. First, we consider probabilistic systems with unbounded data structures. In particular, we study probabilistic extensions of Lossy Channel Systems (PLCS), Vector addition Systems with States (PVASS) and Noisy Turing Machine (PNTM). We show how we can describe the semantics of such models by infinite-state Markov chains; and then define certain abstract properties, which allow model checking several qualitative and quantitative problems.
Then, we consider parameterized systems and provide a method which allows checking safety for several classes that differ in the topologies (linear or tree) and the semantics (atomic or non-atomic). The method is based on deriving an over-approximation which allows the use of a symbolic backward reachability scheme. For each class, the over-approximation we define guarantees monotonicity of the induced approximate transition system with respect to an appropriate order. This property is convenient in the sense that it preserves upward closedness when computing sets of predecessors.
Liu, Yi. "Time-Varying Coefficient Models for Recurrent Events." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/97999.
Haddani, Mostafa. "Étude de modèles probabilistes de réseaux de télécommunication." Paris 6, 2001. http://www.theses.fr/2001PA066515.
Ryan, Elizabeth G. "Contributions to Bayesian experimental design." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/79628/1/Elizabeth_Ryan_Thesis.pdf.
Schwarzenegger, Rafael. "Matematické modely spolehlivosti v technické praxi." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318802.
Wegmann, Bertil. "Bayesian Inference in Structural Second-Price Auctions." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-57278.
At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 1: Epub ahead of print. Paper 2: Manuscript. Paper 3: Manuscript. Paper 4: Manuscript.
Goubet, Étienne. "Contrôle non destructif par analyse supervisée d'images 3D ultrasonores." Cachan, Ecole normale supérieure, 1999. http://www.theses.fr/1999DENS0011.
Повний текст джерелаAustad, Haakon Michael. "Approximations of Binary Markov Random Fields." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-14922.
Estandia, Gonzalez Luna Antonio. "Stable approximations for Markov-chain filters." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38303.
Chaput, Philippe. "Approximating Markov processes by averaging." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66654.
We revisit labelled Markov processes from a new viewpoint, in a sense "dual" to the usual one. Instead of considering the state-to-state transitions as a collection of sub-probability distributions over the state space, we view them as transformers of real-valued functions. Generalizing the operation of conditional expectation, we construct a category whose objects are labelled Markov processes viewed as aggregates of operators; the arrows of this category behave as projections onto a smaller state space. We define a notion of equivalence for such processes, called bisimulation, which is closely related to the usual definition for probabilistic processes. We show that one can construct, categorically, the smallest process bisimilar to a given process, and that this smallest object is linked to a well-known modal logic. We develop an approximation method based on this logic, in which the approximating processes have finite state spaces; moreover, we show that these approximants converge, in a categorical sense, to the smallest bisimilar process.
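The "dual" operator view described in this abstract can be illustrated on a finite chain: the kernel acts on real-valued functions, and averaging over the blocks of a state partition yields a smaller, approximate process. The matrix, the partition, and the uniform-averaging `lump` function below are a toy sketch of mine, not the thesis's categorical construction:

```python
import numpy as np

# A 3-state Markov kernel, viewed as an operator on functions f: S -> R.
P = np.array([[0.1, 0.9, 0.0],
              [0.1, 0.0, 0.9],
              [0.5, 0.0, 0.5]])

def apply_kernel(P, f):
    """The kernel as an operator on functions: (Pf)(x) = E[f(X') | X = x]."""
    return P @ f

def lump(P, blocks):
    """Project onto a smaller state space by averaging the kernel over each
    partition block (a uniform conditional expectation)."""
    k = len(blocks)
    Q = np.zeros((k, k))
    for i, bi in enumerate(blocks):
        for j, bj in enumerate(blocks):
            Q[i, j] = np.mean([P[s, bj].sum() for s in bi])
    return Q

# Lump states 1 and 2 together; Q is a 2-state approximate process.
Q = lump(P, [[0], [1, 2]])
```

Each row of `Q` still sums to 1, so the averaged object is again a Markov kernel, on the quotient state space.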
Nguepedja, Nankep Mac jugal. "Modélisation stochastique de systèmes biologiques multi-échelles et inhomogènes en espace." Thesis, Rennes, École normale supérieure, 2018. http://www.theses.fr/2018ENSR0012/document.
The growing need for precise predictions about complex systems leads to stronger mathematical models that account for an increasing number of parameters in addition to time: space, stochasticity, and multiple scales of dynamics. Combining these parameters gives rise to spatial (spatially inhomogeneous) multiscale stochastic models. Such models are, however, difficult to study, and their simulation is extremely time consuming, which limits their use. Still, their analysis has produced powerful tools for single-scale models, among which the law of large numbers (LLN) and the central limit theorem (CLT), and, from these, simpler models and accelerated algorithms. In that reduction process, so-called hybrid models and algorithms have arisen in the multiscale case, but without prior rigorous analysis. The question of hybrid approximation then arises, and its consistency is a central motivation of this thesis. In 2012, criteria for hybrid approximations of some homogeneous gene regulatory network models were established by Crudu, Debussche, Muller and Radulescu. The aim of this thesis is to complete their work and then generalize it to a spatial framework. We develop and simplify several models; all are continuous-time pure-jump Markov processes. The approach identifies conditions allowing, on the one hand, deterministic approximations by solutions of evolution equations of reaction-advection-diffusion type and, on the other hand, hybrid approximations by hybrid stochastic processes. For biochemical reaction networks, we establish a CLT; it corresponds to a hybrid approximation of a simplified homogeneous model (due to Crudu et al.). A LLN is then obtained for a spatial model with two time scales. Afterwards, a hybrid approximation is established for a spatial model with two time-space scales.
Finally, the asymptotic behaviour in the large-population and long-time regimes is presented for a model of cholera epidemics, through a LLN followed by an upper bound over compact sets, in the context of a corresponding large deviation principle (LDP). Interesting directions for future work include studying other spatial geometries, generalizing the CLT, completing the LDP estimates, and studying complex systems from other fields.
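The deterministic (LLN) limit of a rescaled pure-jump Markov process mentioned in this abstract can be illustrated with a minimal birth-death example; the model, rates, and parameters below are invented for illustration and are not taken from the thesis:

```python
import random

# A birth-death chain with birth rate N*b and death rate d*X: as the
# population scale N grows, the rescaled density X/N concentrates around
# the solution of the ODE x' = b - d*x, whose equilibrium is b/d (LLN).

def gillespie_birth_death(N, b, d, t_end, seed=0):
    """Exact (Gillespie) simulation; returns the rescaled density X/N at t_end."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while t < t_end:
        birth, death = N * b, d * x
        total = birth + death
        t += rng.expovariate(total)       # exponential waiting time
        if rng.random() < birth / total:  # pick a reaction proportionally
            x += 1
        else:
            x -= 1
    return x / N

density = gillespie_birth_death(2000, 1.0, 1.0, 5.0)
# for large N, `density` is close to the ODE equilibrium b/d = 1
```

Fluctuations around the deterministic limit are of order 1/sqrt(N), which is exactly what a CLT of the kind established in the thesis quantifies.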
Patrascu, Relu-Eugen. "Linear approximations from factored Markov Decision Processes." Waterloo, Ont. : University of Waterloo, 2004. http://etd.uwaterloo.ca/etd/rpatrasc2004.pdf.
"A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Doctor of Philosophy in Computer Science". Includes bibliographical references.
Thiéry, Christophe. "Itération sur les politiques optimiste et apprentissage du jeu de Tetris." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10128/document.
This thesis studies policy iteration methods with linear approximation of the value function for large-state-space problems in the reinforcement learning context. We first introduce a unified algorithm that generalizes the main stochastic optimal control methods. We show the convergence of this unified algorithm to the optimal value function in the tabular case, and derive a performance bound in the approximate case, where the value function is estimated. We then extend the literature on second-order linear approximation algorithms by proposing a generalization of Least-Squares Policy Iteration (LSPI) (Lagoudakis and Parr, 2003). Our new algorithm, Least-Squares λ Policy Iteration (LSλPI), adds to LSPI an idea from λ-Policy Iteration (Bertsekas and Ioffe, 1996): the damped (or optimistic) evaluation of the value function, which reduces the variance of the estimates and improves sample efficiency. LSλPI thus offers a bias-variance trade-off that can improve both the estimate of the value function and the performance of the resulting policy. In a second part, we study in depth the game of Tetris, a benchmark application that several works in the literature attempt to solve. Tetris is a difficult problem because of its structure and its large state space. We provide the first full review of the literature, covering reinforcement learning works, evolutionary methods that directly explore the policy space, and handcrafted controllers. We observe that reinforcement learning is less successful on this problem than direct policy-search approaches such as the cross-entropy method (Szita and Lorincz, 2006). Finally, we show how we built a controller that outperforms the previously known best controllers, and briefly discuss how it allowed us to win the Tetris event of the 2008 Reinforcement Learning Competition.
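The least-squares policy-evaluation step at the heart of LSPI-style methods mentioned in this abstract can be sketched in a few lines. The LSTD formulation, the toy two-state chain, and the one-hot features below are my illustrative assumptions, not the thesis's algorithm:

```python
import numpy as np

def lstd(samples, phi, gamma):
    """LSTD policy evaluation with linear features: solve A w = b, where
    A = sum phi(s) (phi(s) - gamma*phi(s'))^T and b = sum r*phi(s),
    so that V(s) ~ phi(s)^T w satisfies the Bellman equation in least squares."""
    k = len(phi(samples[0][0]))
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, r, s_next in samples:
        ps, pn = phi(s), phi(s_next)
        A += np.outer(ps, ps - gamma * pn)
        b += r * ps
    return np.linalg.solve(A, b)

phi = lambda s: np.eye(2)[s]          # one-hot (tabular) features
samples = [(0, 0.0, 1), (1, 1.0, 1)]  # s0 -(r=0)-> s1,  s1 -(r=1)-> s1
w = lstd(samples, phi, gamma=0.5)     # true values: V(s0) = 1, V(s1) = 2
```

With tabular features the solution is exact; with fewer features than states, the same system yields the least-squares projection of the value function, which is where the bias-variance considerations discussed above come into play.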