Doctoral dissertations on the topic "Sequential Monte Carlo (SMC) method"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the 21 best doctoral dissertations for your research on the topic "Sequential Monte Carlo (SMC) method".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever the relevant details are available in the work's metadata.
Browse doctoral dissertations from a wide variety of disciplines and compile an accurate bibliography.
GONZATO, LUCA. "Application of Sequential Monte Carlo Methods to Dynamic Asset Pricing Models". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2020. http://hdl.handle.net/10281/295144.
In this thesis we consider the application of Sequential Monte Carlo (SMC) methods to continuous-time asset pricing models. The first chapter of the thesis gives a self-contained overview of SMC methods. In particular, starting from basic Monte Carlo techniques we move to recent state-of-the-art SMC algorithms. In the second chapter we review existing methods for the exact simulation of Hawkes processes. From our analysis we infer that the simulation scheme of Dassios and Zhao (2013) outperforms the other algorithms, including the most popular thinning method proposed by Ogata (1980). This chapter also serves as an introduction to self-exciting jump processes, which are the subject of Chapter 3. Hence, in the third chapter we propose a new self-exciting jump-diffusion model to describe oil price dynamics. We estimate the model by applying a state-of-the-art SMC sampler to both spot and futures data. From the estimation results we find evidence of self-excitation in the oil market, which leads to an improved fit and better out-of-sample futures forecasting performance with respect to jump-diffusion models with constant intensity. Furthermore, we compute and discuss two optimal hedging strategies based on futures trading. The optimality of the first hedging strategy is based on variance minimization, while the second strategy also takes into account the third-order moment contribution when considering investors' attitudes. A comparison between the two strategies in terms of hedging effectiveness is provided. Finally, in the fourth chapter we consider the estimation of continuous-time Wishart stochastic volatility models by observing portfolios of weighted options as in Orlowski (2019). In this framework the likelihood is not known in closed form, so we aim to estimate it using SMC techniques. To this end, we marginalize the latent states and perform marginal likelihood estimation by adapting the recently proposed controlled SMC algorithm (Heng et al., 2019). Our numerical experiments show that the proposed methodology gives much better results than standard filtering techniques. The great stability of our SMC method therefore opens the door to effective joint estimation of latent states and unknown parameters in a Bayesian fashion. This last step amounts to designing an SMC sampler based on a pseudo-marginal argument and is currently in preparation.
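As a hedged illustration of the thinning approach to exact simulation of Hawkes processes reviewed in the second chapter, the Python sketch below simulates a univariate Hawkes process with an exponentially decaying excitation kernel; the parameter names and values are illustrative assumptions and the code is not taken from the thesis.

```python
import numpy as np

def hawkes_thinning(mu, alpha, beta, T, seed=0):
    """Simulate a univariate Hawkes process with conditional intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    on [0, T] by thinning (exponential kernel)."""
    rng = np.random.default_rng(seed)
    events = []

    def excitation(t):
        # Summed kernel contributions of the events generated so far (all <= t).
        return alpha * np.sum(np.exp(-beta * (t - np.asarray(events, dtype=float))))

    t = 0.0
    while True:
        # Between events the intensity only decays, so its value just after the
        # current time is a valid upper bound for proposing the next candidate.
        lam_bar = mu + excitation(t)
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            break
        # Accept the candidate with probability lambda(t) / lam_bar.
        if rng.uniform() * lam_bar <= mu + excitation(t):
            events.append(t)
    return np.array(events)

if __name__ == "__main__":
    ev = hawkes_thinning(mu=0.5, alpha=0.8, beta=1.5, T=200.0, seed=1)
    # With branching ratio alpha / beta < 1, the expected count is roughly
    # mu * T / (1 - alpha / beta), i.e. about 214 events here.
    print(len(ev), "events simulated")
```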
Ozgur, Soner. "Reduced Complexity Sequential Monte Carlo Algorithms for Blind Receivers". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10518.
Creal, Drew D. "Essays in sequential Monte Carlo methods for economics and finance /". Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/7444.
Lang, Lixin. "Advancing Sequential Monte Carlo For Model Checking, Prior Smoothing And Applications In Engineering And Science". The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1204582289.
Kuhlenschmidt, Bernd. "On the stability of sequential Monte Carlo methods for parameter estimation". Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709098.
Skrivanek, Zachary. "Sequential Imputation and Linkage Analysis". The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1039121487.
Chen, Wen-shiang. "Bayesian estimation by sequential Monte Carlo sampling for nonlinear dynamic systems". Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1086146309.
Pełny tekst źródłaTitle from first page of PDF file. Document formatted into pages; contains xiv, 117 p. : ill. (some col.). Advisors: Bhavik R. Bakshi and Prem K. Goel, Department of Chemical Engineering. Includes bibliographical references (p. 114-117).
Valdes, LeRoy I. "Analysis Of Sequential Barycenter Random Probability Measures via Discrete Constructions". Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3304/.
Fuglesang, Rutger. "Particle-Based Online Bayesian Learning of Static Parameters with Application to Mixture Models". Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279847.
Pełny tekst źródłaDetta examensarbete undersöker möjligheten att använda Sekventiella Monte Carlo metoder (SMC) för att utveckla en algoritm med syfte att utvinna parametrar i realtid givet en okänd modell. Då statistisk slutledning från dataströmmar medför svårigheter, särskilt i parameter-modeller, kommer arbetets fokus ligga i utvecklandet av en Monte Carlo algoritm vars uppgift är att sekvensiellt nyttja modellens posteriori fördelningar. Resultatet är att okända, statistiska parametrar kommer att förflyttas mot det krympande stödet av posterioren med hjälp utav en artificiell Markov dynamik, vilket tillåter en korrekt pseudo-marginalisering utav mål-distributionen. Algoritmen kommer sedan att testas på en enkel Gaussisk-modell, en Gaussisk mixturmodell (GMM) och till sist en GMM vars dimension är okänd. Kodningen i detta projekt har utförts i Matlab.
Carr, Michael John. "Estimating parameters of a stochastic cell invasion model with Fluorescent cell cycle labelling using approximate Bayesian computation". Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/226946/1/Michael_Carr_Thesis.pdf.
Johansson, Anders. "Acoustic Sound Source Localisation and Tracking : in Indoor Environments". Doctoral thesis, Blekinge Tekniska Högskola [bth.se], School of Engineering - Dept. of Signal Processing, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00401.
Allaya, Mouhamad M. "Méthodes de Monte-Carlo EM et approximations particulaires : application à la calibration d'un modèle de volatilité stochastique". Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010072/document.
This thesis pursues a double perspective in the joint use of sequential Monte Carlo (SMC) methods and the Expectation-Maximization (EM) algorithm for hidden Markov models whose unobserved signal component has a Markov dependence structure of order greater than one. We begin with a brief description of the theoretical basis of both statistical concepts in Chapters 1 and 2, which are devoted to them. We then focus on the simultaneous implementation of both concepts in Chapter 3, in the usual setting where the dependence structure is of order 1. The contribution of SMC methods in this work lies in their ability to effectively approximate any bounded conditional functional, in particular filtering and smoothing quantities, in non-linear and non-Gaussian settings. The EM algorithm is itself motivated by the presence of both observable and unobservable (or partially observed) variables in hidden Markov models, and particularly in the stochastic volatility models under study. Having presented the EM algorithm and the SMC methods, together with some of their properties, in Chapters 1 and 2 respectively, we illustrate these two statistical tools through the calibration of a stochastic volatility model. This application is done for exchange rates and for some stock indexes in Chapter 3. We conclude this chapter with a slight departure from the canonical stochastic volatility model, together with Monte Carlo simulations on the resulting model. Finally, in Chapters 4 and 5 we strive to provide the theoretical and practical foundations of an extension of sequential Monte Carlo methods, including particle filtering and smoothing, to the case where the Markov structure is more pronounced. As an illustration, we give the example of a degenerate stochastic volatility model whose approximation has such a dependence property.
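To fix ideas on the SMC ingredient of such a calibration, the sketch below runs a bootstrap particle filter on a canonical stochastic volatility model and returns the log-likelihood estimate that a Monte Carlo EM scheme would reuse; the model parametrization and all values are generic assumptions, not those of the thesis.

```python
import numpy as np

def bootstrap_pf_sv(y, phi, sigma, beta=1.0, n_particles=1000, seed=0):
    """Bootstrap particle filter for the canonical stochastic volatility model
        x_t = phi * x_{t-1} + sigma * eta_t,   y_t = beta * exp(x_t / 2) * eps_t,
    returning an estimate of log p(y_{1:T} | phi, sigma, beta), the quantity a
    Monte Carlo EM scheme repeatedly approximates."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma / np.sqrt(1.0 - phi ** 2), n_particles)  # stationary initial law
    loglik = 0.0
    for yt in y:
        x = phi * x + sigma * rng.standard_normal(n_particles)        # propagate with the prior kernel
        var = (beta ** 2) * np.exp(x)                                  # observation variance per particle
        logw = -0.5 * (np.log(2.0 * np.pi * var) + yt ** 2 / var)      # Gaussian observation density
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                                 # incremental likelihood estimate
        w /= w.sum()
        x = x[rng.choice(n_particles, n_particles, p=w)]               # multinomial resampling
    return loglik

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    T, phi, sigma = 300, 0.95, 0.3
    xs = np.zeros(T)
    for t in range(1, T):
        xs[t] = phi * xs[t - 1] + sigma * rng.standard_normal()
    ys = np.exp(xs / 2.0) * rng.standard_normal(T)
    print("estimated log-likelihood:", bootstrap_pf_sv(ys, phi, sigma))
```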
Montazeri, Shahtori Narges. "Quantifying the impact of contact tracing on ebola spreading". Thesis, Kansas State University, 2016. http://hdl.handle.net/2097/34540.
Department of Electrical and Computer Engineering
Faryad Darabi Sahneh
Recent experience of the 2014 Ebola outbreak highlighted the importance of immediate response to impede Ebola transmission at its very early stage. To this aim, efficient and effective allocation of limited resources is crucial. Among standard interventions is the practice of following up with physical contacts of individuals diagnosed with Ebola virus disease, known as contact tracing. In an effort to objectively understand the effect of possible contact tracing protocols, we explicitly develop a model of Ebola transmission incorporating contact tracing. Our modeling framework has several features to suit early-stage Ebola transmission: 1) the network model is patient-centric because when the number of infected cases is small only the myopic networks of infected individuals matter and the rest of possible social contacts are irrelevant, 2) the Ebola disease model is individual-based and stochastic because at the early stages of spread, random fluctuations are significant and must be captured appropriately, 3) the contact tracing model is parameterizable to analyze the impact of critical aspects of contact tracing protocols. Notably, we propose an activity-driven network approach to contact tracing, and develop a Monte Carlo method to compute the basic reproductive number of the disease spread in different scenarios. Exhaustive simulation experiments suggest that while contact tracing is important in stopping the Ebola spread, it does not need to be done too urgently. This result is due to the rather long incubation period of Ebola infection. However, immediate hospitalization of infected cases is crucial and requires the most attention and resource allocation. Moreover, to investigate the impact of mitigation strategies in the 2014 Ebola outbreak, we consider reported data in Guinea, one of the three West African countries that experienced the Ebola virus disease outbreak. We formulate a multivariate sequential Monte Carlo filter that utilizes mechanistic models for Ebola virus propagation to simultaneously estimate the disease progression states and the model parameters from reported incidence data streams. This method has the advantage of performing the inference online as new data become available and of estimating the evolution of the basic reproductive ratio R₀(t) throughout the Ebola outbreak. Our analysis identifies a peak in the basic reproductive ratio close to the time of reports of Ebola cases in Europe and the USA.
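As a toy example of estimating a basic reproductive number by Monte Carlo simulation, assuming a far simpler homogeneous SIR-type model than the activity-driven network model developed in the thesis, one could write:

```python
import numpy as np

def monte_carlo_R0(beta, gamma, n_sims=100_000, seed=0):
    """Monte Carlo estimate of the basic reproductive number for a simple
    stochastic SIR-type model: an index case transmits at rate beta over an
    exponentially distributed infectious period with mean 1/gamma, in a fully
    susceptible population. The analytic value is beta / gamma."""
    rng = np.random.default_rng(seed)
    periods = rng.exponential(1.0 / gamma, n_sims)       # infectious periods
    secondary = rng.poisson(beta * periods)              # secondary cases given each period
    return secondary.mean(), secondary.std() / np.sqrt(n_sims)

if __name__ == "__main__":
    est, se = monte_carlo_R0(beta=0.3, gamma=0.1)
    print(f"R0 estimate = {est:.3f} +/- {2 * se:.3f} (analytic value 3.0)")
```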
Velmurugan, Rajbabu. "Implementation Strategies for Particle Filter based Target Tracking". Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14611.
Di Caro, Gianni. "Ant colony optimization and its application to adaptive routing in telecommunication networks". Doctoral thesis, Université Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211149.
The simultaneous presence of these and other fascinating and unique characteristics has made ant societies an attractive and inspiring model for building new algorithms and new multi-agent systems. In the last decade, ant societies have been taken as a reference for an ever-growing body of scientific work, mostly in the fields of robotics, operations research, and telecommunications.
Among the different works inspired by ant colonies, the Ant Colony Optimization metaheuristic (ACO) is probably the most successful and popular one. The ACO metaheuristic is a multi-agent framework for combinatorial optimization whose main components are: a set of ant-like agents, the use of memory and of stochastic decisions, and strategies of collective and distributed learning.
It finds its roots in the experimental observation of a specific foraging behavior of some ant colonies that, under appropriate conditions, are able to select the shortest path among the few possible paths connecting their nest to a food site. The mediator of this behavior is pheromone, a volatile chemical substance laid on the ground by the ants while walking, which in turn affects their moving decisions according to its local intensity.
All the elements playing an essential role in the ant colony foraging behavior were understood, thoroughly reverse-engineered and put to work to solve problems of combinatorial optimization by Marco Dorigo and his co-workers at the beginning of the 1990's.
From that moment on there has been a flourishing of new combinatorial optimization algorithms designed after the first algorithms of Dorigo et al., as well as of related scientific events.
In 1999 the ACO metaheuristic was defined by Dorigo, Di Caro and Gambardella with the purpose of providing a common framework for describing and analyzing all these algorithms inspired by the same ant colony behavior and by the same common process of reverse-engineering of this behavior. Therefore, the ACO metaheuristic was defined a posteriori, as the result of a synthesis effort carried out on the study of the characteristics of all these ant-inspired algorithms and on the abstraction of their common traits.
The ACO's synthesis was also motivated by the usually good performance shown by the algorithms (e.g. for several important combinatorial problems like the quadratic assignment, vehicle routing and job shop scheduling, ACO implementations have outperformed state-of-the-art algorithms).
The definition and study of the ACO metaheuristic is one of the two fundamental goals of the thesis. The other, closely related to the first, consists in the design, implementation, and testing of ACO instances for problems of adaptive routing in telecommunication networks.
This thesis is an in-depth journey through the ACO metaheuristic, during which we have (re)defined ACO and tried to get a clear understanding of its potentialities, limits, and relationships with other frameworks and with its biological background. The thesis takes into account all the developments that have followed the original 1999 definition, and provides a formal and comprehensive systematization of the subject, as well as an up-to-date and quite comprehensive review of current applications. We have also identified dynamic problems in telecommunication networks as the most appropriate domain of application for the ACO ideas. According to this understanding, in the most applicative part of the thesis we have focused on problems of adaptive routing in networks and we have developed and tested four new algorithms.
Adopting an original point of view with respect to the way ACO was first defined (but maintaining full conceptual and terminological consistency), ACO is here defined and mainly discussed in terms of sequential decision processes and Monte Carlo sampling and learning.
More precisely, ACO is characterized as a policy search strategy aimed at learning the distributed parameters (called pheromone variables in accordance with the biological metaphor) of the stochastic decision policy which is used by so-called ant agents to generate solutions. Each ant represents in practice an independent sequential decision process aimed at constructing a possibly feasible solution for the optimization problem at hand by using only information local to the decision step.
Ants are repeatedly and concurrently generated in order to sample the solution set according to the current policy. The outcomes of the generated solutions are used to partially evaluate the current policy, spot the most promising search areas, and update the policy parameters in order to possibly focus the search in those promising areas while keeping a satisfactory level of overall exploration.
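To make this policy-search reading concrete, here is a toy Ant System sketch on a small directed graph, written in Python: pheromone values parameterize the ants' stochastic decision policy and are evaporated and reinforced according to the costs of the sampled paths. The graph, parameter values, and update rule are generic textbook choices assumed for illustration, not the algorithms developed in the thesis.

```python
import numpy as np

def aco_shortest_path(cost, source, target, n_ants=20, n_iters=50,
                      alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal Ant System: ants sample paths with a stochastic policy whose
    parameters are the pheromone values tau[i, j]; the costs of the sampled
    paths are then used to evaporate and reinforce the pheromone, i.e. to
    update the policy. cost[i, j] is the edge cost (np.inf if no edge)."""
    rng = np.random.default_rng(seed)
    n = cost.shape[0]
    tau = np.ones((n, n))                                   # initial pheromone: uniform policy
    eta = np.zeros_like(cost)                               # heuristic desirability 1 / cost
    edges = np.isfinite(cost) & (cost > 0)
    eta[edges] = 1.0 / cost[edges]
    best_path, best_cost = None, np.inf
    for _ in range(n_iters):
        sampled = []
        for _ in range(n_ants):
            node, path, visited = source, [source], {source}
            while node != target:
                scores = (tau[node] ** alpha) * (eta[node] ** beta)
                scores[list(visited)] = 0.0                 # no revisits
                if scores.sum() == 0.0:                     # dead end: discard this ant
                    path = None
                    break
                node = int(rng.choice(n, p=scores / scores.sum()))
                path.append(node)
                visited.add(node)
            if path is not None:
                c = sum(cost[path[i], path[i + 1]] for i in range(len(path) - 1))
                sampled.append((path, c))
                if c < best_cost:
                    best_path, best_cost = path, c
        tau *= (1.0 - rho)                                  # evaporation keeps exploration alive
        for path, c in sampled:                             # reinforce the edges of sampled paths
            for i in range(len(path) - 1):
                tau[path[i], path[i + 1]] += 1.0 / c
    return best_path, best_cost

if __name__ == "__main__":
    inf = np.inf
    cost = np.array([[inf, 2.0, 5.0, inf],
                     [inf, inf, 1.0, 7.0],
                     [inf, inf, inf, 2.0],
                     [inf, inf, inf, inf]])
    print(aco_shortest_path(cost, source=0, target=3))      # expects ([0, 1, 2, 3], 5.0)
```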
This way of looking at ACO has made it easier to disclose the close relationships between ACO and other well-known frameworks, like dynamic programming, Markov and non-Markov decision processes, and reinforcement learning. In turn, this has favored reasoning on the general properties of ACO in terms of the amount of complete state information which is used by the ACO's ants to make optimized decisions and to encode in pheromone variables a memory of both the decisions that belonged to the sampled solutions and their quality.
The ACO's biological context of inspiration is fully acknowledged in the thesis. We report, with extensive discussions, on the shortest-path behaviors of ant colonies and on the identification and analysis of the few nonlinear dynamics that are at the very core of self-organized behaviors in both the ants and other societal organizations. We discuss these dynamics in the general framework of stigmergic modeling, based on asynchronous environment-mediated communication protocols and on (pheromone) variables priming coordinated responses of a number of "cheap" and concurrent agents.
The second half of the thesis is devoted to the study of the application of ACO to problems of online routing in telecommunication networks. This class of problems has been identified in the thesis as the most appropriate for the application of the multi-agent, distributed, and adaptive nature of the ACO architecture.
Four novel ACO algorithms for problems of adaptive routing in telecommunication networks are thoroughly described. The four algorithms cover a wide spectrum of possible types of network: two of them deliver best-effort traffic in wired IP networks, one is intended for quality-of-service (QoS) traffic in ATM networks, and the fourth is for best-effort traffic in mobile ad hoc networks.
The two algorithms for wired IP networks have been extensively tested by simulation studies and compared to state-of-the-art algorithms for a wide set of reference scenarios. The algorithm for mobile ad hoc networks is still under development, but quite extensive results and comparisons with a popular state-of-the-art algorithm are reported. No results are reported for the algorithm for QoS, which has not been fully tested. The observed experimental performance is excellent, especially for the case of wired IP networks: our algorithms always perform comparably or much better than the state-of-the-art competitors.
In the thesis we try to understand the rationale behind the excellent performance obtained and the good level of popularity reached by our algorithms. More generally, we discuss the reasons for the general efficacy of the ACO approach for network routing problems compared to the characteristics of more classical approaches. Moving further, we also informally define Ant Colony Routing (ACR), a multi-agent framework that explicitly integrates learning components into the ACO design in order to define a general and, in a sense, futuristic architecture for autonomic network control.
Most of the material of the thesis comes from a re-elaboration of material co-authored and published in a number of books, journal papers, conference proceedings, and technical reports. The detailed list of references is provided in the Introduction.
Doctorat en sciences appliquées
Acosta, Argueta Lesly María. "Particle filtering estimation for linear and nonlinear state-space models". Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/134356.
The sequential estimation of the states (filtering), and the corresponding simultaneous estimation of the states and the fixed parameters, of a dynamic model formulated in state-space form, whether linear or not, is a problem of major importance in many fields, such as finance. The main objective of this thesis is to estimate sequentially and efficiently, from a Bayesian point of view and using the particle filtering methodology, the states and/or the fixed parameters of a non-standard dynamic state-space model: possibly non-linear, non-Gaussian or non-stationary. The work consists of 7 chapters and is organized in two parts. Chapter 1 introduces basic concepts, the motivation, the purpose and the structure of the thesis. The first part of the thesis (chapters 2 to 4) focuses solely on the estimation of the states. Chapter 2 presents an exhaustive review of the most classical algorithms, both those not based on simulation (KF, EKF, UKF) and those based on simulation (SIS, SIR, ASIR, EPF, UPF). Besides describing these filters from the literature in detail, the notation has been unified so that it is consistent and comparable across the different algorithms implemented throughout this work. Chapters 3 and 4 focus on extensive Monte Carlo (MC) studies that confirm the efficiency of the particle filtering methodology for estimating the latent states of a dynamic process formulated in state-space form, whether linear or not. Some complementary MC studies are carried out to assess various aspects of the particle filtering methodology, such as the degeneracy problem, the choice of resampling strategy, the number of particles used, or the length of the time series. Specifically, Chapter 3 illustrates the behaviour of the particle filtering methodology in a linear and Gaussian context in comparison with the optimal and exact Kalman filter. The filtering ability of the four particle filter variants studied (SIR, SIRopt, ASIR, KPF; the last being a special case of the EPF algorithm) was assessed on two apparently simple but important time series processes: the so-called Local Level Model (LLM) and the AR(1) plus noise model, which are non-stationary and stationary, respectively. This chapter studies in depth relevant issues within the adopted approach, such as the impact on estimation of the signal-to-noise ratio (SNR), of the length of the time series, and of the number of particles. Chapter 4 assesses and illustrates the behaviour of the particle filtering methodology in a non-linear context. Specifically, a non-linear, non-Gaussian and non-stationary state-space model taken from the literature is used to illustrate the behaviour of four particle filters (SIR, ASIR, EPF, UPF) against two well-known filters not based on simulation (EKF, UKF). Here the residual and stratified resampling schemes are compared and the effect of increasing the number of particles is assessed. In the second part (chapters 5 and 6), extensive MC studies are also carried out, but now the main objective is the simultaneous estimation of the states and the fixed parameters of certain selected models. This research area remains very active and is where this thesis contributes most.
Chapter 5 provides a partial review of the methods for carrying out the simultaneous estimation of states and fixed parameters through the particle filtering methodology. These filters are an extension of those previously adopted only to estimate the states. An MC study is carried out here to estimate the state (level) and the two variance parameters of the non-stationary LLM, using four particle filter variants (LW, SIRJ, SIRoptJ, KPFJ), six typical SNR scenarios, and two scenarios for the so-called discount factor required in the diversification step. In this chapter, the SIRJ particle filter variant (Sample Importance Resampling with Jittering) is proposed as an alternative to the benchmark filter of Liu and West (LW PF). The combined use of an importance distribution based on the Kalman filter and a diversification (jittering) step is also proposed and explored, giving rise to the particle filter variant called Kalman Particle Filtering with Jittering (KPFJ). Chapter 6 focuses on the estimation of the states and the fixed parameters of the basic non-standard stochastic volatility model known as the stochastic autoregressive model of order one, SARV(1). After an introduction and a detailed description of the characteristic features of financial time series, the estimation ability of two particle filter variants (SIRJ vs. LW (Liu and West)) is demonstrated through MC studies using simulated data. The chapter ends with an application to two real data sets from the financial area: the Spanish IBEX 35 returns index and the European Brent spot prices (in dollars). The contribution of chapters 5 and 6 consists in proposing new particle filter variants, such as the KPFJ, the SIRJ and the SIRoptJ (a special case of the SIRJ algorithm using an optimal importance distribution), which have been developed throughout this work. It is also suggested that the so-called EPFJ (Extended Particle Filter with Jittering) and UPFJ (Unscented Particle Filter with Jittering) filters could be reasonable options when dealing with highly non-linear models, the KPFJ being a special case of the EPFJ algorithm. In this part, relevant aspects of the particle filtering methodology are also addressed, such as the potential impact on estimation of the length of the time series, the discount factor parameter, and the number of particles. Throughout this work the pseudo-code for every filter studied has been written and implemented in the R language. The results presented are obtained by means of extensive Monte Carlo (MC) simulations, covering the various scenarios described in the thesis. The intrinsic characteristics of the model under study guided the choice of the filters to compare in each specific situation. Moreover, the comparison of the filters is based on the RMSE (Root Mean Square Error), the CPU time, and the degree of degeneracy. Finally, Chapter 7 presents the discussion, the contributions and future lines of research. Some complementary theoretical and practical aspects are presented in the appendices.
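The comparison carried out in Chapter 3 can be pictured with a compact sketch that filters the Local Level Model with the exact Kalman filter and with a plain SIR particle filter and compares their RMSEs against the simulated true states; the noise variances, priors, and particle count below are assumptions for illustration, and the sketch is in Python rather than the R implementations of the thesis.

```python
import numpy as np

def simulate_llm(T, q, r, seed=0):
    """Local Level Model: x_t = x_{t-1} + w_t (var q), y_t = x_t + v_t (var r)."""
    rng = np.random.default_rng(seed)
    x = np.cumsum(rng.normal(0.0, np.sqrt(q), T))
    return x, x + rng.normal(0.0, np.sqrt(r), T)

def kalman_llm(y, q, r, m0=0.0, p0=10.0):
    """Exact Kalman filter for the Local Level Model (the benchmark)."""
    m, p, means = m0, p0, []
    for yt in y:
        p = p + q                     # predict
        k = p / (p + r)               # Kalman gain
        m = m + k * (yt - m)          # update
        p = (1.0 - k) * p
        means.append(m)
    return np.array(means)

def sir_pf_llm(y, q, r, n_particles=2000, seed=1):
    """Plain SIR (bootstrap) particle filter for the same model."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(10.0), n_particles)          # matches the Kalman prior
    means = []
    for yt in y:
        x = x + rng.normal(0.0, np.sqrt(q), n_particles)     # propagate
        logw = -0.5 * (yt - x) ** 2 / r                       # reweight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))
        x = x[rng.choice(n_particles, n_particles, p=w)]      # resample
    return np.array(means)

if __name__ == "__main__":
    q, r = 0.1, 1.0                                           # SNR = q / r
    x, y = simulate_llm(300, q, r)
    rmse = lambda est: np.sqrt(np.mean((est - x) ** 2))
    print("Kalman RMSE:", rmse(kalman_llm(y, q, r)), " SIR PF RMSE:", rmse(sir_pf_llm(y, q, r)))
```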
YADAV, SATYENDRA. "Improvement in Sensor Node Localization". Thesis, 2013. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14223.
Mr. Vinod Kumar, Associate Professor, Delhi Technological University
Chen, Yu-Lung, and 陳郁龍. "A Method for Improving Sequential Monte Carlo Method in Video Object Tracking". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/83502138929579300795.
National Dong Hwa University
Department of Computer Science and Information Engineering
94
In recent years, moving object tracking in video has attracted the interest of many researchers. This technology leads to numerous applications, particularly in computer vision, such as human-machine interfaces, video transmission and compression, and surveillance. Among the many techniques for moving object tracking, the Sequential Monte Carlo method, also referred to as the particle filter, has been a commonly used option due to its simplicity. However, the particle filter suffers from a "drifting problem" when deciding the position of the target object. Therefore, the method is not well suited to applications requiring higher tracking accuracy. In addition, the particle filter usually fails to track objects with complicated geometries. This thesis proposes a multi-component tracking method to attain more robust results for tracking objects with complicated geometries.
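For context, a generic single-component SIR (bootstrap) particle filter for 2-D tracking, of the kind such work typically takes as a baseline, can be sketched in Python as follows; the near-constant-velocity motion model, the Gaussian measurement likelihood, and all numerical values are assumptions for illustration, and the multi-component method proposed in the thesis is not reproduced here.

```python
import numpy as np

def track_2d_sir(observations, q=0.5, r=2.0, n_particles=3000, seed=0):
    """Generic SIR (bootstrap) particle filter for 2-D target tracking with a
    near-constant-velocity motion model and noisy position measurements.
    Each particle carries a state [x, y, vx, vy]."""
    rng = np.random.default_rng(seed)
    state = np.zeros((n_particles, 4))
    state[:, :2] = observations[0] + rng.normal(0.0, r, (n_particles, 2))   # init around first measurement
    estimates = []
    for z in observations:
        state[:, :2] += state[:, 2:]                                        # position += velocity
        state[:, 2:] += rng.normal(0.0, q, (n_particles, 2))                # random acceleration noise
        logw = -0.5 * np.sum((z - state[:, :2]) ** 2, axis=1) / r ** 2      # Gaussian measurement likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(w @ state[:, :2])                                  # weighted position estimate
        state = state[rng.choice(n_particles, n_particles, p=w)]            # resample
    return np.array(estimates)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    s = np.linspace(0.0, 50.0, 60)
    truth = np.column_stack([s, 0.02 * s ** 2])                             # a curved trajectory
    measurements = truth + rng.normal(0.0, 2.0, truth.shape)
    est = track_2d_sir(measurements)
    print("mean position error:", np.mean(np.linalg.norm(est - truth, axis=1)))
```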
Bloem-Reddy, Benjamin Michael. "Random Walk Models, Preferential Attachment, and Sequential Monte Carlo Methods for Analysis of Network Data". Thesis, 2017. https://doi.org/10.7916/D8348R5Q.
Lehmann, Eric André. "Particle filtering methods for acoustic source localisation and tracking". PhD thesis, 2004. http://hdl.handle.net/1885/149771.
Ji, Chunlin. "Advances in Bayesian Modelling and Computation: Spatio-Temporal Processes, Model Assessment and Adaptive MCMC". Diss., 2009. http://hdl.handle.net/10161/1609.
The modelling and analysis of complex stochastic systems with increasingly large data sets, state-spaces and parameters provides major stimulus to research in Bayesian nonparametric methods and Bayesian computation. This dissertation presents advances in both nonparametric modelling and statistical computation stimulated by challenging problems of analysis in complex spatio-temporal systems and core computational issues in model fitting and model assessment. The first part of the thesis, represented by chapters 2 to 4, concerns novel, nonparametric Bayesian mixture models for spatial point processes, with advances in modelling, computation and applications in biological contexts. Chapter 2 describes and develops models for spatial point processes in which the point outcomes are latent, where indirect observations related to the point outcomes are available, and in which the underlying spatial intensity functions are typically highly heterogeneous. Spatial intensities of inhomogeneous Poisson processes are represented via flexible nonparametric Bayesian mixture models. Computational approaches are presented for this new class of spatial point process mixtures and extended to the context of unobserved point process outcomes. Two examples drawn from a central, motivating context, that of immunofluorescence histology analysis in biological studies generating high-resolution imaging data, demonstrate the modelling approach and computational methodology. Chapters 3 and 4 extend this framework to define a class of flexible Bayesian nonparametric models for inhomogeneous spatio-temporal point processes, adding dynamic models for underlying intensity patterns. Dependent Dirichlet process mixture models are introduced as core components of this new time-varying spatial model. Utilizing such nonparametric mixture models for the spatial process intensity functions allows the introduction of time variation via dynamic, state-space models for parameters characterizing the intensities. Bayesian inference and model-fitting is addressed via novel particle filtering ideas and methods. Illustrative simulation examples include studies in problems of extended target tracking and substantive data analysis in cell fluorescent microscopic imaging tracking problems.
The second part of the thesis, consisting of chapters 5 and 6, concerns advances in computational methods for some core and generic Bayesian inferential problems. Chapter 5 develops a novel approach to estimation of upper and lower bounds for marginal likelihoods in Bayesian modelling using refinements of existing variational methods. Traditional variational approaches only provide lower bound estimation; this new lower/upper bound analysis is able to provide accurate and tight bounds in many problems, and so facilitates more reliable computation for Bayesian model comparison while also providing a way to assess the adequacy of variational densities as approximations to exact, intractable posteriors. The advances also include demonstration of the significant improvements that may be achieved in marginal likelihood estimation by marginalizing some parameters in the model. A distinct contribution to Bayesian computation is covered in Chapter 6. This concerns a generic framework for designing adaptive MCMC algorithms, emphasizing the adaptive Metropolized independence sampler and an effective adaptation strategy using a family of mixture distribution proposals. This work is coupled with development of a novel adaptive approach to computation in nonparametric modelling with large data sets; here a sequential learning approach is defined that iteratively utilizes smaller data subsets. Under the general framework of importance sampling based marginal likelihood computation, the proposed adaptive Monte Carlo method and sequential learning approach can facilitate improved accuracy in marginal likelihood computation. The approaches are exemplified in studies of both synthetic data examples, and in a real data analysis arising in astro-statistics.
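As a minimal, hedged sketch of an adaptive Metropolized independence sampler, the Python fragment below uses a single Gaussian proposal that is periodically refit to the accumulated draws (a simplification of the mixture-of-distributions proposals mentioned above); the target density, adaptation schedule, and all settings are illustrative assumptions.

```python
import numpy as np

def adaptive_mis(log_target, x0, n_iters=20000, adapt_every=500, seed=0):
    """Adaptive Metropolized independence sampler with a Gaussian proposal
    whose mean and standard deviation are periodically refit to the draws
    collected so far. Acceptance ratio for an independence proposal q:
        min{1, [pi(y) q(x)] / [pi(x) q(y)]}."""
    rng = np.random.default_rng(seed)
    mu, sd = 0.0, 5.0                          # deliberately wide initial proposal
    def log_q(z):                              # current proposal log-density (up to a constant)
        return -0.5 * ((z - mu) / sd) ** 2 - np.log(sd)
    x, lp_x = x0, log_target(x0)
    draws = []
    for i in range(1, n_iters + 1):
        y = rng.normal(mu, sd)
        lp_y = log_target(y)
        if np.log(rng.uniform()) < (lp_y + log_q(x)) - (lp_x + log_q(y)):
            x, lp_x = y, lp_y
        draws.append(x)
        if i % adapt_every == 0:               # adaptation step: refit the proposal
            hist = np.array(draws)
            mu, sd = hist.mean(), hist.std() + 1e-3
    return np.array(draws)

if __name__ == "__main__":
    # Target: an unnormalized two-component Gaussian mixture on the real line.
    def log_target(z):
        return np.logaddexp(-0.5 * (z + 3.0) ** 2, -0.5 * (z - 3.0) ** 2)
    out = adaptive_mis(log_target, x0=0.0)
    print("sample mean ~", out.mean(), " sample std ~", out.std())
```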
Finally, chapter 7 summarizes the dissertation and discusses possible extensions of the specific modelling and computational innovations, as well as potential future work.