
Doctoral dissertations on the topic "Sequential Monte Carlo (SMC) method"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the 21 best scholarly doctoral dissertations on the topic "Sequential Monte Carlo (SMC) method".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the work's abstract online, if the relevant parameters are available in the metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile an accurate bibliography.

1

Gonzato, Luca. "Application of Sequential Monte Carlo Methods to Dynamic Asset Pricing Models". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2020. http://hdl.handle.net/10281/295144.

Full text of the source
Abstract:
In this thesis we consider the application of Sequential Monte Carlo (SMC) methods to continuous-time asset pricing models. The first chapter of the thesis gives a self-contained overview of SMC methods: starting from basic Monte Carlo techniques, we move to recent state-of-the-art SMC algorithms. In the second chapter we review existing methods for the exact simulation of Hawkes processes. From our analysis we infer that the simulation scheme of Dassios and Zhao (2013) outperforms the other algorithms, including the most popular thinning method proposed by Ogata (1981). This chapter also serves as an introduction to self-exciting jump processes, which are the subject of Chapter 3. Hence, in the third chapter we propose a new self-exciting jump-diffusion model to describe oil price dynamics. We estimate the model by applying a state-of-the-art SMC sampler to both spot and futures data. From the estimation results we find evidence of self-excitation in the oil market, which leads to an improved fit and better out-of-sample futures forecasting performance with respect to jump-diffusion models with constant intensity. Furthermore, we compute and discuss two optimal hedging strategies based on futures trading. The first strategy is based on variance minimization, while the second also takes into account the third-order moment contribution, reflecting investors' attitudes. A comparison between the two strategies in terms of hedging effectiveness is provided. Finally, in the fourth chapter we consider the estimation of continuous-time Wishart stochastic volatility models by observing portfolios of weighted options as in Orlowski (2019). In this framework the likelihood is not known in closed form, so we estimate it using SMC techniques. To this end, we marginalize the latent states and perform marginal likelihood estimation by adapting the recently proposed controlled SMC algorithm (Heng et al., 2019). Numerical experiments show that the proposed methodology gives much better results than standard filtering techniques. The great stability of our SMC method therefore opens the door to effective joint estimation of latent states and unknown parameters in a Bayesian fashion. This last step amounts to designing an SMC sampler based on a pseudo-marginal argument and is currently under preparation.
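The thinning idea contrasted with the Dassios-Zhao scheme above is simple enough to sketch. Below is a minimal, illustrative Ogata-style thinning simulator for a Hawkes process with an exponentially decaying kernel; the parameter values are arbitrary and this is not the thesis's code.

```python
import math
import random

def simulate_hawkes_thinning(mu, alpha, beta, horizon, seed=1):
    """Ogata-style thinning for a Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    random.seed(seed)
    events, t = [], 0.0
    while t < horizon:
        # Intensity just after t bounds the (decaying) intensity until the next event.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)            # candidate waiting time
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if random.random() <= lam_t / lam_bar:      # accept with probability lambda(t)/lam_bar
            events.append(t)
    return events

# hypothetical parameters: baseline 0.5, jump size 0.8, decay 1.2, horizon 100
print(len(simulate_hawkes_thinning(mu=0.5, alpha=0.8, beta=1.2, horizon=100.0)))
```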
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Ozgur, Soner. "Reduced Complexity Sequential Monte Carlo Algorithms for Blind Receivers". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10518.

Full text of the source
Abstract:
Monte Carlo algorithms can be used to estimate the state of a system given relative observations. In this dissertation, these algorithms are applied to physical layer communications system models to estimate channel state information, to obtain soft information about transmitted symbols or multiple access interference, or to obtain estimates of all of these by joint estimation. Initially, we develop and analyze a multiple access technique utilizing mutually orthogonal complementary sets (MOCS) of sequences. These codes deliberately introduce inter-chip interference, which is naturally eliminated during processing at the receiver. However, channel impairments can destroy their orthogonality properties and additional processing becomes necessary. We utilize Monte Carlo algorithms to perform joint channel and symbol estimation for systems utilizing MOCS sequences as spreading codes. We apply Rao-Blackwellization to reduce the required number of particles. However, dense signaling constellations, multiuser environments, and the interchannel interference introduced by the spreading codes all increase the dimensionality of the symbol state space significantly. A full maximum likelihood solution is computationally expensive and generally not practical. However, obtaining the optimum solution is critical, and looking at only a part of the symbol space is generally not a good solution. We have sought algorithms that would guarantee that the correct transmitted symbol is considered, while only sampling a portion of the full symbol space. The performance of the proposed method is comparable to the Maximum Likelihood (ML) algorithm. While the computational complexity of ML increases exponentially with the dimensionality of the problem, the complexity of our approach increases only quadratically. Markovian structures such as the one imposed by MOCS spreading sequences can be seen in other physical layer structures as well. We have applied this partitioning approach with some modification to blind equalization of frequency selective fading channel and to multiple-input multiple output receivers that track channel changes. Additionally, we develop a method that obtains a metric for quantifying the convergence rate of Monte Carlo algorithms. Our approach yields an eigenvalue based method that is useful in identifying sources of slow convergence and estimation inaccuracy.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
3

Creal, Drew D. "Essays in sequential Monte Carlo methods for economics and finance". Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/7444.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
4

Lang, Lixin. "Advancing Sequential Monte Carlo For Model Checking, Prior Smoothing And Applications In Engineering And Science". The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1204582289.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
5

Kuhlenschmidt, Bernd. "On the stability of sequential Monte Carlo methods for parameter estimation". Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709098.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
6

Skrivanek, Zachary. "Sequential Imputation and Linkage Analysis". The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1039121487.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
7

Chen, Wen-shiang. "Bayesian estimation by sequential Monte Carlo sampling for nonlinear dynamic systems". Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1086146309.

Full text of the source
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xiv, 117 p. : ill. (some col.). Advisors: Bhavik R. Bakshi and Prem K. Goel, Department of Chemical Engineering. Includes bibliographical references (p. 114-117).
Citation styles: APA, Harvard, Vancouver, ISO, etc.
8

Valdes, LeRoy I. "Analysis Of Sequential Barycenter Random Probability Measures via Discrete Constructions". Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3304/.

Full text of the source
Abstract:
Hill and Monticino (1998) introduced a constructive method for generating random probability measures with a prescribed mean or distribution on the mean. The method involves sequentially generating an array of barycenters that uniquely defines a probability measure. This work analyzes statistical properties of the measures generated by sequential barycenter array constructions. Specifically, this work addresses how changing the base measures of the construction affects the statistics of measures generated by the SBA construction. A relationship between statistics associated with a finite-level version of the SBA construction and the full construction is developed. Monte Carlo statistical experiments are used to simulate the effect that changing base measures has on the statistics associated with the finite-level construction.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
9

Fuglesang, Rutger. "Particle-Based Online Bayesian Learning of Static Parameters with Application to Mixture Models". Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279847.

Full text of the source
Abstract:
This thesis investigates the possibility of using Sequential Monte Carlo methods (SMC) to create an online algorithm to infer properties from a dataset, such as unknown model parameters. Statistical inference from data streams tends to be difficult, and this is particularly the case for parametric models, which will be the focus of this thesis. We develop a sequential Monte Carlo algorithm sampling sequentially from the model's posterior distributions. As a key ingredient of this approach, unknown static parameters are jittered towards the shrinking support of the posterior on the basis of an artificial Markovian dynamic allowing for correct pseudo-marginalisation of the target distributions. We then test the algorithm on a simple Gaussian model, a Gaussian Mixture Model (GMM), as well as a variable-dimension GMM. All tests and coding were done using Matlab. The outcome of the simulation is promising, but more extensive comparisons to other online algorithms for static parameter models are needed to really gauge the computational efficiency of the developed algorithm.
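A minimal sketch of the jittering idea described in the abstract, applied to a toy problem (estimating a static Gaussian mean from a data stream). The Gaussian likelihood, the multinomial resampling, and the shrinking jitter scale are assumptions for illustration, not the thesis's Matlab implementation.

```python
import numpy as np

def smc_static_mean(ys, n_particles=2000, prior_sd=5.0, obs_sd=1.0,
                    jitter_scale=0.1, seed=0):
    """Toy SMC for a static parameter theta with y_t ~ N(theta, obs_sd^2).

    After each observation: reweight, resample, then jitter the particles
    with a shrinking Gaussian kernel (an artificial Markovian dynamic)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, prior_sd, size=n_particles)          # prior particles
    for t, y in enumerate(ys, start=1):
        logw = -0.5 * ((y - theta) / obs_sd) ** 2                 # Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)      # multinomial resampling
        theta = theta[idx]
        theta += rng.normal(0.0, jitter_scale / np.sqrt(t), size=n_particles)  # shrinking jitter
    return theta

rng = np.random.default_rng(1)
data = rng.normal(2.5, 1.0, size=200)      # synthetic stream with true mean 2.5
post = smc_static_mean(data)
print(post.mean(), post.std())
```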
Citation styles: APA, Harvard, Vancouver, ISO, etc.
10

Carr, Michael John. "Estimating parameters of a stochastic cell invasion model with Fluorescent cell cycle labelling using approximate Bayesian computation". Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/226946/1/Michael_Carr_Thesis.pdf.

Full text of the source
Abstract:
Understanding the underlying mechanisms of melanoma cell behaviour is crucial to developing better drug treatment methods. In this thesis, we use advanced mathematical modelling and statistical inference techniques to obtain, for the first time, accurate estimates of the rates at which cells multiply and spread at multiple stages of the cell cycle. The mathematical model is fitted to data that uses fluorescent cell cycle labelling technology to visualise different phases of the cell cycle in real time. The accurately calibrated mathematical model enables a deeper understanding of cell behaviour and has potential for more precisely assessing treatment efficacy.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
11

Johansson, Anders. "Acoustic Sound Source Localisation and Tracking : in Indoor Environments". Doctoral thesis, Blekinge Tekniska Högskola [bth.se], School of Engineering - Dept. of Signal Processing, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00401.

Full text of the source
Abstract:
With advances in micro-electronic complexity and fabrication, sophisticated algorithms for source localisation and tracking can now be deployed in cost sensitive appliances for both consumer and commercial markets. As a result, such algorithms are becoming ubiquitous elements of contemporary communication, robotics and surveillance systems. Two of the main requirements of acoustic localisation and tracking algorithms are robustness to acoustic disturbances (to maximise localisation accuracy), and low computational complexity (to minimise power-dissipation and cost of hardware components). The research presented in this thesis covers both advances in robustness and in computational complexity for acoustic source localisation and tracking algorithms. This thesis also presents advances in modelling of sound propagation in indoor environments; a key to the development and evaluation of acoustic localisation and tracking algorithms. As an advance in the field of tracking, this thesis also presents a new method for tracking human speakers in which the problem of the discontinuous nature of human speech is addressed using a new state-space filter based algorithm which incorporates a voice activity detector. The algorithm is shown to achieve superior tracking performance compared to traditional approaches. Furthermore, the algorithm is implemented in a real-time system using a method which yields a low computational complexity. Additionally, a new method is presented for optimising the parameters for the dynamics model used in a state-space filter. The method features an evolution strategy optimisation algorithm to identify the optimum dynamics’ model parameters. Results show that the algorithm is capable of real-time online identification of optimum parameters for different types of dynamics models without access to ground-truth data. Finally, two new localisation algorithms are developed and compared to older well established methods. In this context an analytic analysis of noise and room reverberation is conducted, considering its influence on the performance of localisation algorithms. The algorithms are implemented in a real-time system and are evaluated with respect to robustness and computational complexity. Results show that the new algorithms outperform their older counterparts, both with regards to computational complexity, and robustness to reverberation and background noise. The field of acoustic modelling is advanced in a new method for predicting the energy decay in impulse responses simulated using the image source method. The new method is applied to the problem of designing synthetic rooms with a defined reverberation time, and is compared to several well established methods for reverberation time prediction. This comparison reveals that the new method is the most accurate.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
12

Allaya, Mouhamad M. "Méthodes de Monte-Carlo EM et approximations particulaires : application à la calibration d'un modèle de volatilité stochastique". Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010072/document.

Full text of the source
Abstract:
This thesis pursues a double perspective on the joint use of sequential Monte Carlo methods (SMC) and the Expectation-Maximization (EM) algorithm for hidden Markov models whose unobserved component has a Markov dependence structure of order greater than one. First, Chapters 1 and 2 give a brief description of the theoretical basis of the two statistical concepts. We then focus, in Chapter 3, on the simultaneous implementation of both concepts in the usual setting where the dependence structure is of order 1. The contribution of SMC methods in this work lies in their ability to effectively approximate bounded conditional functionals, in particular filtering and smoothing quantities, in non-linear and non-Gaussian settings. The EM algorithm is itself motivated by the presence of both observable and unobservable (or partially observed) variables in hidden Markov models, and particularly in the stochastic volatility models under study. Having presented the EM algorithm and the SMC methods, along with some of their properties, in Chapters 1 and 2 respectively, we illustrate these two statistical tools through the calibration of a stochastic volatility model. This application is done for exchange rates and for some stock indexes in Chapter 3. We conclude the chapter with a slight departure from the canonical stochastic volatility model, together with Monte Carlo simulations on the resulting model. Finally, in Chapters 4 and 5 we provide the theoretical and practical foundations for extending sequential Monte Carlo methods, including particle filtering and smoothing, to the case where the Markov structure is more pronounced. As an illustration, we give the example of a degenerate stochastic volatility model whose approximation has such a dependence property.
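As a rough illustration of the SMC building block used in such a calibration, the sketch below runs a bootstrap particle filter on a canonical stochastic volatility model and returns a log-likelihood estimate. The model parameterisation and values are assumptions for illustration, not the thesis's algorithm.

```python
import numpy as np

def sv_bootstrap_loglik(y, phi, sigma, n_particles=1000, seed=0):
    """Bootstrap particle filter estimate of log p(y_{1:T} | phi, sigma) for
    x_t = phi*x_{t-1} + sigma*eta_t,  y_t = exp(x_t/2)*eps_t,  eta, eps ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    # stationary initial distribution of the AR(1) log-volatility
    x = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2), size=n_particles)
    loglik = 0.0
    for yt in y:
        x = phi * x + sigma * rng.standard_normal(n_particles)          # propagate
        obs_var = np.exp(x)
        logw = -0.5 * (np.log(2 * np.pi * obs_var) + yt**2 / obs_var)    # N(0, e^x) density
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                                   # likelihood increment
        x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]  # resample
    return loglik

# simulate a short series from the same model with hypothetical parameters
rng = np.random.default_rng(1)
T, phi, sigma = 300, 0.95, 0.25
x = np.zeros(T)
x[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2))
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
y = np.exp(x / 2) * rng.standard_normal(T)
print(sv_bootstrap_loglik(y, phi, sigma))
```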
Citation styles: APA, Harvard, Vancouver, ISO, etc.
13

Montazeri, Shahtori Narges. "Quantifying the impact of contact tracing on ebola spreading". Thesis, Kansas State University, 2016. http://hdl.handle.net/2097/34540.

Full text of the source
Abstract:
Master of Science
Department of Electrical and Computer Engineering
Faryad Darabi Sahneh
The recent experience of the 2014 Ebola outbreak highlighted the importance of immediate response to impede Ebola transmission at its very early stage. To this aim, efficient and effective allocation of limited resources is crucial. Among the standard interventions is the practice of following up with physical contacts of individuals diagnosed with Ebola virus disease, known as contact tracing. In an effort to objectively understand the effect of possible contact tracing protocols, we explicitly develop a model of Ebola transmission incorporating contact tracing. Our modeling framework has several features suited to early-stage Ebola transmission: 1) the network model is patient-centric, because when the number of infected cases is small only the myopic networks of infected individuals matter and the rest of the possible social contacts are irrelevant; 2) the Ebola disease model is individual-based and stochastic, because at the early stages of spread random fluctuations are significant and must be captured appropriately; 3) the contact tracing model is parameterizable, to allow analysis of the impact of critical aspects of contact tracing protocols. Notably, we propose an activity-driven network approach to contact tracing, and develop a Monte Carlo method to compute the basic reproductive number of the disease spread in different scenarios. Exhaustive simulation experiments suggest that while contact tracing is important in stopping the Ebola spread, it does not need to be done too urgently. This result is due to the rather long incubation period of Ebola virus infection. However, immediate hospitalization of infected cases is crucial and requires the most attention and resource allocation. Moreover, to investigate the impact of mitigation strategies in the 2014 Ebola outbreak, we consider reported data in Guinea, one of the three West African countries that experienced the Ebola virus disease outbreak. We formulate a multivariate sequential Monte Carlo filter that utilizes mechanistic models of Ebola virus propagation to simultaneously estimate the disease progression states and the model parameters from the reported incidence data streams. This method has the advantage of performing the inference online as new data become available, and of estimating the evolution of the basic reproductive ratio R₀(t) throughout the Ebola outbreak. Our analysis identifies a peak in the basic reproductive ratio close to the time Ebola cases were reported in Europe and the USA.
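A minimal sketch of a Monte Carlo computation of the basic reproductive number of the kind mentioned above, under a deliberately simplified assumption of Poisson contacts and an exponentially distributed time to hospitalisation; all rates are hypothetical and this is not the study's calibrated model.

```python
import random

def estimate_r0(contact_rate, p_transmit, mean_time_to_hosp,
                n_runs=100_000, seed=42):
    """Monte Carlo estimate of R0 as the average number of secondary cases produced
    by one index case who makes contacts at a Poisson rate until hospitalisation."""
    random.seed(seed)
    total = 0
    for _ in range(n_runs):
        infectious_period = random.expovariate(1.0 / mean_time_to_hosp)
        # count contacts during the infectious period via exponential inter-contact times
        t, secondary = 0.0, 0
        while True:
            t += random.expovariate(contact_rate)
            if t > infectious_period:
                break
            if random.random() < p_transmit:
                secondary += 1
        total += secondary
    return total / n_runs

# e.g. 0.3 contacts/day, 20% transmission per contact, hospitalised after 5 days on average
print(estimate_r0(contact_rate=0.3, p_transmit=0.2, mean_time_to_hosp=5.0))
```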
Citation styles: APA, Harvard, Vancouver, ISO, etc.
14

Velmurugan, Rajbabu. "Implementation Strategies for Particle Filter based Target Tracking". Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14611.

Full text of the source
Abstract:
This thesis contributes new algorithms and implementations for particle filter-based target tracking. From an algorithmic perspective, modifications that improve a batch-based acoustic direction-of-arrival (DOA), multi-target, particle filter tracker are presented. The main improvements are reduced execution time and increased robustness to target maneuvers. The key feature of the batch-based tracker is an image template-matching approach that handles data association and clutter in measurements. The particle filter tracker is compared to an extended Kalman filter (EKF) and a Laplacian filter and is shown to perform better for maneuvering targets. Using an approach similar to the acoustic tracker, a radar range-only tracker is also developed. This includes developing the state update and observation models, and proving observability for a batch of range measurements. From an implementation perspective, this thesis provides new low-power and real-time implementations for particle filters. First, to achieve a very low-power implementation, two mixed-mode implementation strategies that use analog and digital components are developed. The mixed-mode implementations use analog, multiple-input translinear element (MITE) networks to realize nonlinear functions. The power dissipated in the mixed-mode implementation of a particle filter-based, bearings-only tracker is compared to a digital implementation that uses the CORDIC algorithm to realize the nonlinear functions. The mixed-mode method that uses predominantly analog components is shown to provide a factor of twenty improvement in power savings compared to a digital implementation. Next, real-time implementation strategies for the batch-based acoustic DOA tracker are developed. The characteristics of the digital implementation of the tracker are quantified using digital signal processor (DSP) and field-programmable gate array (FPGA) implementations. The FPGA implementation uses a soft-core or hard-core processor to implement the Newton search in the particle proposal stage. A MITE implementation of the nonlinear DOA update function in the tracker is also presented.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
15

Di, Caro Gianni. "Ant colony optimization and its application to adaptive routing in telecommunication networks". Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211149.

Full text of the source
Abstract:
In ant societies, and more generally in insect societies, the activities of the individuals, as well as of the society as a whole, are not regulated by any explicit form of centralized control. On the other hand, adaptive and robust behaviors transcending the behavioral repertoire of the single individual can easily be observed at the society level. These complex global behaviors are the result of self-organizing dynamics driven by local interactions and communications among a number of relatively simple individuals.

The simultaneous presence of these and other fascinating and unique characteristics have made ant societies an attractive and inspiring model for building new algorithms and new multi-agent systems. In the last decade, ant societies have been taken as a reference for an ever growing body of scientific work, mostly in the fields of robotics, operations research, and telecommunications.

Among the different works inspired by ant colonies, the Ant Colony Optimization metaheuristic (ACO) is probably the most successful and popular one. The ACO metaheuristic is a multi-agent framework for combinatorial optimization whose main components are: a set of ant-like agents, the use of memory and of stochastic decisions, and strategies of collective and distributed learning.

It finds its roots in the experimental observation of a specific foraging behavior of some ant colonies that, under appropriate conditions, are able to select the shortest path among few possible paths connecting their nest to a food site. The pheromone, a volatile chemical substance laid on the ground by the ants while walking and affecting in turn their moving decisions according to its local intensity, is the mediator of this behavior.

All the elements playing an essential role in the ant colony foraging behavior were understood, thoroughly reverse-engineered and put to work to solve problems of combinatorial optimization by Marco Dorigo and his co-workers at the beginning of the 1990's.

From that moment on there has been a flourishing of new combinatorial optimization algorithms designed after the first algorithms of Dorigo et al., and of related scientific events.

In 1999 the ACO metaheuristic was defined by Dorigo, Di Caro and Gambardella with the purpose of providing a common framework for describing and analyzing all these algorithms inspired by the same ant colony behavior and by the same common process of reverse-engineering of this behavior. Therefore, the ACO metaheuristic was defined a posteriori, as the result of a synthesis effort effectuated on the study of the characteristics of all these ant-inspired algorithms and on the abstraction of their common traits.

The ACO's synthesis was also motivated by the usually good performance shown by the algorithms (e.g. for several important combinatorial problems like the quadratic assignment, vehicle routing and job shop scheduling, ACO implementations have outperformed state-of-the-art algorithms).

The definition and study of the ACO metaheuristic is one of the two fundamental goals of the thesis. The other one, strictly related to this former one, consists in the design, implementation, and testing of ACO instances for problems of adaptive routing in telecommunication networks.

This thesis is an in-depth journey through the ACO metaheuristic, during which we have (re)defined ACO and tried to get a clear understanding of its potentialities, limits, and relationships with other frameworks and with its biological background. The thesis takes into account all the developments that have followed the original 1999's definition, and provides a formal and comprehensive systematization of the subject, as well as an up-to-date and quite comprehensive review of current applications. We have also identified in dynamic problems in telecommunication networks the most appropriate domain of application for the ACO ideas. According to this understanding, in the most applicative part of the thesis we have focused on problems of adaptive routing in networks and we have developed and tested four new algorithms.

Adopting an original point of view with respect to the way ACO was firstly defined (but maintaining full conceptual and terminological consistency), ACO is here defined and mainly discussed in the terms of sequential decision processes and Monte Carlo sampling and learning.

More precisely, ACO is characterized as a policy search strategy aimed at learning the distributed parameters (called pheromone variables in accordance with the biological metaphor) of the stochastic decision policy which is used by so-called ant agents to generate solutions. Each ant represents in practice an independent sequential decision process aimed at constructing a possibly feasible solution for the optimization problem at hand by using only information local to the decision step.

Ants are repeatedly and concurrently generated in order to sample the solution set according to the current policy. The outcomes of the generated solutions are used to partially evaluate the current policy, spot the most promising search areas, and update the policy parameters in order to possibly focus the search in those promising areas while keeping a satisfactory level of overall exploration.
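A schematic sketch of this sample-and-update loop, here for shortest paths on a toy weighted graph; the pheromone update rule, the cost-based heuristic, and all parameter values are illustrative assumptions rather than any specific algorithm from the thesis.

```python
import random

# toy weighted graph: node -> {neighbour: cost}
GRAPH = {"s": {"a": 2.0, "b": 2.5}, "a": {"c": 2.0}, "b": {"c": 1.0},
         "c": {"t": 1.0}, "t": {}}

def ant_colony_shortest_path(graph, source, target, n_iters=200, n_ants=10,
                             evaporation=0.1, seed=0):
    random.seed(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}        # pheromone on each edge
    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        solutions = []
        for _ in range(n_ants):                                 # each ant = one sequential decision process
            node, path, cost = source, [source], 0.0
            while node != target and graph[node]:
                nbrs = list(graph[node])
                weights = [tau[(node, v)] / graph[node][v] for v in nbrs]  # pheromone / cost
                node = random.choices(nbrs, weights=weights)[0]
                cost += graph[path[-1]][node]
                path.append(node)
            if node == target:
                solutions.append((path, cost))
        for edge in tau:                                        # evaporation
            tau[edge] *= (1.0 - evaporation)
        for path, cost in solutions:                            # reinforce sampled solutions
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += 1.0 / cost
            if cost < best_cost:
                best_path, best_cost = path, cost
    return best_path, best_cost

print(ant_colony_shortest_path(GRAPH, "s", "t"))
```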

This way of looking at ACO has facilitated to disclose the strict relationships between ACO and other well-known frameworks, like dynamic programming, Markov and non-Markov decision processes, and reinforcement learning. In turn, this has favored reasoning on the general properties of ACO in terms of amount of complete state information which is used by the ACO's ants to take optimized decisions and to encode in pheromone variables memory of both the decisions that belonged to the sampled solutions and their quality.

The ACO's biological context of inspiration is fully acknowledged in the thesis. We report with extensive discussions on the shortest path behaviors of ant colonies and on the identification and analysis of the few nonlinear dynamics that are at the very core of self-organized behaviors in both the ants and other societal organizations. We discuss these dynamics in the general framework of stigmergic modeling, based on asynchronous environment-mediated communication protocols, and (pheromone) variables priming coordinated responses of a number of "cheap" and concurrent agents.

The second half of the thesis is devoted to the study of the application of ACO to problems of online routing in telecommunication networks. This class of problems has been identified in the thesis as the most appropriate for the application of the multi-agent, distributed, and adaptive nature of the ACO architecture.

Four novel ACO algorithms for problems of adaptive routing in telecommunication networks are thoroughly described. The four algorithms cover a wide spectrum of possible types of network: two of them deliver best-effort traffic in wired IP networks, one is intended for quality-of-service (QoS) traffic in ATM networks, and the fourth is for best-effort traffic in mobile ad hoc networks.

The two algorithms for wired IP networks have been extensively tested by simulation studies and compared to state-of-the-art algorithms for a wide set of reference scenarios. The algorithm for mobile ad hoc networks is still under development, but quite extensive results and comparisons with a popular state-of-the-art algorithm are reported. No results are reported for the algorithm for QoS, which has not been fully tested. The observed experimental performance is excellent, especially for the case of wired IP networks: our algorithms always perform comparably or much better than the state-of-the-art competitors.

In the thesis we try to understand the rationale behind the brilliant performance obtained and the good level of popularity reached by our algorithms. More generally, we discuss the reasons for the efficacy of the ACO approach to network routing problems compared to the characteristics of more classical approaches. Moving further, we also informally define Ant Colony Routing (ACR), a multi-agent framework explicitly integrating learning components into the ACO's design in order to define a general and in a sense futuristic architecture for autonomic network control.

Most of the material of the thesis comes from a re-elaboration of material co-authored and published in a number of books, journal papers, conference proceedings, and technical reports. The detailed list of references is provided in the Introduction.


Doctorate in Applied Sciences

Citation styles: APA, Harvard, Vancouver, ISO, etc.
16

Acosta, Argueta Lesly María. "Particle filtering estimation for linear and nonlinear state-space models". Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/134356.

Full text of the source
Abstract:
The sequential estimation of the states (filtering) and the corresponding simultaneous estimation of the states and fixed parameters of a dynamic state-space model, whether linear or not, is an important problem in many fields of research, such as the area of finance. The main objective of this research is to estimate sequentially and efficiently, from a Bayesian perspective via the particle filtering methodology, the states and/or the fixed parameters of a nonstandard dynamic state-space model: one that is possibly nonlinear, non-stationary or non-Gaussian. The present thesis consists of seven chapters and is structured into two parts. Chapter 1 introduces basic concepts, the motivation, the purpose, and the outline of the thesis. Chapters 2-4, the first part of the thesis, focus on the estimation of the states. Chapter 2 provides a comprehensive review of the most classic algorithms (non-simulation based: KF, EKF, and UKF; and simulation based: SIS, SIR, ASIR, EPF, and UPF) used for filtering solely the states of a dynamic state-space model. All these filters, scattered in the literature, are not only described in detail, but also placed in a unified notation for the sake of consistency, readability and comparability. Chapters 3 and 4 confirm the efficiency of the well-established particle filtering methodology, via extensive Monte Carlo (MC) studies, when estimating only the latent states of a dynamic state-space model, whether linear or not. Complementary MC studies are also conducted to analyze some relevant issues within the adopted approach, such as the degeneracy problem, the resampling strategy, or the possible impact on estimation of the number of particles used and the time series length. Chapter 3 specifically illustrates the performance of the particle filtering methodology in a linear and Gaussian context, using the exact Kalman filter as a benchmark. The performance of the four studied particle filter variants (SIR, SIRopt, ASIR, KPF, the latter being a special case of the EPF algorithm) is assessed using two apparently simple, but important, time series processes: the so-called Local Level Model (LLM) and the AR(1) plus noise model, which are non-stationary and stationary, respectively. An exhaustive study of the effect of the signal-to-noise ratio (SNR) on the quality of the estimation is additionally performed. Complementary MC studies are conducted to assess the degree of degeneracy and the possible effect of increasing the number of particles and the time series length. Chapter 4 assesses and illustrates the performance of the particle filtering methodology in a nonlinear context. Specifically, a synthetic nonlinear, non-Gaussian and non-stationary state-space model taken from the literature is used to illustrate the performance of the four competing particle filters under study (SIR, ASIR, EPF, UPF) in contraposition to two well-known non-simulation based filters (EKF, UKF). In this chapter, the residual and stratified resampling schemes are compared and the effect of increasing the number of particles is addressed. In the second part (Chapters 5 and 6), extensive MC studies are carried out, but the main goal is the simultaneous estimation of states and fixed model parameters for chosen non-standard dynamic models. This area of research is still very active, and it is within this area that this thesis contributes the most. Chapter 5 provides a partial survey of particle filter variants used to conduct the simultaneous estimation of states and fixed parameters.
Such filters are an extension of those previously adopted for estimating solely the states. Additionally, an MC study is carried out to estimate the state (level) and the two fixed variance parameters of the non-stationary local level model; we use four particle filter variants (LW, SIRJ, SIRoptJ, KPFJ), six typical settings of the SNR and two settings for the discount factor needed in the jittering step. In this chapter, the SIRJ particle filter variant is proposed as an alternative to the well-established filter of Liu and West (LW PF). The combined use of a Kalman-based proposal distribution and a jittering step is proposed and explored, which gives rise to the particle filter variant called the Kalman Particle Filter plus Jittering (KPFJ). Chapter 6 focuses on estimating the states and three fixed parameters of the non-standard basic stochastic volatility model known as the stochastic autoregressive volatility model of order one: SARV(1). After an introduction and detailed description of the stylized features of financial time series, the estimation ability of two competing particle filter variants (SIRJ vs. LW (Liu and West)) is shown empirically using simulated data. The chapter ends with an application to real data sets from the financial area: the Spanish IBEX 35 returns index and the Europe Brent Spot prices (in dollars). The contribution in Chapters 5 and 6 is to propose new variants of particle filters, such as the KPFJ, the SIRJ, and the SIRoptJ (a special case of the SIRJ that uses an optimal proposal distribution), that have been developed along this work. The thesis also suggests that the so-called EPFJ (Extended Particle Filter with Jittering) and the UPFJ (Unscented Particle Filter with Jittering) algorithms could be reasonable choices when dealing with highly nonlinear models. In this part, relevant issues within the particle filtering methodology are also discussed, such as the potential impact on estimation of the discount factor parameter, the time series length, and the number of particles used. Throughout this work, pseudo-codes are written for all filters studied and are implemented in the R language. The reported findings are obtained as the result of extensive MC studies, considering a variety of case-scenarios described in the thesis. The intrinsic characteristics of the model at hand guided, according to suitability, the choice of filters in each specific situation. The comparison of filters is based on the RMSE, the elapsed CPU time and the degree of degeneracy. Finally, Chapter 7 includes the discussion, contributions, and future lines of research. Some complementary theoretical and practical aspects are presented in the appendix.
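A generic sketch of the bootstrap (SIR) filter applied to a local level model of the kind studied in Chapter 3, written in Python rather than the thesis's R code; the variance settings and initialisation are placeholders.

```python
import numpy as np

def sir_filter_llm(y, q, r, n_particles=1000, seed=0):
    """Bootstrap (SIR) particle filter for the local level model
    x_t = x_{t-1} + w_t, w_t ~ N(0, q)   (latent level)
    y_t = x_t + v_t,     v_t ~ N(0, r)   (observation),
    returning the filtered mean E[x_t | y_{1:t}] at each time."""
    rng = np.random.default_rng(seed)
    x = rng.normal(y[0], np.sqrt(r), size=n_particles)            # rough initialisation
    means = []
    for yt in y:
        x = x + rng.normal(0.0, np.sqrt(q), size=n_particles)     # propagate with state noise
        logw = -0.5 * (yt - x) ** 2 / r                            # Gaussian observation weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))                         # weighted filtered mean
        x = x[rng.choice(n_particles, size=n_particles, p=w)]      # multinomial resampling
    return np.array(means)

# simulated data with signal-to-noise ratio q / r = 0.5 (placeholder values)
rng = np.random.default_rng(1)
T, q, r = 200, 0.5, 1.0
level = np.cumsum(rng.normal(0, np.sqrt(q), T))
obs = level + rng.normal(0, np.sqrt(r), T)
print(np.sqrt(np.mean((sir_filter_llm(obs, q, r) - level) ** 2)))  # RMSE against the true level
```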
Citation styles: APA, Harvard, Vancouver, ISO, etc.
17

Yadav, Satyendra. "Improvement in Sensor Node Localization". Thesis, 2013. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14223.

Full text of the source
Abstract:
The most fundamental problem in wireless sensor networks is localization (finding the geographical location of the sensors). Most of the localization algorithms proposed for sensor networks are based on the Sequential Monte Carlo (SMC) method. Achieving high localization accuracy requires a high seed node density, and the method also suffers from low sampling efficiency. There are some papers which address these problems, but they are not energy efficient. Another approach, the Bounding Box method, was used to reduce the scope of the search for candidate samples and thus the time needed to find the set of valid samples. In this thesis we propose an energy-efficient approach which further reduces the scope of the search for candidate samples, so that invalid samples can be removed from the sample space and more valid samples introduced to improve the localization accuracy. We consider the direction of movement of the valid samples, so that the next position of the samples can be predicted more accurately, and hence high localization accuracy can be achieved. Further, we also add information about the speed of movement of the node so that the actual acceleration of the node can be measured. With information about both the direction and the speed of movement of the node, a sensor node can be located more accurately and faster.
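A hedged sketch of the bounding-box idea referred to above: the search region is the intersection of squares centred on the heard seed (anchor) nodes, and candidate samples drawn inside it are filtered against the radio-range constraints. The seed positions and radio range below are made up.

```python
import random

def bounding_box(seeds, radio_range):
    """Intersection of the squares of side 2*radio_range centred on each heard seed."""
    xmin = max(x - radio_range for x, y in seeds)
    xmax = min(x + radio_range for x, y in seeds)
    ymin = max(y - radio_range for x, y in seeds)
    ymax = min(y + radio_range for x, y in seeds)
    return xmin, xmax, ymin, ymax

def draw_valid_samples(seeds, radio_range, n_samples=50, seed=0):
    """Draw candidate positions inside the bounding box and keep only those
    that satisfy the distance constraint to every one-hop seed node."""
    random.seed(seed)
    xmin, xmax, ymin, ymax = bounding_box(seeds, radio_range)
    samples = []
    while len(samples) < n_samples:
        x = random.uniform(xmin, xmax)
        y = random.uniform(ymin, ymax)
        if all((x - sx) ** 2 + (y - sy) ** 2 <= radio_range ** 2 for sx, sy in seeds):
            samples.append((x, y))
    return samples

# hypothetical seed nodes heard by the unknown node, radio range 10 m
print(len(draw_valid_samples(seeds=[(3.0, 4.0), (9.0, 2.0)], radio_range=10.0)))
```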
Mr. VINOD KUMAR, Associate Professor, Delhi Technological University
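The sketch below illustrates the baseline the thesis improves on: one step of SMC localization in which candidate positions are drawn only inside the bounding box implied by the heard seed nodes and then filtered by the radio-range constraint. The deployment area, radio range and sample count are assumptions for illustration, not values from the thesis.

```python
# Minimal sketch of SMC localization with a bounding-box sampling region.
import numpy as np

rng = np.random.default_rng(0)
AREA, RADIO_RANGE, N_SAMPLES = 100.0, 25.0, 500

def bounding_box(seed_positions, radio_range):
    """Intersection of the squares of side 2*range centred on each heard seed node."""
    lo = np.max(seed_positions - radio_range, axis=0)
    hi = np.min(seed_positions + radio_range, axis=0)
    return np.maximum(lo, 0.0), np.minimum(hi, AREA)

def mcl_step(seed_positions, radio_range=RADIO_RANGE, n_samples=N_SAMPLES):
    # 1. Restrict sampling to the bounding box instead of the whole deployment area.
    lo, hi = bounding_box(seed_positions, radio_range)
    samples = rng.uniform(lo, hi, size=(n_samples, 2))
    # 2. Filtering: keep only samples consistent with hearing every seed (within radio range).
    dist = np.linalg.norm(samples[:, None, :] - seed_positions[None, :, :], axis=2)
    valid = samples[(dist <= radio_range).all(axis=1)]
    # 3. Estimate the node position as the mean of the valid samples.
    return valid.mean(axis=0) if len(valid) else (lo + hi) / 2.0

if __name__ == "__main__":
    true_pos = np.array([40.0, 60.0])
    seeds = true_pos + rng.uniform(-15, 15, size=(3, 2))   # three seeds within radio range
    print("estimated:", mcl_step(seeds), "true:", true_pos)
```

The thesis's contribution would enter in step 2, where knowledge of the node's direction and speed of movement further prunes invalid samples before averaging.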
Style APA, Harvard, Vancouver, ISO itp.
18

Chen, Yu-Lung, i 陳郁龍. "A Method for Improving Sequential Monte Carlo Method in Video Object Tracking". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/83502138929579300795.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
94
In recent years, moving object tracking in video has attracted the interest of many researchers. This technology leads to numerous applications, particularly in computer vision, such as human-machine interfaces, video transmission and compression, and surveillance. Among the many techniques for moving object tracking, the Sequential Monte Carlo method, also referred to as the particle filter, has been a commonly used option due to its simplicity. However, the particle filter suffers from a "drifting problem" when deciding the position of the target object, so the method is not well suited to applications requiring higher tracking accuracy. In addition, the particle filter usually fails to track objects with complicated geometries. This thesis proposes a multi-component tracking method to attain more robust results when tracking objects with complicated geometries.
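For context, this is the bootstrap (SIR) particle filter for 2-D position tracking that such multi-component trackers build on; it is an illustrative sketch, and the random-walk motion model and noise levels are assumptions, not the thesis's appearance model.

```python
# Minimal bootstrap particle filter for 2-D position tracking.
import numpy as np

rng = np.random.default_rng(0)
N, T = 1000, 50
MOTION_STD, OBS_STD = 1.0, 2.0

def track(observations):
    particles = rng.normal(observations[0], OBS_STD, size=(N, 2))
    estimates = []
    for z in observations:
        # Predict: random-walk motion model.
        particles = particles + rng.normal(0.0, MOTION_STD, size=(N, 2))
        # Update: weight particles by how well they explain the observed position.
        w = np.exp(-0.5 * np.sum((particles - z) ** 2, axis=1) / OBS_STD**2)
        w /= w.sum()
        estimates.append(w @ particles)             # weighted mean as the point estimate
        # Resample: multinomial resampling to avoid weight degeneracy.
        particles = particles[rng.choice(N, N, p=w)]
    return np.array(estimates)

if __name__ == "__main__":
    truth = np.cumsum(rng.normal(0, MOTION_STD, size=(T, 2)), axis=0)
    obs = truth + rng.normal(0, OBS_STD, size=(T, 2))
    est = track(obs)
    print("mean tracking error:", np.mean(np.linalg.norm(est - truth, axis=1)))
```

The drifting problem mentioned above arises because the point estimate (the weighted mean here) can slide off the true object when the likelihood is ambiguous; tracking several object components with separate particle sets is one way to make the estimate more robust.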
Style APA, Harvard, Vancouver, ISO itp.
19

Bloem-Reddy, Benjamin Michael. "Random Walk Models, Preferential Attachment, and Sequential Monte Carlo Methods for Analysis of Network Data". Thesis, 2017. https://doi.org/10.7916/D8348R5Q.

Pełny tekst źródła
Streszczenie:
Networks arise in nearly every branch of science, from biology and physics to sociology and economics. A signature of many network datasets is strong local dependence, which gives rise to phenomena such as sparsity, power law degree distributions, clustering, and structural heterogeneity. Statistical models of networks require a careful balance of flexibility to faithfully capture that dependence, and simplicity, to make analysis and inference tractable. In this dissertation, we introduce a class of models that insert one network edge at a time via a random walk, permitting the location of new edges to depend explicitly on the structure of the existing network, while remaining probabilistically and computationally tractable. Connections to graph kernels are made through the probability generating function of the random walk length distribution. The limiting degree distribution is shown to exhibit power law behavior, and the properties of the limiting degree sequence are studied analytically with martingale methods. In the second part of the dissertation, we develop a class of particle Markov chain Monte Carlo algorithms to perform inference for a large class of sequential random graph models, even when the observation consists only of a single graph. Using these methods, we derive a particle Gibbs sampler for random walk models. Fit to synthetic data, the sampler accurately recovers the model parameters; fit to real data, the model offers insight into the typical length scale of dependence in the network, and provides a new measure of vertex centrality. The arrival times of new vertices are the key to obtaining results for both theory and inference. In the third part, we undertake a careful study of the relationship between the arrival times, sparsity, and heavy tailed degree distributions in preferential attachment-type models of partitions and graphs. A number of constructive representations of the limiting degrees are obtained, and connections are made to exchangeable Gibbs partitions as well as to recent results on the limiting degrees of preferential attachment graphs.
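The following is a much-simplified sketch of the growth mechanism described above: each new edge is attached at the endpoint of a short random walk started from a uniformly chosen vertex, so new connections depend explicitly on the existing structure. The geometric walk-length distribution and the restriction to new-vertex attachment are assumptions for illustration, not the dissertation's full model.

```python
# Sketch of sequential graph growth by random-walk attachment.
import random

def grow_random_walk_graph(n_vertices, p_stop=0.3, seed=0):
    random.seed(seed)
    adj = {0: [1], 1: [0]}                     # start from a single edge
    for new_v in range(2, n_vertices):
        v = random.choice(list(adj))           # uniform starting vertex
        while random.random() > p_stop:        # geometric number of random-walk steps
            v = random.choice(adj[v])
        adj[new_v] = [v]                       # attach the new vertex at the walk's endpoint
        adj[v].append(new_v)
    return adj

if __name__ == "__main__":
    g = grow_random_walk_graph(5000)
    degrees = sorted((len(nbrs) for nbrs in g.values()), reverse=True)
    print("largest degrees (heavy tail expected):", degrees[:10])
```

Because a random walk tends to end at well-connected vertices, high-degree vertices accumulate further edges, which is the mechanism behind the power-law degree behaviour the dissertation analyses.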
Style APA, Harvard, Vancouver, ISO itp.
20

Lehmann, Eric André. "Particle filtering methods for acoustic source localisation and tracking". Phd thesis, 2004. http://hdl.handle.net/1885/149771.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
21

Ji, Chunlin. "Advances in Bayesian Modelling and Computation: Spatio-Temporal Processes, Model Assessment and Adaptive MCMC". Diss., 2009. http://hdl.handle.net/10161/1609.

Pełny tekst źródła
Streszczenie:

The modelling and analysis of complex stochastic systems with increasingly large data sets, state-spaces and parameters provides major stimulus to research in Bayesian nonparametric methods and Bayesian computation. This dissertation presents advances in both nonparametric modelling and statistical computation stimulated by challenging problems of analysis in complex spatio-temporal systems and core computational issues in model fitting and model assessment. The first part of the thesis, represented by chapters 2 to 4, concerns novel, nonparametric Bayesian mixture models for spatial point processes, with advances in modelling, computation and applications in biological contexts. Chapter 2 describes and develops models for spatial point processes in which the point outcomes are latent, where indirect observations related to the point outcomes are available, and in which the underlying spatial intensity functions are typically highly heterogenous. Spatial intensities of inhomogeneous Poisson processes are represented via flexible nonparametric Bayesian mixture models. Computational approaches are presented for this new class of spatial point process mixtures and extended to the context of unobserved point process outcomes. Two examples drawn from a central, motivating context, that of immunofluorescence histology analysis in biological studies generating high-resolution imaging data, demonstrate the modelling approach and computational methodology. Chapters 3 and 4 extend this framework to define a class of flexible Bayesian nonparametric models for inhomogeneous spatio-temporal point processes, adding dynamic models for underlying intensity patterns. Dependent Dirichlet process mixture models are introduced as core components of this new time-varying spatial model. Utilizing such nonparametric mixture models for the spatial process intensity functions allows the introduction of time variation via dynamic, state-space models for parameters characterizing the intensities. Bayesian inference and model-fitting is addressed via novel particle filtering ideas and methods. Illustrative simulation examples include studies in problems of extended target tracking and substantive data analysis in cell fluorescent microscopic imaging tracking problems.
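As a small forward illustration of the kind of object Chapters 2 to 4 model, the sketch below simulates an inhomogeneous spatial Poisson process whose intensity is a fixed two-component Gaussian mixture, using thinning; the dissertation addresses the far harder inverse problem of inferring such mixture intensities (with Dirichlet process priors) from observed or latent points. Component locations, scales and weights are assumptions.

```python
# Simulating an inhomogeneous Poisson process with a Gaussian-mixture intensity (thinning).
import numpy as np

rng = np.random.default_rng(0)

def mixture_intensity(x, y):
    comps = [((0.3, 0.3), 0.05, 400.0), ((0.7, 0.6), 0.08, 250.0)]   # (centre, scale, weight)
    lam = np.zeros_like(x)
    for (cx, cy), s, w in comps:
        lam += w * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s**2))
    return lam

def simulate_ipp(lam_max=700.0):
    # Thinning: simulate a homogeneous process at rate lam_max on the unit square,
    # then keep each point with probability lambda(x, y) / lam_max.
    n = rng.poisson(lam_max)
    pts = rng.uniform(0.0, 1.0, size=(n, 2))
    keep = rng.uniform(0.0, 1.0, n) < mixture_intensity(pts[:, 0], pts[:, 1]) / lam_max
    return pts[keep]

if __name__ == "__main__":
    print("number of simulated points:", len(simulate_ipp()))
```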

The second part of the thesis, consisting of chapters 5 and chapter 6, concerns advances in computational methods for some core and generic Bayesian inferential problems. Chapter 5 develops a novel approach to estimation of upper and lower bounds for marginal likelihoods in Bayesian modelling using refinements of existing variational methods. Traditional variational approaches only provide lower bound estimation; this new lower/upper bound analysis is able to provide accurate and tight bounds in many problems, so facilitates more reliable computation for Bayesian model comparison while also providing a way to assess adequacy of variational densities as approximations to exact, intractable posteriors. The advances also include demonstration of the significant improvements that may be achieved in marginal likelihood estimation by marginalizing some parameters in the model. A distinct contribution to Bayesian computation is covered in Chapter 6. This concerns a generic framework for designing adaptive MCMC algorithms, emphasizing the adaptive Metropolized independence sampler and an effective adaptation strategy using a family of mixture distribution proposals. This work is coupled with development of a novel adaptive approach to computation in nonparametric modelling with large data sets; here a sequential learning approach is defined that iteratively utilizes smaller data subsets. Under the general framework of importance sampling based marginal likelihood computation, the proposed adaptive Monte Carlo method and sequential learning approach can facilitate improved accuracy in marginal likelihood computation. The approaches are exemplified in studies of both synthetic data examples, and in a real data analysis arising in astro-statistics.
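To fix ideas on Chapter 6, here is a minimal sketch of a Metropolized independence sampler whose Gaussian proposal is refitted to the chain history at regular intervals; this is a deliberately simpler adaptation rule than the mixture-proposal strategy developed in the chapter, and the target density and adaptation schedule are assumptions.

```python
# Minimal adaptive Metropolized independence sampler (single adapting Gaussian proposal).
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Example target: a banana-shaped 2-D density.
    return -0.5 * (x[0] ** 2 / 4.0 + (x[1] - 0.5 * x[0] ** 2) ** 2)

def adaptive_mis(n_iter=20000, adapt_every=500):
    mean, cov = np.zeros(2), 4.0 * np.eye(2)
    x = mean.copy()
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = rng.multivariate_normal(mean, cov)
        inv = np.linalg.inv(cov)
        log_q = lambda z: -0.5 * (z - mean) @ inv @ (z - mean)
        # Independence-sampler acceptance: target ratio times reversed proposal ratio.
        log_alpha = (log_target(prop) - log_target(x)) + (log_q(x) - log_q(prop))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        chain[i] = x
        if (i + 1) % adapt_every == 0:                 # refit the proposal to the history
            mean = chain[: i + 1].mean(axis=0)
            cov = np.cov(chain[: i + 1].T) + 1e-6 * np.eye(2)
    return chain

if __name__ == "__main__":
    samples = adaptive_mis()
    print("posterior mean estimate:", samples[-10000:].mean(axis=0))
```

Replacing the single Gaussian with a mixture fitted to the history is the natural next step, and importance-sampling identities with the same adapted proposal give the marginal-likelihood estimates discussed above.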

Finally, chapter 7 summarizes the dissertation and discusses possible extensions of the specific modelling and computational innovations, as well as potential future work.


Dissertation
Style APA, Harvard, Vancouver, ISO itp.