Dissertations / Theses on the topic 'Bayesian estimation'

To see the other types of publications on this topic, follow the link: Bayesian estimation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Bayesian estimation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Rademeyer, Estian. "Bayesian kernel density estimation." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/64692.

Full text
Abstract:
This dissertation investigates the performance of two-class classification on credit scoring data sets with low default ratios. The standard two-class parametric Gaussian and naive Bayes (NB) classifiers, as well as the non-parametric Parzen classifiers, are extended, using Bayes' rule, to include either a class imbalance or a Bernoulli prior. This is done with the aim of addressing the low default probability problem. Furthermore, the performance of Parzen classification with Silverman and Minimum Leave-one-out Entropy (MLE) Gaussian kernel bandwidth estimation is also investigated. It is shown that the non-parametric Parzen classifiers yield superior classification power. However, it would be desirable for these non-parametric classifiers to possess a predictive power such as that exhibited by the odds ratio found in logistic regression (LR). The dissertation therefore dedicates a section to, amongst other things, studying the paper entitled "Model-Free Objective Bayesian Prediction" (Bernardo 1999). Since this approach to Bayesian kernel density estimation is only developed for the univariate and the uncorrelated multivariate case, the section develops a theoretical multivariate approach to Bayesian kernel density estimation. This approach is theoretically capable of handling both correlated as well as uncorrelated features in data. This is done through the assumption of a multivariate Gaussian kernel function and the use of an inverse Wishart prior.
Dissertation (MSc)--University of Pretoria, 2017.
The financial assistance of the National Research Foundation (NRF) towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at, are those of the authors and are not necessarily to be attributed to the NRF.
Statistics
MSc
Unrestricted
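The Parzen classifier with Silverman bandwidth discussed in this abstract can be illustrated in a few lines. The sketch below is not from the dissertation; it assumes univariate features and a Gaussian kernel, and the `prior0` parameter (an illustrative name) marks the place where a class-imbalance or Bernoulli prior, as studied in the dissertation, would enter.

```python
import numpy as np

def silverman_bandwidth(x):
    # Silverman's rule of thumb for a univariate Gaussian kernel:
    # h = 0.9 * min(std, IQR/1.34) * n^(-1/5)
    n = len(x)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    sigma = min(np.std(x, ddof=1), iqr / 1.34)
    return 0.9 * sigma * n ** (-1 / 5)

def parzen_density(x_new, x_train, h):
    # Average of Gaussian kernels centred on each training point.
    z = (x_new[:, None] - x_train[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(x_train) * h * np.sqrt(2 * np.pi))

def parzen_classify(x_new, x0, x1, prior0=0.5):
    # Bayes' rule with class-conditional Parzen densities and a class prior.
    p0 = parzen_density(x_new, x0, silverman_bandwidth(x0)) * prior0
    p1 = parzen_density(x_new, x1, silverman_bandwidth(x1)) * (1 - prior0)
    return (p1 > p0).astype(int)
```

In a low-default-ratio setting, lowering `prior0` for the majority class shifts the decision boundary, which is one crude way a class-imbalance prior acts.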
APA, Harvard, Vancouver, ISO, and other styles
2

Weiss, Yair. "Bayesian motion estimation and segmentation." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9354.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1998.
Includes bibliographical references (leaves 195-204).
Estimating motion in scenes containing multiple moving objects remains a difficult problem in computer vision, yet is solved effortlessly by humans. In this thesis we present a computational investigation of this astonishing performance in human vision. The method we use throughout is to formulate a small number of assumptions and see the extent to which the optimal interpretation given these assumptions corresponds to the human percept. For scenes containing a single motion we show that a wide range of previously published results are predicted by a Bayesian model that finds the most probable velocity field assuming that (1) images may be noisy and (2) velocity fields are likely to be slow and smooth. The predictions agree qualitatively, and are often in remarkable agreement quantitatively. For scenes containing multiple motions we introduce the notion of "smoothness in layers". The scene is assumed to be composed of a small number of surfaces or layers, and the motion of each layer is assumed to be slow and smooth. We again formalize these assumptions in a Bayesian framework and use the statistical technique of mixture estimation to find the most probable interpretation; a surprisingly wide range of previously published results are predicted with these simple assumptions. We discuss the shortcomings of these assumptions and show how additional assumptions can be incorporated into the same framework. Taken together, the first two parts of the thesis suggest that a seemingly complex set of illusions in human motion perception may arise from a single computational strategy that is optimal under reasonable assumptions.
(cont.) The third part of the thesis presents a computer vision algorithm that is based on the same assumptions. We compare the approach to recent developments in motion segmentation and illustrate its performance on real and synthetic image sequences.
by Yair Weiss.
Ph.D.
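In the simplest Gaussian case, the "slow and smooth" prior for a single motion reduces to a regularized least-squares problem with a closed-form MAP solution. The toy 1-D sketch below is not the thesis's model; all parameter names and values are illustrative.

```python
import numpy as np

def map_velocity(y, sigma2=0.1, lam_slow=0.1, lam_smooth=10.0):
    # MAP estimate of a 1-D velocity field under a "slow and smooth" Gaussian
    # prior: minimise ||v - y||^2 / sigma2 + lam_slow*||v||^2 + lam_smooth*||Dv||^2,
    # where D is the first-difference operator. The normal equations give
    # (I/sigma2 + lam_slow*I + lam_smooth*D'D) v = y/sigma2.
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # first-difference operator
    A = np.eye(n) / sigma2 + lam_slow * np.eye(n) + lam_smooth * D.T @ D
    return np.linalg.solve(A, y / sigma2)
```

The slowness term shrinks the field toward zero, and the smoothness term pulls neighbouring velocities together, which is the qualitative behaviour the abstract appeals to.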
3

Bouda, Milan. "Bayesian Estimation of DSGE Models." Doctoral thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-200007.

Full text
Abstract:
This thesis is dedicated to the Bayesian estimation of DSGE models. Firstly, the history of DSGE modeling is outlined, as well as the development of this macroeconometric field in the Czech Republic and in the rest of the world. Secondly, a comprehensive DSGE framework is described in detail, so that one is able to specify or estimate an arbitrary DSGE model according to this framework. The thesis contains two empirical studies. The first study describes the derivation of a New Keynesian DSGE model and its estimation using Bayesian techniques. This model is estimated with three different Taylor rules, and the best-performing Taylor rule is identified using the technique called Bayesian comparison. The second study deals with the development of a Small Open Economy Model with a housing sector. This model is based on a previous study which specifies it as a closed economy model. I extended this model with open-economy features and a government sector. The Czech Republic is generally considered a small open economy, and these extensions make the model more applicable to this economy. The model contains two types of households. The first type of consumers is able to access the capital markets and can smooth consumption across time by buying or selling financial assets; these households follow the permanent income hypothesis (PIH). The other type of household uses rule-of-thumb (ROT) consumption, spending all their income on consumption. Other agents in this economy are specified in the standard way. The outcomes of this study are mainly focused on the behavior of house prices; that is, all the main outputs, such as Bayesian impulse response functions, Bayesian prediction, and shock decomposition, focus mainly on this variable. At the end of the study a macro-prudential experiment is performed, answering the following question: is a higher or lower Loan-to-Value (LTV) ratio better for the Czech Republic?
This experiment is very conclusive and shows that the level of LTV does not affect GDP. On the other hand, house prices are very sensitive to the LTV ratio. The recommendation for the Czech National Bank can be summarized as follows: in order to keep house prices less volatile, implement a lower rather than a higher LTV ratio.
4

Pramanik, Santanu. "The Bayesian and approximate Bayesian methods in small area estimation." College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/8856.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2008.
Thesis research directed by: Joint Program in Survey Methodology. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
5

Campolieti, Michele. "Bayesian estimation of discrete duration models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0001/NQ27884.pdf.

Full text
6

Hissmann, Michael. "Bayesian estimation for white light interferometry." Berlin Pro Business, 2005. http://shop.pro-business.com/product_info.php?products_id=357.

Full text
7

Makarava, Natallia. "Bayesian estimation of self-similarity exponent." PhD thesis, Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2013/6409/.

Full text
Abstract:
Estimation of the self-similarity exponent has attracted growing interest in recent decades and has become a research subject in various fields and disciplines. Real-world data exhibiting self-similar behavior and/or parametrized by a self-similarity exponent (in particular the Hurst exponent) have been collected in fields ranging from finance and the human sciences to hydrologic and traffic networks. Such a rich class of possible applications obliges researchers to investigate qualitatively new methods for estimation of the self-similarity exponent as well as identification of long-range dependence (or long memory). In this thesis I present a Bayesian estimation of the Hurst exponent. In contrast to previous methods, the Bayesian approach makes it possible to calculate a point estimator and confidence intervals at the same time, bringing significant advantages in data analysis, as discussed in this thesis. Moreover, it is also applicable to short and unevenly sampled data, thus broadening the range of systems where estimation of the Hurst exponent is possible. Taking into account that one of the substantial classes of great interest in modeling is the class of Gaussian self-similar processes, this thesis considers realizations of fractional Brownian motion and fractional Gaussian noise. Additionally, applications to real-world data, such as water levels of the Nile River and fixational eye movements, are also discussed.
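The claimed advantage, a point estimate and a credible interval from the same computation, can be illustrated with a grid posterior over H for fractional Gaussian noise. This is a schematic sketch under a flat prior and a unit-variance assumption, not the thesis's estimator.

```python
import numpy as np

def fgn_covariance(n, H):
    # Autocovariance matrix of unit-variance fractional Gaussian noise:
    # gamma(k) = 0.5 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H)).
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    i, j = np.indices((n, n))
    return gamma[np.abs(i - j)]

def hurst_posterior(x, H_grid):
    # Normalised posterior over a grid of H values under a flat prior,
    # using the Gaussian log-likelihood of the fGn model.
    logp = []
    for H in H_grid:
        C = fgn_covariance(len(x), H)
        _, logdet = np.linalg.slogdet(C)
        logp.append(-0.5 * (logdet + x @ np.linalg.solve(C, x)))
    logp = np.array(logp)
    post = np.exp(logp - logp.max())
    return post / post.sum()
```

The posterior mode gives the point estimate, and quantiles of the same vector give a credible interval, both from one object, which is the advantage the abstract emphasises.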
8

Graham, Matthew Corwin 1986. "Robust Bayesian state estimation and mapping." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98678.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 135-146).
Virtually all robotic and autonomous systems rely on navigation and mapping algorithms (e.g. the Kalman filter or simultaneous localization and mapping (SLAM)) to determine their location in the world. Unfortunately, these algorithms are not robust to outliers and even a single faulty measurement can cause a catastrophic failure of the navigation system. This thesis proposes several novel robust navigation and SLAM algorithms that produce accurate results when outliers and faulty measurements occur. The new algorithms address the robustness problem by augmenting the standard models used by filtering and SLAM algorithms with additional latent variables that can be used to infer when outliers have occurred. Solving the augmented problems leads to algorithms that are naturally robust to outliers and are nearly as efficient as their non-robust counterparts. The first major contribution of this thesis is a novel robust filtering algorithm that can compensate for both measurement outliers and state prediction errors using a set of sparse latent variables that can be inferred using an efficient convex optimization. Next the thesis proposes a batch robust SLAM algorithm that uses the Expectation-Maximization algorithm to infer both the navigation solution and the measurement information matrices. Inferring the information matrices allows the algorithm to reduce the impact of outliers on the SLAM solution while the Expectation-Maximization procedure produces computationally efficient calculations of the information matrix estimates. While several SLAM algorithms have been proposed that are robust to loop closure errors, to date no SLAM algorithms have been developed that are robust to landmark errors. The final contribution of this thesis is the first SLAM algorithm that is robust to both loop closure and landmark errors (incremental SLAM with consistency checking (ISCC)).
ISCC adds integer variables to the SLAM optimization that indicate whether each measurement should be included in the SLAM solution. ISCC then uses an incremental greedy strategy to efficiently determine which measurements should be used to compute the SLAM solution. Evaluation on standard benchmark datasets as well as visual SLAM experiments demonstrate that ISCC is robust to a large number of loop closure and landmark outliers and that it can provide significantly more accurate solutions than state-of-the-art robust SLAM algorithms when landmark errors occur.
by Matthew C. Graham.
Ph. D.
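The idea of a latent variable that switches on when a measurement looks like an outlier can be caricatured in a scalar filter. The sketch below is a crude stand-in for the thesis's convex-optimization formulation, not a reproduction of it: a random-walk Kalman filter whose measurement variance is inflated whenever the normalized innovation exceeds a gate (all constants are illustrative).

```python
import numpy as np

def robust_kalman_1d(zs, q=0.01, r=0.1, gate=3.0, inflate=1e3):
    # Scalar random-walk Kalman filter with a latent outlier indicator:
    # if the squared normalised innovation exceeds gate^2, the measurement
    # variance is inflated so the outlier barely moves the state estimate.
    x, p = zs[0], 1.0
    out = [x]
    for z in zs[1:]:
        p += q                              # predict step
        s = p + r                           # innovation variance
        if (z - x) ** 2 / s > gate ** 2:    # latent outlier variable "on"
            s = p + r * inflate
        k = p / s                           # Kalman gain
        x += k * (z - x)                    # update step
        p *= (1 - k)
        out.append(x)
    return np.array(out)
```

A single gross outlier then barely perturbs the state, whereas a standard filter would jump toward it.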
9

Vega-Brown, Will (William Robert). "Predictive parameter estimation for Bayesian filtering." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/81715.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 113-117).
In this thesis, I develop CELLO, an algorithm for predicting the covariances of any Gaussian model used to account for uncertainty in a complex system. The primary motivation for this work is state estimation; often, complex raw sensor measurements are processed into low dimensional observations of a vehicle state. I argue that the covariance of these observations can be well-modelled as a function of the raw sensor measurement, and provide a method to learn this function from data. This method is computationally cheap, asymptotically correct, easy to extend to new sensors, and noninvasive, in the sense that it augments, rather than disrupts, existing filtering algorithms. I additionally present two important variants; first, I extend CELLO to learn even when ground truth vehicle states are unavailable; and second, I present an equivalent Bayesian algorithm. I then use CELLO to learn covariance models for several systems, including a laser scan-matcher, an optical flow system, and a visual odometry system. I show that filtering using covariances predicted by CELLO can quantitatively improve estimator accuracy and consistency, both relative to a fixed covariance model and relative to carefully tuned domain-specific covariance models.
by William Vega-Brown.
S.M.
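The core claim, that an observation's covariance can be modelled as a function of the raw sensor measurement and learned from data, can be sketched with a nearest-neighbour estimator in feature space. This is a simplified illustration in the spirit of CELLO, not the algorithm itself; the metric learning and weighting of the actual method are omitted.

```python
import numpy as np

def predict_covariance(feature, train_features, train_errors, k=25):
    # Predict the covariance of a new observation as the sample covariance
    # of the recorded errors whose raw-sensor features are the k nearest
    # neighbours of the query feature.
    d = np.linalg.norm(train_features - feature, axis=1)
    nearest = np.argsort(d)[:k]
    e = train_errors[nearest]
    return (e.T @ e) / len(e)    # errors assumed zero-mean
```

Because the predictor only reads features and stored errors, it can sit alongside an existing filter without disrupting it, which is the "noninvasive" property the abstract describes.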
10

Xing, Guan. "LASSOING MIXTURES AND BAYESIAN ROBUST ESTIMATION." Case Western Reserve University School of Graduate Studies / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=case1164135815.

Full text
11

Kan, Chengzhang <1997>. "Term structure models and Bayesian estimation." Master's Degree Thesis, Università Ca' Foscari Venezia, 2022. http://hdl.handle.net/10579/22033.

Full text
Abstract:
This thesis covers: 1) foundational results on the term structure (interest rates, bonds, swaps, financial products, equity, ...), together with a review of the literature on term structure models and on Bayesian modelling of the term structure; 2) recent advances in the term structure, especially affine processes and the term structure of equity; 3) Bayesian estimation of an affine term structure model; 4) the Bayesian paradigm (pros and cons), dealing with the choice of a statistical paradigm and decision theory, and the uncertainty problem relating to behaviour and economic facts.
12

Ferroni, Filippo. "Essay on Bayesian Estimation of DSGE Models." Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7397.

Full text
Abstract:
This thesis examines three different policy experiments using Bayesian estimates of DSGE models. First, we show that countercyclical fiscal policies are important to smooth fluctuations and that this is true regardless of how we specify the fiscal rule and several details of the model. Second, we show that the sources of output volatility obtained from a cyclical DSGE model crucially depend on whether estimation is done sequentially or jointly. In fact, while with a two-step procedure, where the trend is first removed, nominal shocks drive output volatility, investment shocks dominate when structural and trend parameters are estimated jointly. Finally, we examine the role of money for business cycle fluctuations with a single and a multiple filtering approach, where information provided by different filters is jointly used to estimate DSGE parameters. In the former case, money has a marginal role for output and inflation fluctuations, while in the latter case it is important in transmitting cyclical fluctuations.
13

Jain, Achin. "Software defect content estimation: A Bayesian approach." Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/26932.

Full text
Abstract:
Software inspection is a method to detect errors in software artefacts early in the development cycle. At the end of the inspection process the inspectors need to decide whether the inspected artefact is of sufficient quality or not. Several methods have been proposed to assist in making this decision, such as capture-recapture methods and Bayesian approaches. In this study these methods are analyzed and compared, and a new Bayesian approach for software inspection is proposed. All of the estimation models rely on an underlying assumption that the inspectors are independent. However, this assumption of independence is not necessarily true in practice, as most inspection teams interact with each other and share their findings. We therefore studied a new Bayesian model for defect estimation in which the inspectors share their findings, and compared it with the Bayesian model (Gupta et al. 2003) in which inspectors examine the artefact independently. The simulations were carried out under realistic software conditions with a small number of difficult defects and a few inspectors. The models were evaluated on the basis of decision accuracy and median relative error, and our results suggest that the dependent-inspector assumption improves the decision accuracy (DA) over the previous Bayesian model and CR models.
14

Armstrong, Helen. "Bayesian estimation of decomposable Gaussian graphical models." Awarded by: University of New South Wales, School of Mathematics, 2005. http://handle.unsw.edu.au/1959.4/24295.

Full text
Abstract:
This thesis explains to statisticians what graphical models are and how to use them for statistical inference; in particular, how to use decomposable graphical models for efficient inference in covariance selection and multivariate regression problems. The first aim of the thesis is to show that decomposable graphical models are worth using within a Bayesian framework. The second aim is to make the techniques of graphical models fully accessible to statisticians. To achieve these aims the thesis makes a number of statistical contributions. First, it proposes a new prior for decomposable graphs and a simulation methodology for estimating this prior. Second, it proposes a number of Markov chain Monte Carlo sampling schemes based on graphical techniques. The thesis also presents some new graphical results, and some existing results are reproved to make them more readily understood. Appendix 8.1 contains all the programs written to carry out the inference discussed in the thesis, together with both a summary of the theory on which they are based and a line by line description of how each routine works.
15

Werthmüller, Dieter. "Bayesian estimation of resistivities from seismic velocities." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/8932.

Full text
Abstract:
I address the problem of finding a background model for the estimation of resistivities in the earth from controlled-source electromagnetic (CSEM) data by using seismic data and well logs as constraints. Estimation of resistivities is normally done by trial-and-error, in a process called “inversion”, by finding a model of the earth whose responses match the data to within an acceptable error; what comes out of the inversion is what is put into the model by the geophysicist: it does not come out of the data directly. The premise underlying this thesis is that an earth model can be found that satisfies not only the CSEM data but also the seismic data and any well logs. I present a methodology to determine background resistivities from seismic velocities using rock physics, structural constraints, and depth trends. The physical parameters of the seismic wave equation are different from those in the electromagnetic diffusion equation, so there is no direct link between the governing equations. I therefore use a Bayesian framework to incorporate not only the errors in the data and our limited knowledge of the rock parameters, but also the uncertainty of our chosen and calibrated velocity-to-resistivity transform. To test the methodology I use a well log from the North Sea Harding South oil and gas field to calibrate the transform, and apply it to seismic velocities of the nearby Harding Central oil and gas field. I also use short-offset CSEM inversions to estimate the electric anisotropy and to improve the shallow part of the resistivity model, where there is no well control. Three-dimensional modelling of this resistivity model predicts the acquired CSEM data within the estimated uncertainty. This methodology makes it possible to estimate background resistivities from seismic velocities, well logs, and other available geophysical and geological data. 
Subsequent CSEM surveys can then focus on finding resistive anomalies relative to this background model; these are, potentially, hydrocarbon-bearing formations.
16

Mai, The Tien. "PAC-Bayesian estimation of low-rank matrices." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLG001.

Full text
Abstract:
The first two parts of the thesis study pseudo-Bayesian estimation for the problems of matrix completion and quantum tomography. A novel low-rank-inducing prior distribution is proposed for each problem. The statistical performance is examined: in each case we provide the rate of convergence of the pseudo-Bayesian estimator. Our analysis relies on PAC-Bayesian oracle inequalities. We also propose an MCMC algorithm to compute our estimator. The numerical behavior is tested on simulated and real data sets. The last part of the thesis studies the lifelong learning problem, a scenario of transfer learning where information is transferred from one learning task to another. We propose an online formalization of the lifelong learning problem. Then, a meta-algorithm is proposed for lifelong learning. It relies on the idea of exponentially weighted aggregation. We provide a regret bound for this strategy. One of the nice points of our analysis is that it makes no assumption on the learning algorithm used within each task. Some applications are studied in detail: a finite subset of relevant predictors, the single-index model, and dictionary learning.
17

Wang, Ya Li. "Interactions between gaussian processes and bayesian estimation." Doctoral thesis, Université Laval, 2014. http://hdl.handle.net/20.500.11794/25377.

Full text
Abstract:
Model learning and state estimation are crucial for interpreting the underlying phenomena in many real-world applications. However, it is often challenging to learn the system model and capture the latent states accurately and efficiently, because knowledge of the world is highly uncertain. Over the past years, Bayesian modeling and estimation approaches have been investigated extensively, since they reduce this uncertainty elegantly in a flexible probabilistic manner. In practice, however, several drawbacks in both Bayesian modeling and estimation approaches limit the power of Bayesian interpretation. On one hand, estimation performance is often limited when the system model lacks flexibility and/or is partially unknown. On the other hand, modeling performance is often restricted when a Bayesian estimator is not efficient and/or accurate. Motivated by these facts, we propose Interactions Between Gaussian Processes and Bayesian Estimation, where we investigate novel connections between a Bayesian model (Gaussian processes) and Bayesian estimators (the Kalman filter and Monte Carlo methods) in different directions to address a number of potential difficulties in modeling and estimation tasks. Concretely, we first turn our attention to Gaussian Processes for Bayesian Estimation, where a Gaussian process (GP) is used as an expressive nonparametric prior on system models to improve the accuracy and efficiency of Bayesian estimation. Then, we work on Bayesian Estimation for Gaussian Processes, where a number of Bayesian estimation approaches, especially the Kalman filter and particle filters, are used to speed up GP inference and to capture distinct input-dependent data properties.
Finally, we investigate Dynamical Interaction Between Gaussian Processes and Bayesian Estimation, where GP modeling and Bayesian estimation work in a dynamically interactive manner, so that the GP learner and the Bayesian estimator complement each other to improve the performance of both modeling and estimation. Through a number of mathematical analyses and experimental demonstrations, we show the effectiveness of our approaches, which contribute to both GP modeling and Bayesian estimation.
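The GP regression model at the heart of this line of work admits a closed-form posterior. The following is a minimal sketch only, assuming an RBF kernel and illustrative hyperparameters of our own choosing (nothing here is taken from the thesis itself):

```python
import numpy as np

def rbf_kernel(xa, xb, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = xa[:, None] - xb[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.1):
    """Exact GP regression posterior mean and covariance at x_test."""
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)           # K^{-1} y
    mean = Ks.T @ alpha                           # posterior mean
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)     # posterior covariance
    return mean, cov

# Toy data (illustrative only): three noiseless observations of sin(x).
x = np.array([-1.0, 0.0, 1.0])
y = np.sin(x)
mu, cov = gp_posterior(x, y, np.array([0.0]), noise_var=1e-6)
# With negligible noise the posterior mean interpolates the data.
```

With near-zero observation noise the posterior mean passes through the training targets and the posterior variance collapses there, which is the behavior the last two lines exercise.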
APA, Harvard, Vancouver, ISO, and other styles
18

Kim, Jae-yoon. "Essays on DSGE Models and Bayesian Estimation." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/83515.

Full text
Abstract:
This thesis explores the theory and practice of sovereignty. I begin with a conceptual analysis of sovereignty, examining its theological roots in contrast with its later influence in contestations over political authority. Theological debates surrounding God’s sovereignty dealt not with the question of legitimacy, which would become important for political sovereignty, but instead with the limits of his ability. Read as an ontological capacity, sovereignty is coterminous with an existent’s activity in the world. As lived, this capacity is regularly limited by the ways in which space is produced via its representations, its symbols, and its practices. All collective appropriations of space have a nomos that characterizes their practice. Foucault’s account of “biopolitics” provides an account of how contemporary materiality is distributed, an account that can be supplemented by sociological typologies of how city space is typically produced. The collective biopolitical distribution of space expands the range of practices that representationally legibilize activity in the world, thereby expanding the conceptual limits of existents and what it means for them to act up to the borders of their capacity, i.e., to practice sovereignty. The desire for total authorial capacity expresses itself in relations of domination and subordination that never erase the fundamental precarity of subjects, even as these expressions seek to disguise it. I conclude with a close reading of narratives recounting the lives of residents in Chicago’s Englewood, reading their activity as practices of sovereignty which manifest variously as they master and produce space.
Ph. D.
19

Cao, Di. "BAYESIAN ADAPTIVE ESTIMATION OF HIGH DIMENSIONAL VECTORS." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1408874038.

Full text
20

White, Staci A. "Quantifying Model Error in Bayesian Parameter Estimation." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1433771825.

Full text
21

Callahan, Margaret D. "Bayesian Parameter Estimation and Inference Across Scales." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1459523006.

Full text
22

Malsiner-Walli, Gertraud, Sylvia Frühwirth-Schnatter, and Bettina Grün. "Identifying mixtures of mixtures using Bayesian estimation." Taylor & Francis, 2017. http://dx.doi.org/10.1080/10618600.2016.1200472.

Full text
Abstract:
The use of a finite mixture of normal distributions in model-based clustering makes it possible to capture non-Gaussian data clusters. However, identifying the clusters from the normal components is challenging and is in general achieved either by imposing constraints on the model or by using post-processing procedures. Within the Bayesian framework we propose a different approach based on sparse finite mixtures to achieve identifiability. We specify a hierarchical prior, where the hyperparameters are carefully selected such that they are reflective of the cluster structure aimed at. In addition, this prior allows the model to be estimated using standard MCMC sampling methods. In combination with a post-processing approach which resolves the label switching issue and results in an identified model, our approach allows us to simultaneously (1) determine the number of clusters, (2) flexibly approximate the cluster distributions in a semi-parametric way using finite mixtures of normals, and (3) identify cluster-specific parameters and classify observations. The proposed approach is illustrated in two simulation studies and on benchmark data sets.
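The sparseness idea behind such priors can be illustrated with a toy prior simulation. This is our own sketch (the hyperparameter values are illustrative assumptions, not the carefully selected hierarchical prior of the paper): with a small symmetric Dirichlet hyperparameter on the mixture weights, most components of an overfitted mixture stay empty a priori.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_nonempty(K, e0, n=100, draws=500):
    """Average number of mixture components that receive at least one of
    n observations when the weights are drawn from Dirichlet(e0, ..., e0)."""
    counts = []
    for _ in range(draws):
        w = rng.dirichlet(np.full(K, e0))
        z = rng.choice(K, size=n, p=w)       # component allocations
        counts.append(len(np.unique(z)))
    return float(np.mean(counts))

sparse = expected_nonempty(K=10, e0=0.05)    # sparse prior on weights
dense = expected_nonempty(K=10, e0=4.0)      # dense prior on weights
# A small Dirichlet hyperparameter concentrates mass on few components,
# so most of the 10 components stay empty a priori.
```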
23

Chan, Kwokleung. "Bayesian learning in classification and density estimation /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC IP addresses, 2002. http://wwwlib.umi.com/cr/ucsd/fullcit?p3061619.

Full text
24

Oleson, Jacob J. "Bayesian spatial models for small area estimation /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3052203.

Full text
25

Masoero, Lorenzo. "Genomic variety estimation with Bayesian nonparametric hierarchies." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121737.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 75-83).
The recent availability of large genomic studies, with tens of thousands of observations, opens up the intriguing possibility of investigating and understanding the effect of rare genetic variants in biological human evolution as well as their impact on the development of rare diseases. To do so, it is imperative to develop a statistical framework to assess what fraction of the overall variation present in the human genome is not yet captured by available datasets. In this thesis we introduce a novel and rigorous methodology to estimate how many new variants are yet to be observed in genomic projects, using a nonparametric Bayesian hierarchical approach which makes it possible to perform prediction tasks that handle multiple subpopulations jointly. Moreover, our method performs well on extremely small as well as very large datasets, a desirable property given the variability in size of available datasets. As a byproduct of the Bayesian formulation, our estimation procedure also naturally provides uncertainty quantification for the estimates produced.
by Lorenzo Masoero.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
26

Keim, Michelle. "Bayesian information retrieval /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8937.

Full text
27

KAWAGUCHI, Nobuo, and Seigo ITO. "Bayesian Based Location Estimation System Using Wireless LAN." IEEE, 2005. http://hdl.handle.net/2237/15455.

Full text
28

Kanemura, Atsunori. "Inversive and synthetical Bayesian methods for image estimation." 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/126471.

Full text
29

Konrad, Janusz. "Bayesian estimation of motion fields from image sequences." Thesis, McGill University, 1989. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=74225.

Full text
Abstract:
This thesis addresses the ill-posed problem of estimating two-dimensional motion in time-varying images. The approach proposed here uses the theory of stochastic processes at both the formulation and solution stages. Independent Gaussian random variables are used to model the relationship between motion fields and images, and vector and binary Markov random fields are used to model the motion and motion discontinuity fields, respectively. These models, combined using Bayes' rule, result in a Gibbsian a posteriori probability distribution from which the Maximum A Posteriori Probability (MAP) and the Minimum Expected Cost (MEC) estimation criteria are derived. Optimization of these criteria is performed using stochastic relaxation. The MAP estimation is extended to handle large displacements via a hierarchical approach. Deterministic approximations to some of the proposed stochastic algorithms are derived and compared with their stochastic counterparts. Colour information is incorporated into the estimation process. Numerous experimental results are included. Two of the proposed methods are applied to motion-compensated interpolation and shown to reduce certain types of errors.
30

Hadzagic, Melita. "Bayesian approaches to trajectory estimation in maritime surveillance." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=94978.

Full text
Abstract:
In maritime surveillance, multi-sensor data differ to a great extent in their temporal resolution. Additionally, due to multi-level security and information management processing, many contact reports arrive hours after the observations. This makes the contact report data usually available for batch processing. The dissimilar multi-source information environment results in contact reports with heteroscedastic and correlated errors (i.e. measurement errors characterized by normal probability distributions with non-constant and non-diagonal covariance matrices), while the obtained measurement errors may be relatively large. Hence, the appropriate choice of a trajectory estimation algorithm which addresses the aforementioned issues of the surveillance data will contribute significantly to increased awareness in the maritime domain. This thesis presents two novel batch single-ship trajectory estimation algorithms employing Bayesian approaches to estimation: (1) a stochastic linear filtering algorithm and (2) a curve fitting algorithm which employs Bayesian statistical inference for nonparametric regression. The stochastic linear filtering algorithm employs a combination of two stochastic processes, namely the Integrated Ornstein-Uhlenbeck (IOU) process and the random walk (RW) process, to describe the ship's motion. The assumptions of linear modeling and a bivariate Gaussian distribution of measurement errors allow for the use of Kalman filtering and Rauch-Tung-Striebel optimal smoothing. In the curve fitting algorithm, the trajectory is considered to be in the form of a cubic spline with an unknown number of knots in the two-dimensional Euclidean plane of longitude and latitude. The function estimate is determined from the data, which are assumed Gaussian distributed. A fully Bayesian approach is adopted by defining prior distributions on all unknown parameters: the spline coefficients and the number and locations of the knots. The calculation of the p
In maritime surveillance, multi-sensor data differ, to a large extent, in their temporal resolution. Moreover, owing to multi-level security information management and processing, many contact reports are received hours after their observation, which makes the contact report data available for batch processing. Since the multi-source information comes from dissimilar environments, the contact reports have heteroscedastic and correlated errors (i.e., measurement errors characterized by a normal probability distribution with a non-constant, non-diagonal covariance matrix), and the measurement errors may be relatively large. Consequently, the appropriate choice of a trajectory estimation algorithm addressing the aforementioned issues of surveillance data will contribute significantly to increased situational awareness in the maritime domain. This thesis presents two new algorithms for batch estimation of single-ship trajectories employing a Bayesian estimation approach: (1) a stochastic linear filtering algorithm, and (2) a curve fitting algorithm performing nonparametric regression by Bayesian statistical inference. The stochastic linear filtering algorithm uses the combination of two stochastic processes, namely the Integrated Ornstein-Uhlenbeck (IOU) process and the random walk (RW) process, to describe the ship's motion. The assumptions of a linear model and a bivariate Gaussian distribution of the measurement errors allow the use of the Kalman filter and Rauch-Tung-Striebel optimal smoothing. In the curve fitting algorithm, the trajectory is considered to be represented by a cubic spline with an unknown number of knots in the two-dimensional Euclidean plane of longitude and latitude. The estimation of f
31

Nasios, Nikolaos. "Bayesian learning for parametric and kernel density estimation." Thesis, University of York, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428460.

Full text
32

Al, Hakmani Rahab. "Bayesian Estimation of Mixture IRT Models using NUTS." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1641.

Full text
Abstract:
The No-U-Turn Sampler (NUTS) is a relatively new Markov chain Monte Carlo (MCMC) algorithm that avoids the random walk behavior that common MCMC algorithms such as Gibbs sampling or Metropolis-Hastings usually exhibit. Given that NUTS can efficiently explore the entire space of the target distribution, the sampler converges to high-dimensional target distributions more quickly than other MCMC algorithms and is hence less computationally expensive. The focus of this study is on applying NUTS to one of the complex IRT models, specifically the two-parameter mixture IRT (Mix2PL) model, and further on examining its performance in estimating model parameters when sample size, test length, and number of latent classes are manipulated. The results indicate that overall, NUTS performs well in recovering model parameters. However, the recovery of the class membership of individual persons is not satisfactory for the three-class conditions. The results also indicate that WAIC performs better than LOO in recovering the number of latent classes, in terms of the proportion of the time the correct model was selected as the best fitting model. However, when the effective number of parameters was also considered in selecting the best fitting model, both fully Bayesian fit indices performed equally well. In addition, the results suggest that when multiple latent classes exist, using either fully Bayesian fit index (WAIC or LOO) would not select the conventional IRT model. On the other hand, when all examinees came from a single unified population, fitting MixIRT models using NUTS causes convergence problems.
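As a point of reference for the model class, the following is a minimal sketch of the Mix2PL response probability only; the class proportions and item parameters are illustrative assumptions of ours, and no NUTS sampler is shown:

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT model: P(correct | ability theta,
    item discrimination a, item difficulty b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mix2pl_prob(theta, pis, a, b):
    """Mixture 2PL: class proportions pis and class-specific item
    parameters a[g], b[g]; returns the marginal P(correct)."""
    return sum(pi * irt_2pl(theta, ag, bg)
               for pi, ag, bg in zip(pis, a, b))

# Two equally weighted latent classes that assign the same item
# different difficulties (hypothetical values).
p = mix2pl_prob(theta=0.0, pis=[0.5, 0.5], a=[1.0, 1.0], b=[-1.0, 1.0])
# By the symmetry of the logistic curve around b, the mixture gives 0.5.
```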
33

Sonogashira, Motoharu. "Variational Bayesian Image Restoration with Transformation Parameter Estimation." Kyoto University, 2018. http://hdl.handle.net/2433/232409.

Full text
34

Majidi, Mohammad Hassan. "Bayesian estimation of discrete signals with local dependencies." Thesis, Supélec, 2014. http://www.theses.fr/2014SUPL0014/document.

Full text
Abstract:
The aim of this thesis is to study the problem of data detection in wireless communication systems, for both perfect and imperfect channel state information at the receiver. As is well known, the complexity of MLSE, exponential in the channel memory and in the symbol alphabet cardinality, quickly becomes unmanageable and forces one to resort to suboptimal approaches. Therefore, we first propose a new iterative equalizer for the case where the channel is unknown at the transmitter and perfectly known at the receiver. This receiver is based on a continuation approach and exploits the idea of approximating an original optimization cost function by a sequence of more tractable functions, thereby reducing the receiver's computational complexity. Second, for data detection over a linear dynamic channel, when the channel is unknown at the receiver, the receiver must be able to perform joint equalization and channel estimation. To this end, we formulate a combined state-space representation of the communication system. With this representation, we can use the Kalman filter as the best estimator of the channel parameters. The aim of this part is to motivate rigorously the introduction of the Kalman filter in the estimation of Markov sequences through Gaussian dynamical channels. In this way we interpret and make explicit the approximations underlying the heuristic approaches. Finally, if we consider the more general case of a nonlinear dynamic channel, we can no longer use the Kalman filter as the best estimator. Here, we use the switching state-space model (SSSM) as a nonlinear state-space model. This model combines the hidden Markov model (HMM) and the linear state-space model (LSSM). For channel estimation and data detection, the expectation-maximization (EM) approach is used as the natural approach. In this way the extended Kalman filter (EKF) and particle filters are avoided.
The aim of this thesis is to study the problem of data detection in wireless communication systems, for both the case of perfect and of imperfect channel state information at the receiver. As is well known, the complexity of MLSE, being exponential in the channel memory and in the symbol alphabet cardinality, is quickly unmanageable and forces one to resort to sub-optimal approaches. Therefore, we first propose a new iterative equalizer for the case where the channel is unknown at the transmitter and perfectly known at the receiver. This receiver is based on a continuation approach, and exploits the idea of approximating an original optimization cost function by a sequence of more tractable functions, thus reducing the receiver's computational complexity. Second, for data detection under a linear dynamic channel, when the channel is unknown at the receiver, the receiver must be able to perform joint equalization and channel estimation. To this end, we formulate a combined state-space model representation of the communication system. With this representation, we can use the Kalman filter as the best estimator for the channel parameters. The aim in this part is to motivate rigorously the introduction of the Kalman filter in the estimation of Markov sequences through Gaussian dynamical channels. In this way we interpret and make clearer the underlying approximations in the heuristic approaches. Finally, when we consider the more general case of a nonlinear dynamic channel, we cannot use the Kalman filter as the best estimator. Here, we use the switching state-space model (SSSM) as a nonlinear state-space model. This model combines the hidden Markov model (HMM) and the linear state-space model (LSSM). For channel estimation and data detection, the expectation-maximization (EM) procedure is used as the natural approach. In this way the extended Kalman filter (EKF) and particle filters are avoided.
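The Kalman filter recursion relied on above can be sketched for a scalar linear Gaussian state-space model. The slowly varying "channel tap" below is our own toy example, not the thesis's combined state-space model:

```python
import numpy as np

def kalman_filter(ys, f=1.0, q=0.01, h=1.0, r=0.1, m0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = f * x_{t-1} + w_t, y_t = h * x_t + v_t,
    with w_t ~ N(0, q) and v_t ~ N(0, r)."""
    m, p, means = m0, p0, []
    for y in ys:
        # predict
        m, p = f * m, f * p * f + q
        # update
        s = h * p * h + r                # innovation variance
        k = p * h / s                    # Kalman gain
        m = m + k * (y - h * m)
        p = (1.0 - k * h) * p
        means.append(m)
    return np.array(means)

# Toy data (illustrative): a nearly static tap of value 2.0 observed
# 200 times in Gaussian noise with variance 0.1.
rng = np.random.default_rng(1)
ys = 2.0 + rng.normal(0.0, np.sqrt(0.1), size=200)
est = kalman_filter(ys, q=1e-6)
# The filtered mean converges toward the true value 2.0.
```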
35

Mazzetta, Chiara. "Bayesian estimation of temporal dynamics in population ecology." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613030.

Full text
36

Nicoloutsopoulos, Dimitrios. "Parametric and Bayesian non-parametric estimation of copulas." Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1445722/.

Full text
Abstract:
This thesis studies parametric and non-parametric methods of copula estimation with special focus on the Archimedean class of copulas. The first part proposes an estimation procedure which is independent of the marginal distributions and performs well for one-parameter or two-parameter families of copulas, where traditional methods give questionable results, especially for small sample sizes. In the second part we follow a Bayesian methodology and represent the copula density as a random piecewise-constant function. Under the presence of some data, we set up a probability distribution over the copula density and utilize Markov chain Monte Carlo techniques to explore that distribution. The methodology is extended to perform shape-preserving estimation of a univariate convex and monotone function that characterizes the copula. The estimated first and second derivatives of the function of interest must satisfy the restrictions that the theory imposes. All methods are illustrated with examples from simulated samples and a real-life dataset of daily observations of the Dow Jones and FTSE financial indices.
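As one concrete member of the Archimedean class studied here, a small sketch of the Clayton copula and its closed-form Kendall's tau; the choice of family and parameter value is our own illustration, not a result of the thesis:

```python
import numpy as np

def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v), an Archimedean copula with generator
    phi(t) = (t**(-theta) - 1) / theta, for theta > 0."""
    return np.maximum(u ** (-theta) + v ** (-theta) - 1.0, 0.0) ** (-1.0 / theta)

def kendalls_tau(theta):
    """Closed-form Kendall's tau for the Clayton family."""
    return theta / (theta + 2.0)

c = clayton_cdf(0.3, 1.0, theta=2.0)   # C(u, 1) = u holds for any copula
t = kendalls_tau(2.0)                  # tau = 2 / (2 + 2) = 0.5
```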
37

Wong, Jackie Siaw Tze. "Bayesian estimation and model comparison for mortality forecasting." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/415627/.

Full text
Abstract:
The ability to perform mortality forecasting accurately is of considerable interest for a wide variety of applications, in order to avoid adverse costs. The recent decline in mortality poses a major challenge to various institutions in their attempts to forecast mortality within acceptable risk margins. The ultimate aim of our project is to develop a methodology to produce accurate mortality forecasts, with carefully calibrated probabilistic intervals to quantify the uncertainty encountered during the forecasts. Bayesian methodology is implemented throughout the thesis for its various benefits, but primarily for its ability to provide a coherent modelling framework. Our contributions in this thesis can be divided into several parts. Firstly, we focus on the Poisson log-bilinear model of Brouhns et al. (2002), which induces an undesirable property, the mean-variance equality. Poisson log-normal and Poisson-gamma log-bilinear models, fitted using arbitrarily diffuse priors, are presented as possible solutions. We demonstrate that properly accounting for overdispersion prevents over-fitting and offers better calibrated prediction intervals for mortality forecasting. Secondly, we carry out Bayesian model determination procedures to compare the models, using marginal likelihoods computed by bridge sampling (Meng and Wong, 1996). To achieve our goal of approximating the marginal likelihoods accurately, a series of simulation studies is conducted to investigate the behaviour of the bridge sampling estimator. Next, a structurally simpler model, which postulates a log-linear relationship between the mortality rate and time, is considered. To provide a fair comparison between this model and the log-bilinear model, we carry out rigorous investigations of the prior specifications to ensure consistency in the prior information postulated for the models. We propose to use Laplace prior distributions on the corresponding parameters of the log-linear model.
Finally, we demonstrate that the inclusion of cohort components is crucial to yield more accurate projections and to avoid unnecessarily wide prediction intervals by improving the calibration between data signals and errors.
38

Torrence, Robert Billington. "Bayesian Parameter Estimation on Three Models of Influenza." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/77611.

Full text
Abstract:
Mathematical models of viral infections have been informing virology research for years. Estimating parameter values for these models can lead to an understanding of biologically meaningful quantities. This has been successful in HIV modeling, for example in the estimation of values such as the lifetime of infected CD8 T-cells. However, estimating these values is notoriously difficult, especially for highly complex models. We use Bayesian inference and Markov chain Monte Carlo methods to estimate the underlying densities of the parameters (assumed to be continuous random variables) for three models of influenza. We discuss the advantages and limitations of parameter estimation using these methods. The data and influenza models used for this project are from the lab of Dr. Amber Smith in Memphis, Tennessee.
Master of Science
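The kind of MCMC parameter estimation described in this abstract can be sketched with a random-walk Metropolis sampler on a toy exponential viral-decay model. The model, synthetic data, and tuning constants below are entirely our own assumptions, not the Smith-lab influenza models or data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy viral-decay model V(t) = V0 * exp(-delta * t), observed in log space
# with Gaussian noise (all values hypothetical).
t = np.linspace(0.0, 5.0, 20)
true_delta, v0, sigma = 1.5, 1e6, 0.2
y = np.log(v0) - true_delta * t + rng.normal(0.0, sigma, size=t.size)

def log_post(delta):
    """Log posterior: Gaussian likelihood, flat prior on delta > 0."""
    if delta <= 0.0:
        return -np.inf
    resid = y - (np.log(v0) - delta * t)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def metropolis(n_iter=5000, step=0.05, start=1.0):
    """Random-walk Metropolis: propose, then accept with prob min(1, ratio)."""
    chain, cur, cur_lp = [], start, log_post(start)
    for _ in range(n_iter):
        prop = cur + step * rng.normal()
        prop_lp = log_post(prop)
        if np.log(rng.uniform()) < prop_lp - cur_lp:
            cur, cur_lp = prop, prop_lp
        chain.append(cur)
    return np.array(chain)

chain = metropolis()
post_mean = float(chain[2000:].mean())   # discard burn-in
```

The posterior mean recovers the decay rate used to simulate the data; in a real analysis one would also monitor convergence rather than fix the burn-in by hand.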
39

Hotti, Alexandra. "Bayesian insurance pricing using informative prior estimation techniques." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286312.

Full text
Abstract:
Large, well-established insurance companies build statistical pricing models based on customer claim data. Due to their long experience and large amounts of data, they can predict their future expected claim losses accurately. In contrast, small newly formed insurance start-ups do not have access to such data. Instead, a start-up’s pricing model’s initial parameters can be set by directly estimating the risk premium tariff’s parameters in a non-statistical manner. However, this approach results in a pricing model that cannot be adjusted based on new claim data through classical frequentist insurance approaches. This thesis has put forth three Bayesian approaches for including estimates of an existing multiplicative tariff as the expectation of a prior in a Generalized Linear Model (GLM). The similarity between premiums set using the prior estimations and the static pricing model was measured as their relative difference. The results showed that the static tariff could be closely estimated. The estimated priors were then merged with claim data through the likelihood. These posteriors were estimated via the two Markov Chain Monte Carlo approaches, Metropolis and Metropolis-Hastings. All in all, this resulted in three risk premium models that could take advantage of existing pricing knowledge and learn over time as new cases arrived. The results showed that the Bayesian pricing methods significantly reduced the discrepancy between predicted and actual claim costs on an overall portfolio level compared to the static tariff. Nevertheless, this could not be determined on an individual policyholder level.
Large, well-established insurance companies model their risk premiums using statistical models and claims data. Because they have access to a long history of claims, they can predict their future claim costs with high precision. Unlike a large insurance company, a small, newly established insurance start-up does not have access to that amount of data. The new insurer's initial pricing model can therefore instead be built by directly estimating the parameters of a tariff in a non-statistical manner. The problem with such a method is that the tariff's parameters cannot be adjusted in light of the company's own claims using classical frequentist pricing methods. This master's thesis presents three methods for estimating an existing static multiplicative tariff. The estimates can be used as a prior in a Bayesian risk-premium model. The similarity between the premiums set via the estimated and the actual static tariff is evaluated by computing their relative difference. The results of the comparison indicate that the prior can be estimated with high precision. The estimated prior parameters were then combined with claims data from the start-up Hedvig. The posterior was then estimated with Metropolis and Metropolis-Hastings, two Markov chain Monte Carlo simulation methods. Altogether, this resulted in a pricing method that could exploit knowledge from an existing static pricing model while incorporating more knowledge as new claims occurred. The results indicated that the Bayesian pricing methods could predict claim costs with higher precision than the static tariff.
40

Fan, Hang. "Estimation of Species Tree Using Approximate Bayesian Computation." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1281732679.

Full text
41

Roth, Michael. "Advanced Kalman Filtering Approaches to Bayesian State Estimation." Doctoral thesis, Linköpings universitet, Reglerteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-134867.

Full text
Abstract:
Bayesian state estimation is a flexible framework to address relevant problems at the heart of existing and upcoming technologies. Application examples are obstacle tracking for driverless cars and indoor navigation using smartphone sensor data. Unfortunately, the mathematical solutions of the underlying theory cannot be translated to computer code in general. Therefore, this thesis discusses algorithms and approximations that are related to the Kalman filter (KF). Four scientific articles and an introduction with the relevant background on Bayesian state estimation theory and algorithms are included. Two articles discuss nonlinear Kalman filters, which employ the KF measurement update in nonlinear models. The numerous variants are presented in a common framework and the employed moment approximations are analyzed. Furthermore, their application to target tracking problems is discussed. A third article analyzes the ensemble Kalman filter (EnKF), a Monte Carlo implementation of the KF that has been developed for high-dimensional geoscientific filtering problems. The EnKF is presented in a simple KF framework, including its challenges, important extensions, and relations to other filters. Whereas the aforementioned articles contribute to the understanding of existing algorithms, a fourth article devises novel filters and smoothers to address heavy-tailed noise. The development is based on Student’s t distribution and provides simple recursions in the spirit of the KF. The introduction and articles are accompanied by extensive simulation experiments.
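The ensemble Kalman filter analysed in the third article can be sketched in its simplest stochastic (perturbed-observation) form for a scalar, directly observed state; this is an illustrative reduction of ours, not the thesis's presentation:

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(ensemble, y, obs_var):
    """Stochastic EnKF analysis step for a scalar state observed directly
    (H = 1): each member is nudged toward its own perturbed observation."""
    x = ensemble
    p = np.var(x, ddof=1)                 # ensemble forecast variance
    k = p / (p + obs_var)                 # Kalman gain
    y_pert = y + rng.normal(0.0, np.sqrt(obs_var), size=x.size)
    return x + k * (y_pert - x)

# Forecast ensemble ~ N(0, 1), observation y = 1 with unit variance.
prior = rng.normal(0.0, 1.0, size=5000)
post = enkf_update(prior, y=1.0, obs_var=1.0)
# The exact Gaussian analysis has mean 0.5 and variance 0.5; the
# ensemble statistics approach these values as the ensemble grows.
```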
42

Lee, Suhwon. "Nonparametric bayesian density estimation with intrinsic autoregressive priors /." free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p3115565.

Full text
43

Gorynin, Ivan. "Bayesian state estimation in partially observable Markov processes." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL009/document.

Full text
Abstract:
This thesis concerns Bayesian state estimation in time series modeled using hybrid latent variables, that is, variables whose density has a finite-discrete component and a continuous component. General algorithms for estimating the state variables in partially observed Markov models with hybrid states are proposed and compared with sequential Monte Carlo methods from a theoretical and an applied standpoint. The main result is that these algorithms significantly reduce the computational cost compared with classical sequential Monte Carlo methods.
This thesis addresses the Bayesian estimation of hybrid-valued state variables in time series. The probability density function of a hybrid-valued random variable has a finite-discrete component and a continuous component. Diverse general algorithms for state estimation in partially observable Markov processes are introduced. These algorithms are compared with sequential Monte Carlo methods from a theoretical and a practical viewpoint. The main result is that the proposed methods require less processing time than the classic Monte Carlo methods.
APA, Harvard, Vancouver, ISO, and other styles
44

Filho, Paulo Cilas Marques. "Análise bayesiana de densidades aleatórias simples." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-25052012-184549/.

Full text
Abstract:
We define, from a known partition in subintervals of a bounded interval of the real line, a prior distribution over a class of densities with respect to Lebesgue measure constructing a random density whose realizations are nonnegative simple functions that integrate to one and have a constant value on each subinterval of the partition. These simple random densities are used in the Bayesian analysis of a set of absolutely continuous observables and the prior distribution is proved to be closed under sampling. We explore the prior and posterior distributions through stochastic simulations and find Bayesian solutions to the problem of density estimation. Simulations results show the asymptotic behavior of the posterior distribution as we increase the size of the analyzed data samples. When the partition is unknown, we propose a choice criterion based on the information contained in the sample. In spite of the fact that the expectation of a simple random density is always a discontinuous density, we get smooth estimates solving a decision problem where the states of nature are realizations of the simple random density and the actions are smooth densities of a suitable class.
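The closure under sampling described here can be illustrated with a conjugate construction: a Dirichlet prior on the bin probabilities of a fixed partition yields a Dirichlet posterior, and the posterior-mean density is a step function. The partition, the prior weights, and the data below are illustrative choices, not necessarily the exact prior of the thesis:

```python
import numpy as np

# Partition of [0, 1) into four bins; Dirichlet(alpha) prior on bin probabilities.
edges = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
alpha = np.ones(4)  # uniform prior weights (illustrative choice)

def posterior_density(data):
    """Posterior-mean piecewise-constant density. The Dirichlet prior is
    conjugate: the posterior is Dirichlet(alpha + counts)."""
    counts, _ = np.histogram(data, bins=edges)
    post = alpha + counts
    probs = post / post.sum()   # posterior-mean bin probabilities
    widths = np.diff(edges)
    return probs / widths       # density value on each subinterval

data = np.array([0.1, 0.2, 0.3, 0.9])
dens = posterior_density(data)
# dens is a simple (step-function) density: sum(dens * widths) == 1
```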
APA, Harvard, Vancouver, ISO, and other styles
45

Stephens, Matthew. "Bayesian methods for mixtures of normal distributions." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Neto, Anselmo Ramalho Pitombeira. "Dynamic bayesian statistical models for the estimation of the origin-destination matrix." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=14698.

Full text
Abstract:
In transportation planning, one of the first steps is to estimate the travel demand. A product of the estimation process is the so-called origin-destination matrix (OD matrix), whose entries correspond to the number of trips between pairs of zones in a geographic region in a reference time period. Traditionally, the OD matrix has been estimated through direct methods, such as home-based surveys, road-side interviews and automatic license plate recognition. These direct methods require large samples to achieve a target statistical error, which may be technically or economically infeasible. Alternatively, one can use a statistical model to indirectly estimate the OD matrix from observed traffic volumes on links of the transportation network. The first estimation models proposed in the literature assume that traffic volumes in a sequence of days are independent and identically distributed samples of a static probability distribution. Moreover, static estimation models do not allow for variations in mean OD flows or non-constant variability over time. In contrast, day-to-day dynamic models are in theory more capable of capturing underlying changes of system parameters which are only indirectly observed through variations in traffic volumes. Even so, there is still a dearth of statistical models in the literature which account for the day-to-day dynamic evolution of transportation systems. In this thesis, our objective is to assess the potential gains and limitations of day-to-day dynamic models for the estimation of the OD matrix based on link volumes. First, we review the main static and dynamic models available in the literature. We then describe our proposed day-to-day dynamic Bayesian model based on the theory of linear dynamic models. The proposed model is tested by means of computational experiments and compared with a static estimation model and with the generalized least squares (GLS) model.
The results show some advantage in favor of dynamic models in informative scenarios, while in non-informative scenarios the performance of the models was equivalent. The experiments also indicate a significant dependence of the estimation errors on the assignment matrices.
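A toy version of such a day-to-day linear dynamic model can be sketched as follows: OD flows evolve as a random walk and are observed only through link volumes via an assignment matrix. The matrix `A`, the noise covariances, and the observed volumes below are all made-up illustrative values, not from the thesis:

```python
import numpy as np

# Toy day-to-day DLM: OD flows x_t follow a random walk, and link volumes
# y_t = A x_t + noise, with A a (hypothetical) assignment matrix.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])   # 2 links, 3 OD pairs
W = 4.0 * np.eye(3)               # evolution (state) noise covariance
V = 1.0 * np.eye(2)               # observation noise covariance

def dlm_step(m, C, y):
    """One filtering recursion: predict with the random walk, then update."""
    R = C + W                         # predicted state covariance
    Q = A @ R @ A.T + V               # forecast covariance of y
    K = R @ A.T @ np.linalg.inv(Q)    # gain
    m_new = m + K @ (y - A @ m)       # updated mean OD flows
    C_new = R - K @ Q @ K.T           # updated covariance
    return m_new, C_new

m, C = np.array([10.0, 10.0, 10.0]), 25.0 * np.eye(3)
for y in [np.array([21.0, 19.0]), np.array([22.0, 20.0])]:
    m, C = dlm_step(m, C, y)
```

Unlike a static estimator, the recursion lets the mean OD flows drift from day to day while the covariance tracks the remaining uncertainty.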
APA, Harvard, Vancouver, ISO, and other styles
47

Lewis, John Robert. "Bayesian Restricted Likelihood Methods." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1407505392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Merkle, Edgar C. "Bayesian estimation of factor analysis models with incomplete data." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1126895149.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 106 p.; also includes graphics. Includes bibliographical references (p. 103-106). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Deok-Jin. "Nonlinear bayesian filtering with applications to estimation and navigation." Texas A&M University, 2004. http://hdl.handle.net/1969.1/2269.

Full text
Abstract:
In principle, general approaches to optimal nonlinear filtering can be described in a unified way from the recursive Bayesian approach. The central idea of this recursive Bayesian estimation is to determine the probability density function of the state vector of the nonlinear systems conditioned on the available measurements. However, the optimal exact solution to this Bayesian filtering problem is intractable since it requires an infinite dimensional process. For practical nonlinear filtering applications approximate solutions are required. Recently, efficient and accurate approximate nonlinear filters have been proposed as alternatives to the extended Kalman filter for recursive nonlinear estimation of the states and parameters of dynamical systems. First, as sampling-based nonlinear filters, the sigma point filters, the unscented Kalman filter and the divided difference filter are investigated. Secondly, a direct numerical nonlinear filter is introduced where the state conditional probability density is calculated by applying fast numerical solvers to the Fokker-Planck equation in continuous-discrete system models. As simulation-based nonlinear filters, a universally effective algorithm, called the sequential Monte Carlo filter, that recursively utilizes a set of weighted samples to approximate the distributions of the state variables or parameters, is investigated for dealing with nonlinear and non-Gaussian systems. Recent particle filtering algorithms, which were developed independently in various engineering fields, are investigated in a unified way. Furthermore, a new type of particle filter is proposed by integrating the divided difference filter with a particle filtering framework, leading to the divided difference particle filter. Sub-optimality of the approximate nonlinear filters due to unknown system uncertainties can be compensated by using an adaptive filtering method that estimates both the state and system error statistics.
For accurate identification of the time-varying parameters of dynamic systems, new adaptive nonlinear filters that integrate the presented nonlinear filtering algorithms with noise estimation algorithms are derived. For qualitative and quantitative performance analysis among the proposed nonlinear filters, systematic methods for measuring the nonlinearities, biasness, and optimality of the proposed nonlinear filters are introduced. The proposed nonlinear optimal and sub-optimal filtering algorithms with applications to spacecraft orbit estimation and autonomous navigation are investigated. Simulation results indicate that the advantages of the proposed nonlinear filters make these attractive alternatives to the extended Kalman filter.
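The sequential Monte Carlo (bootstrap particle) filter described in this abstract can be sketched for a toy linear-Gaussian model; the model, parameters, and observation sequence below are illustrative assumptions, not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y_seq, n=2000):
    """Bootstrap particle filter for a toy model:
    x_t = 0.9 x_{t-1} + w_t,  y_t = x_t + v_t,  w, v ~ N(0, 1)."""
    x = rng.normal(0.0, 1.0, n)                 # initial particle cloud
    means = []
    for y in y_seq:
        x = 0.9 * x + rng.normal(0.0, 1.0, n)   # propagate through dynamics
        logw = -0.5 * (y - x) ** 2              # Gaussian log-likelihood weights
        w = np.exp(logw - logw.max())
        w /= w.sum()                            # normalize importance weights
        means.append(np.sum(w * x))             # weighted posterior-mean estimate
        x = rng.choice(x, size=n, p=w)          # multinomial resampling
    return np.array(means)

est = bootstrap_pf([1.0, 1.2, 0.8, 1.1])
```

For a linear-Gaussian model like this one the Kalman filter is exact and the particle estimate merely approximates it; the weighted-sample machinery pays off only for the nonlinear and non-Gaussian systems the abstract targets.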
APA, Harvard, Vancouver, ISO, and other styles
50

Baziw, Erick. "Application of Bayesian recursive estimation for seismic signal processing." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/30715.

Full text
Abstract:
Bayesian recursive estimation (BRE) requires that the posterior density function be estimated so that conditional mean estimates of desired parameters or states can be obtained. BRE has been referred to as a complete solution to the estimation problem since the posterior density function embodies all available statistical information (i.e., prior, likelihood and evidence). Until recent advances in BRE, most applications required that the system and measurement equations be linear, and that the process and measurement noise be Gaussian and white. A Kalman filter, KF, (closed form solution to the BRE) could be applied to systems that met these conditions. Previous applications of the KF to solve seismic signal processing problems (e.g., deconvolution) have had very limited success and acceptability in the geophysics signal processing community due to the restrictive nature of the KF. The recently new BRE development of sequential Monte Carlo (SMC) techniques for numerically solving non-stationary and non-linear problems has generated considerable interest and active research within the last decade. This thesis focuses upon the implementation of SMC techniques (e.g., particle filtering) for solving seismic signal processing problems. All the associated filters of BRE (hidden Markov model filter, KF, particle filter, Rao-Blackwellised particle filter, and jump Markov systems) and a new and highly robust and unique model of the seismic source wavelet are implemented in two innovative algorithms for solving the important problems of passive seismic event detection and blind seismic deconvolution. A ground-breaking concept in blind seismic deconvolution referred to as principle phase decomposition (PPD) is outlined and evaluated in this thesis. The PPD technique estimates and separates overlapping source wavelets instead of estimating high bandwidth reflection coefficients. 
It is shown that one can then easily generate reflection coefficients from the separated source wavelets. In this thesis many advantages of the PPD are outlined. Simulated seismogram data with low signal-to-noise ratios is blindly deconvolved where non-stationary, mixed-phase, and zero-phase source wavelets are present. I believe that there are currently no existing blind seismic deconvolution techniques which could obtain comparable performance results of the PPD technique. The work in this thesis has resulted in three IEEE publications and one peer reviewed conference publication.
Faculty of Science, Department of Earth, Ocean and Atmospheric Sciences (Graduate)
APA, Harvard, Vancouver, ISO, and other styles