Theses on the topic "Bayesian modelling"

Below are the top 50 dissertations (degree and doctoral theses) selected for research activity on the topic "Bayesian modelling".

1

Peeling, Paul Halliday. "Bayesian methods in music modelling". Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/237236.

Full text
Abstract:
This thesis presents several hierarchical generative Bayesian models of musical signals designed to improve the accuracy of existing multiple pitch detection systems and other musical signal processing applications whilst remaining feasible for real-time computation. At the lowest level the signal is modelled as a set of overlapping sinusoidal basis functions. The parameters of these basis functions are built into a prior framework based on principles known from musical theory and the physics of musical instruments. The model of a musical note optionally includes phenomena such as frequency and amplitude modulations, damping, volume, timbre and inharmonicity. The occurrence of note onsets in a performance of a piece of music is controlled by an underlying tempo process and the alignment of the timings to the underlying score of the music. A variety of applications are presented for these models under differing inference constraints. Where full Bayesian inference is possible, reversible-jump Markov chain Monte Carlo is employed to estimate the number of notes and partial frequency components in each frame of music. We also use approximate techniques such as model selection criteria and variational Bayes methods for inference in situations where computation time is limited or the amount of data to be processed is large. For the higher level score parameters, greedy search and conditional modes algorithms are found to be sufficiently accurate. We emphasize the links between the models and inference algorithms developed in this thesis and those in existing and parallel work, and demonstrate the effects of making modifications to these models both theoretically and by means of experimental results.
2

Strimenopoulou, Foteini. "Bayesian modelling of functional data". Thesis, University of Kent, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.544037.

Full text
3

Polson, Nicholas G. "Bayesian perspectives on statistical modelling". Thesis, University of Nottingham, 1988. http://eprints.nottingham.ac.uk/11292/.

Full text
Abstract:
This thesis explores the representation of probability measures in a coherent Bayesian modelling framework, together with the ensuing characterisation properties of posterior functionals. First, a decision theoretic approach is adopted to provide a unified modelling criterion applicable to assessing prior-likelihood combinations, design matrices, model dimensionality and choice of sample size. The utility structure and associated Bayes risk induces a distance measure, introducing concepts from differential geometry to aid in the interpretation of modelling characteristics. Secondly, analytical and approximate computations for the implementation of the Bayesian paradigm, based on the properties of the class of transformation models, are discussed. Finally, relationships between distance measures (in the form of either a derivative of a Bayes mapping or an induced distance) are explored, with particular reference to the construction of sensitivity measures.
4

Baker, Peter John. "Applied Bayesian modelling in genetics". Thesis, Queensland University of Technology, 2001.

Search for full text
5

Habli, Nada. "Nonparametric Bayesian Modelling in Machine Learning". Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34267.

Full text
Abstract:
Nonparametric Bayesian inference has widespread applications in statistics and machine learning. In this thesis, we examine the most popular priors used in Bayesian nonparametric inference. The Dirichlet process and its extensions are priors on an infinite-dimensional space. Originally introduced by Ferguson (1973), its conjugacy property allows tractable posterior inference, which has lately given rise to significant developments in applications related to machine learning. Yet another widespread prior used in nonparametric Bayesian inference is the Beta process and its extensions. It was originally introduced by Hjort (1990) for applications in survival analysis. It is a prior on the space of cumulative hazard functions and it has recently been widely used as a prior on an infinite-dimensional space for latent feature models. Our contribution in this thesis is to collect many diverse groups of nonparametric Bayesian tools and explore algorithms to sample from them. We also explore the machinery behind the theory to apply and expose some distinguished features of these procedures. These tools can be used by practitioners in many applications.
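As a concrete illustration of the Dirichlet process machinery this abstract refers to, here is a minimal stick-breaking sketch in Python (NumPy assumed); the concentration parameter, standard-normal base measure and truncation level are illustrative choices of this sketch, not values taken from the thesis.

```python
import numpy as np

def stick_breaking_dp(alpha, base_sampler, truncation=500, seed=None):
    """Approximate draw from DP(alpha, G0) via a truncated stick-breaking construction."""
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=truncation)              # stick proportions V_k
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining                                 # w_k = V_k * prod_{j<k}(1 - V_j)
    atoms = base_sampler(truncation, rng)                       # atom locations drawn from G0
    return weights / weights.sum(), atoms                       # renormalise the truncated weights

# Example: base measure G0 = N(0, 1); draw 10 observations from one DP realisation.
weights, atoms = stick_breaking_dp(
    alpha=2.0, base_sampler=lambda n, rng: rng.normal(0.0, 1.0, size=n), seed=1)
print(np.random.default_rng(2).choice(atoms, size=10, p=weights))
```

In a DP mixture model, each atom would index a kernel (for example a Gaussian component) rather than being an observation itself.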
6

Delatola, Eleni-Ioanna. "Bayesian nonparametric modelling of financial data". Thesis, University of Kent, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.589934.

Full text
Abstract:
This thesis presents a class of discrete time univariate stochastic volatility models using Bayesian nonparametric techniques. In particular, the models that will be introduced are not only the basic stochastic volatility model, but also the heavy-tailed model using scale mixture of Normals and the leverage model. The aim will be focused on capturing flexibly the distribution of the logarithm of the squared return under the aforementioned models using infinite mixture of Normals. Parameter estimates for these models will be obtained using Markov chain Monte Carlo methods and the Kalman filter. Links between the return distribution and the distribution of the logarithm of the squared returns will be established. The one-step ahead predictive ability of the model will be measured using log-predictive scores. Asset returns, stock indices and exchange rates will be fitted using the developed methods.
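For orientation, the transformation of returns this abstract builds on is the standard stochastic volatility linearisation (a generic textbook form; the thesis's exact parameterisation may differ):

```latex
y_t = \varepsilon_t \, e^{h_t/2}, \qquad
\log y_t^2 = h_t + \log \varepsilon_t^2, \qquad
h_{t+1} = \mu + \phi\,(h_t - \mu) + \eta_t, \quad \eta_t \sim \mathrm{N}(0, \sigma_\eta^2),
```

where the error distribution of log ε_t², fixed at log χ²₁ in the parametric model, is instead modelled flexibly (here, by an infinite mixture of Normals); conditionally on the mixture allocations the model is linear and Gaussian, which is what makes the Kalman filter usable inside the MCMC scheme.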
7

Yan, Haojie. "Bayesian spatial modelling of air pollution". Thesis, University of Bath, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.541668.

Full text
8

Brown, G. O. "Model discrimination in Bayesian credibility modelling". Thesis, University of Cambridge, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.596996.

Full text
Abstract:
This thesis is about insurance models and aspects of uncertainty pertaining to such models. The models we consider are insurance credibility models, arising from the need for accurate rate making based on past experience of claims in some portfolio of insurance policies. Classical credibility modelling is concerned with the use of a linear estimate to approximate the risk premium and was first studied by American actuaries at the start of the 20th century. In the Bayesian paradigm the credibility premium is the optimal linear premium since it minimises the expected square loss based on current information. Here we focus on estimating the risk premium without using the linear estimator since the linear estimate is known to be an exact expression only in certain restricted cases such as the linear exponential family. Markov chain Monte Carlo (MCMC) has become a standard tool in statistical analysis. In this thesis we show how it can be used in a Bayesian setting applied to insurance credibility theory. Using MCMC methods, we can compute the premium to cover future risks to any degree of accuracy required by simulating directly from the posterior distribution of the unknown model risk parameters and then averaging the risk premium against this distribution. This is illustrated for a special case. We then consider the problem of model uncertainty and model selection in general credibility modelling. This is necessary especially when there are several competing models which seem to adequately describe the data. Most of our model selection techniques are based on the reversible jump MCMC algorithm of Green (1995, Biometrika). Recently Brooks et al. (2003, JRSSB) have proposed several implementational improvements for the vanilla reversible jump algorithm. In this thesis we apply these methods to various model selection problems in insurance credibility theory.
9

Kheradmandnia, Manouchehr. "Aspects of Bayesian threshold autoregressive modelling". Thesis, University of Kent, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303040.

Full text
10

Smith, Elizabeth. "Bayesian modelling of extreme rainfall data". Thesis, University of Newcastle Upon Tyne, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.424142.

Full text
11

Chai, High Seng. "Bayesian modelling with skew-elliptical distributions". Thesis, University of Southampton, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.432726.

Full text
12

Walker, Jemma. "Bayesian modelling in genetic association studies". Thesis, London School of Hygiene and Tropical Medicine (University of London), 2012. http://researchonline.lshtm.ac.uk/1635516/.

Full text
Abstract:
Bayesian Model Selection Approaches are flexible methods that can be utilised to investigate Genetic Association studies in greater detail, enabling us to more accurately pin-point locations of disease genes in complex regions such as the MHC, as well as investigate possible causal pathways between genes, disease and intermediate phenotypes. This thesis is split into two distinct parts. The first uses a Bayesian Multivariate Adaptive Regression Spline Model to search across many highly correlated variants to try to determine which are likely to be the truly causal variants within complex genetic regions and also how each of these variants influences disease status. Specifically, I consider the role of genetic variants within the MHC region on SLE. The second part of the thesis aims to model possible disease pathways between genes, disease, intermediate phenotypes and environmental factors using Bayesian Networks, in particular focussing upon coronary heart disease and numerous blood biomarkers and related genes. Bayesian Multivariate Adaptive Regression Spline Model: Genetic association studies have the problem that often many genotypes in strong linkage disequilibrium (LD) are found to be associated with the outcome of interest. This makes it difficult to establish the actual SNP responsible. The aim of this part of the thesis is to investigate Bayesian variable selection methods in regions of high LD. In particular, to investigate SNPs in the major histocompatibility complex (MHC) region associated with systemic lupus erythematosus (SLE). Past studies have found several SNPs in this region to be highly associated with SLE but these SNPs are in high LD with one another. It is desirable to search over all possible regression models in order to find those SNPs that are most important in the prediction of SLE. The Bayesian Multivariate Adaptive Regression Splines (BMARS) model used should automatically correct for nearby associated SNPs, and only those directly associated should be included in the model. The BMARS approach will also automatically select the most appropriate disease model for each directly associated variant. It was found that there appear to be 3 separate SNP signals in the MHC region that show association with SLE. The rest of the associations found using simple Frequentist tests are likely to be due to LD with the true signal. Bayesian Networks for Genetic Association Studies: Coronary Heart Disease (CHD) is one of many diseases that result from complicated relationships between both genetic and environmental factors. Identifying causal factors and developing new treatments that target these factors is very difficult. Changes in intermediate phenotypes, or biomarkers, could suggest potential causal pathways, although these have a tendency to group amongst those patients with higher risk of CHD, making it difficult to distinguish independent causal relationships. I aim to model disease pathways allowing for intermediate phenotypes as well as genetic and environmental factors. Statistical methodology was developed using directed acyclic graphs (DAGs). Disease outcomes, genes, intermediate phenotypes and possible explanatory variables were represented as nodes in a DAG. Possible models were investigated using Bayesian regression models, based upon the underlying DAG, in a reversible jump MCMC framework. Modelling the data this way allows us to distinguish between direct and indirect effects as well as explore possible directionality of relationships.
Since different DAGs can belong to the same equivalence class, some directions of association may become indistinguishable and I am interested in the implications of this. I investigated the integrated associations of genotypes with multiple blood biomarkers linked to CHD risk, focusing particularly on relationships between APOE, CETP and APOB genotypes; HDL- and LDL-cholesterol, triglycerides, C-reactive protein, fibrinogen and apolipoproteins A and B. Overview: I will begin by introducing the topics of genetics, statistics and directed acyclic graphs with a background on each (Chapters 2, 3 and 4 respectively). Chapter 5 will then detail the analysis and results of the BMARS model. The analysis and results of Bayesian networks for genetic association studies will then be covered in Chapter 6.
13

Van der Laarse, Maryn. "Modelling rhino presence with Bayesian networks". Diss., University of Pretoria, 2020. http://hdl.handle.net/2263/73455.

Full text
Abstract:
Modelling complex systems such as how the white rhinoceros Ceratotherium simum simum uses a landscape requires innovative and multi-disciplinary approaches. Bayesian networks have been shown to provide a dynamic, easily interpretable framework to represent real-world problems. This, together with advances in remote sensor technology to easily quantify environmental variables, make non-intrusive techniques for understanding and inference of ecological processes more viable than ever. However, when modelling an animal’s use of a landscape we only have access to presence locations. These data are also extremely susceptible to both temporal and spatial sampling bias in that animal presence locations often originate from aerial surveys or from individual rhinos fitted with tracking collars. In modelling species’ presence, little recognition is given to finding quantifiable drivers and managing confounding variables. Here we use presence-unlabelled modelling to construct Bayesian networks for rhino presence with remotely sensed covariates and show how it can provide an understanding of a complex system in a temporal and spatial context. We find that strategic unlabelled data sampling is important to counter sampling biases and discretisation of covariate data needs to be well considered in the tradeoff between computational efficiency and data accuracy. We show how learned Bayesian networks can be used to not only reveal interesting relations between drivers of rhino presence, but also to perform inference. Having temporally aware environmental variables such as soil moisture and distance to fire, allowed us to infer rhino presences for the following time step with incomplete evidence. We confirmed that in general, white rhinos tend to be close to surface water, rivers and previously burned areas with a preference for warm slopes. These relationships between drivers shift notably when modelling for individuals. We anticipate our dissertation to be a starting point for more sophisticated models of complex systems specifically investigating its use to model individual behaviour.
Dissertation (MEng)--University of Pretoria, 2020.
Industrial and Systems Engineering
MEng (Industrial Engineering)
Unrestricted
14

Aliverti, Emanuele. "Bayesian modelling of complex dependence structures". Doctoral thesis, Università degli Studi di Padova, 2020. http://hdl.handle.net/10278/3732472.

Full text
15

Aliverti, Emanuele. "Bayesian modelling of complex dependence structures". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3424720.

Full text
Abstract:
Complex dependence structures characterising modern data are routinely encountered in a large variety of research fields. Medicine, biology, psychology and social sciences are enriched by intricate architectures such as networks, tensors and more generally high-dimensional dependent data. Rich dependence structures stimulate challenging research questions and open wide methodological avenues in different areas of statistical research, providing an exciting atmosphere to develop innovative tools. A primary interest in statistical modelling of complex data is on adequately extracting information to conduct meaningful inference, providing reliable results in terms of uncertainty quantification and generalisability into future samples. These aims require ad-hoc statistical methodologies to appropriately characterize the dependence structures defining complex data as such, further improving the understanding of the mechanisms underlying the observed configurations. The focus of the thesis is on Bayesian modelling of complex dependence structures via latent variable constructs. This strategy characterises the dependence structure in an unobservable latent space, specifying the observed quantities as conditionally independent given a set of latent attributes, facilitating tractable posterior inference and an eloquent interpretation. The thesis is organized into three main parts, illustrating case studies from different fields of application and focused on studying modern challenges in neuroscience, psychology and criminal justice. Bayesian modelling of the complex data arising in these domains via latent features effectively provides valuable insights on different aspects of such structures, addressing the questions of interest and contributing to the scientific understanding.
16

Yu, Qingzhao. "Bayesian synthesis". Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155324080.

Full text
17

Frank, Stella Christina. "Bayesian models of syntactic category acquisition". Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/6693.

Full text
Abstract:
Discovering a word’s part of speech is an essential step in acquiring the grammar of a language. In this thesis we examine a variety of computational Bayesian models that use linguistic input available to children, in the form of transcribed child directed speech, to learn part of speech categories. Part of speech categories are characterised by contextual (distributional/syntactic) and word-internal (morphological) similarity. In this thesis, we assume language learners will be aware of these types of cues, and investigate exactly how they can make use of them. Firstly, we enrich the context of a standard model (the Bayesian Hidden Markov Model) by adding sentence type to the wider distributional context. We show that children are exposed to a much more diverse set of sentence types than evident in standard corpora used for NLP tasks, and previous work suggests that they are aware of the differences between sentence type as signalled by prosody and pragmatics. Sentence type affects local context distributions, and as such can be informative when relying on local context for categorisation. Adding sentence types to the model improves performance, depending on how it is integrated into our models. We discuss how to incorporate novel features into the model structure we use in a flexible manner, and present a second model type that learns to use sentence type as a distinguishing cue only when it is informative. Secondly, we add a model of morphological segmentation to the part of speech categorisation model, in order to model joint learning of syntactic categories and morphology. These two tasks are closely linked: categorising words into syntactic categories is aided by morphological information, and finding morphological patterns in words is aided by knowing the syntactic categories of those words. In our joint model, we find improved performance vis-a-vis single-task baselines, but the nature of the improvement depends on the morphological typology of the language being modelled. This is the first token-based joint model of unsupervised morphology and part of speech category learning of which we are aware.
18

Houlsby, Neil. "Efficient Bayesian active learning and matrix modelling". Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/248885.

Full text
Abstract:
With the advent of the Internet and growth of storage capabilities, large collections of unlabelled data are now available. However, collecting supervised labels can be costly. Active learning addresses this by selecting, sequentially, only the most useful data in light of the information collected so far. The online nature of such algorithms often necessitates efficient computations. Thus, we present a framework for information theoretic Bayesian active learning, named Bayesian Active Learning by Disagreement, that permits efficient and accurate computations of data utility. Using this framework we develop new techniques for active Gaussian process modelling and adaptive quantum tomography. The latter has been shown, in both simulation and laboratory experiments, to yield faster learning rates than any non-adaptive design. Numerous datasets can be represented as matrices. Bayesian models of matrices are becoming increasingly popular because they can handle noisy or missing elements, and are extensible to different data-types. However, efficient inference is crucial to allow these flexible probabilistic models to scale to large real-world datasets. Binary matrices are a ubiquitous datatype, so we present a stochastic inference algorithm for fast learning in this domain. Preference judgements are a common, implicit source of binary data. We present a hybrid matrix factorization/Gaussian process model for collaborative learning from multiple users' preferences. This model exploits both the structure of the matrix and can incorporate additional covariate information to make accurate predictions. We then combine matrix modelling with active learning and propose a new algorithm for cold-start learning with ordinal data, such as ratings. This algorithm couples Bayesian Active Learning by Disagreement with a heteroscedastic model to handle varying levels of noise. This ordinal matrix model is also used to analyze psychometric questionnaires; we analyze classical assumptions made in psychometrics and show that active learning methods can reduce questionnaire lengths substantially.
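To make the Bayesian Active Learning by Disagreement criterion mentioned in this abstract concrete, the rough sketch below (NumPy assumed; the function name, binary-classification setting and array shapes are illustrative assumptions) scores each candidate point by the mutual information between its unknown label and the model parameters, estimated from Monte Carlo samples of the predictive probability.

```python
import numpy as np

def bald_scores(prob_samples):
    """prob_samples: array of shape (S, N) holding S posterior samples of
    p(y = 1 | x_n, theta_s) for N candidate points.
    Returns the mutual-information (BALD) acquisition score per point, in nats."""
    eps = 1e-12
    mean_p = prob_samples.mean(axis=0)
    # Entropy of the marginal predictive distribution, H[y | x, D]
    marginal_entropy = -(mean_p * np.log(mean_p + eps)
                         + (1.0 - mean_p) * np.log(1.0 - mean_p + eps))
    # Expected entropy under the posterior, E_theta H[y | x, theta]
    conditional_entropy = -(prob_samples * np.log(prob_samples + eps)
                            + (1.0 - prob_samples) * np.log(1.0 - prob_samples + eps)).mean(axis=0)
    return marginal_entropy - conditional_entropy

# Example: two posterior samples, three candidate points; query the highest scorer.
samples = np.array([[0.1, 0.5, 0.9],
                    [0.9, 0.5, 0.8]])
print(int(np.argmax(bald_scores(samples))))   # point 0: individually confident, mutually conflicting
```

The highest-scoring point is the one the posterior samples disagree about most while each sample remains individually confident, which is the "disagreement" the method's name refers to.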
19

Vieira, Rute Gomes Velosa. "Bayesian phylogenetic modelling of lateral gene transfers". Thesis, University of Newcastle upon Tyne, 2015. http://hdl.handle.net/10443/3018.

Full text
Abstract:
Phylogenetic trees represent the evolutionary relationships between a set of species. Inferring these trees from data is sometimes particularly challenging since the transfer of genetic material can occur not only from parents to their offspring but also between organisms via lateral gene transfers (LGTs). Thus, the presence of LGTs means that genes in a genome can each have different evolutionary histories, represented by different gene trees. A few statistical approaches have been introduced to explore non-vertical evolution through collections of Markov-dependent gene trees. In 2005 Suchard described a Bayesian hierarchical model for joint inference of gene trees and an underlying species tree, where a layer in the model linked gene trees to the species tree via a sequence of unknown lateral gene transfers. In his model LGT was modeled via a random walk in the tree space derived from the subtree prune and regraft (SPR) operator on unrooted trees. However, the use of SPR moves to represent LGT in an unrooted tree is problematic, since the transference of DNA between two organisms implies the contemporaneity of both organisms and therefore it can allow unrealistic LGTs. This thesis describes a related hierarchical Bayesian phylogenetic model for reconstructing phylogenetic trees which imposes a temporal constraint on LGTs, namely that they can only occur between species which exist concurrently. This is achieved by taking into account possible time orderings of divergence events in trees, without explicitly modelling divergence times. An extended version of the SPR operator is introduced as a more adequate mechanism to represent the LGT effect in a tree. The extended SPR operation respects the time ordering. It additionally differs from regular SPR as it maintains a 1-to-1 correspondence between points on the species tree and points on each gene tree. Each point on a gene tree represents the existence of a population containing that gene at some point in time. Hierarchical phylogenetic models were used in the reconstruction of each gene tree from its corresponding gene alignment, enabling the pooling of information across genes. In addition to Suchard's approach, we assume variation in the rate of evolution between different sites. The species tree is assumed to be fixed. A Markov Chain Monte Carlo (MCMC) algorithm was developed to fit the model in a Bayesian framework. A novel MCMC proposal mechanism for jointly proposing the gene tree topology and branch lengths, LGT distance and LGT history has been developed as well as a novel graphical tool to represent LGT history, the LGT Biplot. Our model was applied to simulated and experimental datasets. More specifically we analysed LGT/reassortment presence in the evolution of 2009 Swine-Origin Influenza Type A virus. Future improvements of our model and algorithm should include joint inference of the species tree, improving the computational efficiency of the MCMC algorithm and better consideration of other factors that can cause discordance of gene trees and species trees such as gene loss.
20

Cowan, Alexandra. "Modelling trader intentions through evolving Bayesian networks". Thesis, Queen's University Belfast, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.725743.

Full text
Abstract:
This research highlights the problem of trade-based market manipulation in financial markets, where an individual or party aims to distort the pricing mechanism and gain profit at the expense of law-abiding investors. This research evaluates data mining approaches applied to financial market surveillance and addresses a current deficit in the literature with regard to modelling traders at an entity level. A system is proposed, named the Evolving Bayesian Network (EBN), to model an individual trader's behaviour using transaction order data generated by the participant. The aim of the model is to infer the individual's intentions as order sequences are generated throughout a trading day, to then detect when manipulation attempts are being made for personal gain.
21

Nightingale, Glenna Faith. "Bayesian point process modelling of ecological communities". Thesis, University of St Andrews, 2013. http://hdl.handle.net/10023/3710.

Full text
Abstract:
The modelling of biological communities is important to further the understanding of species coexistence and the mechanisms involved in maintaining biodiversity. This involves considering not only interactions between individual biological organisms, but also the incorporation of covariate information, if available, in the modelling process. This thesis explores the use of point processes to model interactions in bivariate point patterns within a Bayesian framework, and, where applicable, in conjunction with covariate data. Specifically, we distinguish between symmetric and asymmetric species interactions and model these using appropriate point processes. In this thesis we consider both pairwise and area interaction point processes to allow for inhibitory interactions and both inhibitory and attractive interactions. It is envisaged that the analyses and innovations presented in this thesis will contribute to the parsimonious modelling of biological communities.
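For reference, the two interaction models named here have standard unnormalised densities of the following generic form (the thesis's exact specifications and priors may of course differ):

```latex
f_{\mathrm{pairwise}}(x) \;\propto\; \beta^{\,n(x)}\,\gamma^{\,s_R(x)}, \quad 0 < \gamma \le 1,
\qquad\qquad
f_{\mathrm{area}}(x) \;\propto\; \beta^{\,n(x)}\,\gamma^{-\,|U_R(x)|},
```

where n(x) is the number of points in the pattern x, s_R(x) the number of point pairs closer than R, and |U_R(x)| the area of the union of discs of radius R centred at the points. The pairwise (Strauss-type) model is purely inhibitory, while the area-interaction model allows both inhibition (γ < 1) and attraction (γ > 1), matching the distinction drawn in the abstract.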
22

Hearty, Peter Stewart. "Modelling Agile software processes using Bayesian networks". Thesis, Queen Mary, University of London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.509669.

Full text
23

Ford, Oliver P. "Tokamak Plasma Analysis through Bayesian Diagnostic Modelling". Thesis, Imperial College London, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.526369.

Full text
24

Andrade, José Ailton Alencar. "Bayesian robustness modelling using regularly varying distributions". Thesis, University of Sheffield, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419594.

Full text
25

Aguilar, Delil Gomez Portugal. "Bayesian modelling of the radiocarbon calibration curve". Thesis, University of Sheffield, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369960.

Full text
26

Vermaak, Jaco. "Bayesian modelling and enhancement of speech signals". Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621822.

Full text
27

Vlasakakis, Georgios. "Application of Bayesian statistics to physiological modelling". Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610198.

Full text
28

Groves, Adrian R. "Bayesian learning methods for modelling functional MRI". Thesis, University of Oxford, 2009. http://ora.ox.ac.uk/objects/uuid:fe46e696-a1a6-4a9d-9dfe-861b05b1ed33.

Full text
Abstract:
Bayesian learning methods are the basis of many powerful analysis techniques in neuroimaging, permitting probabilistic inference on hierarchical, generative models of data. This thesis primarily develops Bayesian analysis techniques for magnetic resonance imaging (MRI), which is a noninvasive neuroimaging tool for probing function, perfusion, and structure in the human brain. The first part of this work fits nonlinear biophysical models to multimodal functional MRI data within a variational Bayes framework. Simultaneously-acquired multimodal data contains mixtures of different signals and therefore may have common noise sources, and a method for automatically modelling this correlation is developed. A Gaussian process prior is also used to allow spatial regularization while simultaneously applying informative priors on model parameters, restricting biophysically-interpretable parameters to reasonable values. The second part introduces a novel data fusion framework for multivariate data analysis which finds a joint decomposition of data across several modalities using a shared loading matrix. Each modality has its own generative model, including separate spatial maps, noise models and sparsity priors. This flexible approach can perform supervised learning by using target variables as a modality. By inferring the data decomposition and multivariate decoding simultaneously, the decoding targets indirectly influence the component shapes and help to preserve useful components. The same framework is used for unsupervised learning by placing independent component analysis (ICA) priors on the spatial maps. Linked ICA is a novel approach developed to jointly decompose multimodal data, and is applied to combined structural and diffusion images across groups of subjects. This allows some of the benefits of tensor ICA and spatially-concatenated ICA to be combined, and allows model comparison between different configurations. This joint decomposition framework is particularly flexible because of its separate generative models for each modality and could potentially improve modelling of functional MRI, magnetoencephalography, and other functional neuroimaging modalities.
29

Chen, Younan. "Bayesian hierarchical modelling of dual response surfaces". Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/29924.

Full text
Abstract:
Dual response surface methodology (Vining and Myers, 1990) has been successfully used as a cost-effective approach to improve the quality of products and processes since Taguchi (Taguchi, 1985) introduced the idea of robust parameter design for quality improvement in the United States in the mid-1980s. The original procedure is to use the mean and the standard deviation of the characteristic to form a dual response system in linear model structure, and to estimate the model coefficients using least squares methods. In this dissertation, a Bayesian hierarchical approach is proposed to model the dual response system so that the inherent hierarchical variance structure of the response can be modeled naturally. The Bayesian model is developed for both univariate and multivariate dual response surfaces, and for both fully replicated and partially replicated dual response surface designs. To evaluate its performance, the Bayesian method has been compared with the original method under a wide range of scenarios, and it shows higher efficiency and more robustness. In applications, the Bayesian approach retains all the advantages provided by the original dual response surface modelling method. Moreover, the Bayesian analysis allows inference on the uncertainty of the model parameters, and thus can give practitioners complete information on the distribution of the characteristic of interest.
Ph. D.
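As a sketch of the dual response system being referred to, a common formulation fits two linked surfaces to the replicate mean and spread; the second-order mean surface and the log link on the standard deviation below are illustrative choices, not necessarily the parameterisation used in the dissertation.

```latex
y_{ij} \mid \mathbf{x}_i \sim \mathrm{N}\!\big(\mu(\mathbf{x}_i),\, \sigma^2(\mathbf{x}_i)\big), \qquad
\mu(\mathbf{x}) = \beta_0 + \mathbf{x}^{\top}\boldsymbol{\beta} + \mathbf{x}^{\top}\mathbf{B}\,\mathbf{x}, \qquad
\log \sigma(\mathbf{x}) = \gamma_0 + \mathbf{x}^{\top}\boldsymbol{\gamma}.
```

The hierarchical Bayesian approach places priors on (β, B, γ) and lets the replicate-level variance enter the hierarchy directly, instead of fitting the mean and standard-deviation surfaces separately by least squares.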
30

Sairam, Nivedita. "Bayesian Approaches for Modelling Flood Damage Processes". Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/23083.

Full text
Abstract:
Flood damage processes are influenced by the three components of flood risk - hazard, exposure and vulnerability. In comparison to hazard and exposure, the vulnerability component, though equally important is often generalized in many flood risk assessments by a simple depth-damage curve. Hence, this thesis developed a robust statistical method to quantify the role of private precaution in reducing flood vulnerability of households. In Germany, the role of private precaution was found to be very significant in reducing flood damage (11 - 15 thousand euros, per household). Also, flood loss models with structure, parameterization and choice of explanatory variables based on expert knowledge and data-driven methods were successful in capturing changes in vulnerability, which makes them suitable for future risk assessments. Due to significant uncertainty in the underlying data and model assumptions, flood loss models always carry uncertainty around their predictions. This thesis develops Bayesian approaches for flood loss modelling using assumptions regarding damage processes as priors and available empirical data as evidence for updating. Thus, these models provide flood loss predictions as a distribution, that potentially accounts for variability in damage processes and uncertainty in model assumptions. The models presented in this thesis are an improvement over the state-of-the-art flood loss models in terms of prediction capability and model applicability. In particular, the choice of the response (Beta) distribution improved the reliability of loss predictions compared to the popular Gaussian or non-parametric distributions; the Hierarchical Bayesian approach resulted in an improved parameterization of the common stage damage functions that replaces empirical data requirements with region and event-specific expert knowledge, thereby, enhancing its predictive capabilities during spatiotemporal transfer.
31

Tompkins, Anthony. "Bayesian Spatio-Temporal Modelling with Fourier Features". Thesis, The University of Sydney, 2018. https://hdl.handle.net/2123/21328.

Full text
Abstract:
One of the most powerful machine learning techniques is Gaussian Processes (GPs), which incur an O(N^3) complexity in the number of data samples. In regression and classification there exist approximation methods which typically rely on M inducing points but still typically incur an O(NM^2) complexity in the data and corresponding inducing points, which have reduced expressiveness the larger the dataset becomes. These methods are typically unable to learn if the number of datapoints becomes computationally intractable. It is this limitation of traditional methods that invites us to explore alternative representations of kernels to enable scalable modelling and inference for spatio-temporal phenomena. The key insight we leverage is providing feature-space representations of kernels which have computational dependence independent of data samples. While such representations exist in various forms, they typically address kernels of infinite support and have not been investigated extensively for modelling periodicity or data supported on bounded intervals. Our approach leverages methods in harmonic analysis to provide an alternative form of representing kernels using Fourier series, which we demonstrate to have superior performance to alternative feature representations. Our methodology further develops compositional kernels and shows it is straightforward to integrate our Fourier series features with standard kernels. With compositions of kernels we are able to represent nuances in the data that canonical kernels typically cannot represent. This thesis brings the following contributions: 1) a new formulation of representing univariate periodic kernels using Fourier series that allows one to perform scalable inference with a large number of samples; 2) a generalisation of univariate periodic kernels into the multivariate domain which allows tractable higher-dimensional inference; 3) an efficient method for the tricky problem of seeding and learning periodic hyperparameters; 4) a generalised framework that allows one to perform compositional kernel learning in a Bayesian framework for spatio-temporal phenomena.
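As a toy illustration of the feature-space idea described in this abstract, the sketch below maps scalar inputs to a truncated Fourier-series feature vector so that a periodic kernel evaluation reduces to a dot product whose cost does not depend on the number of training points; NumPy is assumed, and the harmonic weighting is an arbitrary illustrative choice rather than the construction developed in the thesis.

```python
import numpy as np

def fourier_series_features(x, n_harmonics=10, period=1.0):
    """Map scalar inputs to truncated Fourier-series features; a periodic kernel
    is then approximated by a plain inner product of feature vectors."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)          # shape (N, 1)
    k = np.arange(1, n_harmonics + 1)                      # harmonics 1..K
    scale = 1.0 / k                                        # decaying weights -> a smooth kernel
    angles = 2.0 * np.pi * k * x / period                  # shape (N, K)
    return np.hstack([scale * np.cos(angles), scale * np.sin(angles)])   # shape (N, 2K)

# The implied kernel value between two inputs is just the inner product of their features.
phi = fourier_series_features([0.10, 0.35])
print(float(phi[0] @ phi[1]))
```

Because cos a cos b + sin a sin b = cos(a − b), this particular feature map corresponds to the stationary periodic kernel k(x, x') = Σ_k k^{-2} cos(2πk(x − x')/period).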
32

Baker, Jannah F. "Bayesian spatiotemporal modelling of chronic disease outcomes". Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/104455/1/Jannah_Baker_Thesis.pdf.

Full text
Abstract:
This thesis contributes to Bayesian spatial and spatiotemporal methodology by investigating techniques for spatial imputation and joint disease modelling, and identifies high-risk individual profiles and geographic areas for type II diabetes mellitus (DMII) outcomes. DMII and related chronic conditions including hypertension, coronary arterial disease, congestive heart failure and chronic obstructive pulmonary disease are examples of ambulatory care sensitive conditions for which hospitalisation for complications is potentially avoidable with quality primary care. Bayesian spatial and spatiotemporal studies are useful for identifying small areas that would benefit from additional services to detect and manage these conditions early, thus avoiding costly sequelae.
33

Corradin, Riccardo. "Contributions to modelling via Bayesian nonparametric mixtures". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2019. http://hdl.handle.net/10281/241261.

Full text
Abstract:
Bayesian nonparametric mixtures are flexible models for density estimation and clustering, nowadays a standard tool in the toolbox of applied statisticians. The first proposal of such models was the Dirichlet process (DP) (Ferguson, 1973) mixture of Gaussian kernels by Lo (1984), a contribution which paved the way to the definition of a wide variety of nonparametric mixture models. In recent years, increasing interest has been dedicated to the definition of mixture models based on nonparametric mixing measures that go beyond the DP. Among these measures, the Pitman-Yor process (PY) (Perman et al., 1992; Pitman, 1995) and, more in general, the class of Gibbs-type priors (see e.g. De Blasi et al., 2015) stand out for conveniently combining mathematical tractability, interpretability and modelling flexibility. In this thesis we investigate three aspects of nonparametric mixture models, which, in turn, concern their modelling, computational and distributional properties. The thesis is organized as follows. The first chapter proposes a concise review of the area of Bayesian nonparametric statistics, with focus on tools and models that will be considered in the following chapters. We first introduce the notions of exchangeability, exchangeable partitions and discrete random probability measures. We then focus on the DP and the PY case, main ingredients of the second and third chapters, respectively. Finally, we briefly discuss the rationale behind the definition of more general classes of discrete nonparametric priors. In the second chapter we propose a thorough study of the effect of invertible affine transformations of the data on the posterior distribution of DP mixture models, with particular attention to DP mixtures of Gaussian kernels (DPM-G). First, we provide an explicit result relating model parameters and transformations of the data. Second, we formalize the notion of asymptotic robustness of a model under affine transformations of the data and prove an asymptotic result which, by relying on the asymptotic consistency of DPM-G models, shows that, under mild assumptions on the data-generating distribution, DPM-G models are asymptotically robust. The third chapter presents the Importance Conditional Sampler (ICS), a novel conditional sampling scheme for PY mixture models, based on a useful representation of the posterior distribution of a PY (Pitman, 1996) and on an importance sampling idea, similar in spirit to the augmentation step of the celebrated Algorithm 8 of Neal (2000). The proposed method conveniently combines the best features of state-of-the-art conditional and marginal methods for PY mixture models. Importantly, and unlike its most popular conditional competitors, the numerical efficiency of the ICS is robust to the specification of the parameters of the PY. The steps for implementing the ICS are described in detail and its performance is compared with that of popular competing algorithms. Finally, the ICS is used as a building block for devising a new efficient algorithm for the class of GM-dependent DP mixture models (Lijoi et al., 2014a; Lijoi et al., 2014b), for partially exchangeable data. In the fourth chapter we study some distributional properties of Gibbs-type priors. The main result focuses on an exchangeable sample from a Gibbs-type prior and provides a conveniently simple description of the distribution of the size of the cluster the (m + 1)th observation is assigned to, given an unobserved sample of size m.
The study of this distribution provides the tools for a simple, yet useful, strategy for prior elicitation of the parameters of a Gibbs-type prior, in the context of Gibbs-type mixture models. The results in the last three chapters are supported by exhaustive simulation studies and illustrated by analysing astronomical datasets.
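For context, the predictive (urn) scheme of the Pitman-Yor process with discount σ ∈ [0, 1) and strength θ > −σ, of which the Gibbs-type prediction rules studied in the final chapter are a generalisation, is the standard rule:

```latex
\Pr(\text{new cluster} \mid \text{sample of size } n \text{ with } k \text{ clusters}) = \frac{\theta + k\sigma}{\theta + n},
\qquad
\Pr(\text{join cluster } j) = \frac{n_j - \sigma}{\theta + n},
```

where n_j is the current size of cluster j; setting σ = 0 recovers the Dirichlet process (Chinese restaurant) rule. This is a standard result quoted here for orientation, not a statement of the thesis's new contributions.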
34

Southey, Richard. "Bayesian hierarchical modelling with application in spatial epidemiology". Thesis, Rhodes University, 2018. http://hdl.handle.net/10962/59489.

Full text
Abstract:
Disease mapping and spatial statistics have become an important part of modern day statistics and have increased in popularity as the methods and techniques have evolved. The application of disease mapping is not only confined to the analysis of diseases as other applications of disease mapping can be found in Econometric and financial disciplines. This thesis will consider two data sets. These are the Georgia oral cancer 2004 data set and the South African acute pericarditis 2014 data set. The Georgia data set will be used to assess the hyperprior sensitivity of the precision for the uncorrelated heterogeneity and correlated heterogeneity components in a convolution model. The correlated heterogeneity will be modelled by a conditional autoregressive prior distribution and the uncorrelated heterogeneity will be modelled with a zero mean Gaussian prior distribution. The sensitivity analysis will be performed using three models with conjugate, Jeffreys' and a fixed parameter prior for the hyperprior distribution of the precision for the uncorrelated heterogeneity component. A simulation study will be done to compare four prior distributions which will be the conjugate, Jeffreys', probability matching and divergence priors. The three models will be fitted in WinBUGS® using a Bayesian approach. The results of the three models will be in the form of disease maps, figures and tables. The results show that the hyperprior of the precision for the uncorrelated heterogeneity and correlated heterogeneity components are sensitive to changes and will result in different results depending on the specification of the hyperprior distribution of the precision for the two components in the model. The South African data set will be used to examine whether there is a difference between the proper conditional autoregressive prior and intrinsic conditional autoregressive prior for the correlated heterogeneity component in a convolution model. Two models will be fitted in WinBUGS® for this comparison. Both the hyperpriors of the precision for the uncorrelated heterogeneity and correlated heterogeneity components will be modelled using a Jeffreys' prior distribution. The results show that there is no significant difference between the results of the model with a proper conditional autoregressive prior and intrinsic conditional autoregressive prior for the South African data, although there are a few disadvantages of using a proper conditional autoregressive prior for the correlated heterogeneity which will be stated in the conclusion.
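For readers less familiar with the convolution model used in this thesis, a generic Besag-York-Mollié-style formulation (a sketch; the exact likelihood, priors and hyperpriors on the precisions are precisely what the thesis varies) is:

```latex
O_i \mid \lambda_i \sim \mathrm{Poisson}(E_i\,\lambda_i), \qquad
\log \lambda_i = \beta_0 + u_i + v_i, \qquad
v_i \sim \mathrm{N}(0, \tau_v^{-1}), \qquad
u_i \mid u_{-i} \sim \mathrm{N}\!\Big(\tfrac{1}{n_i}\textstyle\sum_{j \sim i} u_j,\; \tfrac{1}{n_i\,\tau_u}\Big),
```

where O_i and E_i are the observed and expected counts in area i, v_i is the uncorrelated heterogeneity term, u_i the correlated term with an (intrinsic or proper) conditional autoregressive prior, j ~ i runs over the n_i neighbours of area i, and τ_u, τ_v are the precisions whose hyperprior specification (conjugate, Jeffreys', fixed) the thesis compares.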
35

Haasan, Masoud. "Tree-ring growth modelling applied to Bayesian dendrochronology". Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/15746/.

Full text
Abstract:
Classical dendrochronology involves using standard statistical methods, such as correlation coefficients and t-values to crossmatch undated tree-ring width sequences to dated 'master' chronologies. This crossmatching process aims to identify the 'best' offset between the dated and undated sequences with a view to providing a calendar date estimate for the undated trees. Motivated by the successful and routine use of Bayesian statistical methods to provide a fully probabilistic approach to radiocarbon dating, this thesis investigates the practicality of using a process-based forward model known as 'VSLite' at the core of Bayesian dendrochronology. The mechanistic VSLite model has the potential to capture key characteristics of the complex system that links climate to tree-ring formation. It allows simulated, dated tree-ring chronologies to be generated at any geographical location where historical climate records exist. Embedding VSLite within a Bayesian approach to tree-ring dating allows combination of both ring-width data and any available prior information. Additionally, instead of identifying the `best' calendar date estimate, the Bayesian approach allows provision of probabilistic statements about a collection of possible dates, each with a specific (posterior) probability. The impact of uncertainty in the VSLite input parameters on the model output has been systematically investigated in this thesis, and the VSLite-based approach to Bayesian tree-ring dating has been explored using both simulated and real data. Results of implementing the new VSLite-based approach are compared with those using current classical and Bayesian approaches. An option for reducing the need for preprocessing data is also investigated via a data-adaptive rescaling approach. Having established the effectiveness of using the mechanistic forward model as the core for Bayesian dendrochronology, the practicality of adopting it to aid in dating in the absence of suitable local master chronologies is also explored.
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Leonte, Daniela School of Mathematics UNSW. "Flexible Bayesian modelling of gamma ray count data". Awarded by:University of New South Wales. School of Mathematics, 2003. http://handle.unsw.edu.au/1959.4/19147.

Testo completo
Abstract (sommario):
Bayesian approaches to prediction and the assessment of predictive uncertainty in generalized linear models are often based on averaging predictions over different models, and this requires methods for accounting for model uncertainty. In this thesis we describe computational methods for Bayesian inference and model selection for generalized linear models, which improve on existing techniques. These methods are applied to the building of flexible models for gamma ray count data (data measuring the natural radioactivity of rocks) at the Castlereagh Waste Management Centre, which served as a hazardous waste disposal facility for the Sydney region between March 1978 and August 1998. Bayesian model selection methods for generalized linear models enable us to approach problems of smoothing, change point detection and spatial prediction for these data within a common methodological and computational framework, by considering appropriate basis expansions of a mean function. The data at Castlereagh were collected in the following way. A number of boreholes were drilled at the site, and for each borehole a gamma ray detector recorded gamma ray emissions at different depths as the detector was raised gradually from the bottom of the borehole to ground level. The profile of intensity of gamma counts can be informative about the geology at each location, and estimation of intensity profiles raises problems of smoothing and change point detection for count data. The gamma count profiles can also be modelled spatially, to inform the geological profile across the site. Understanding the geological structure of the site is important for modelling the transport of chemical contaminants beneath the waste disposal area. The structure of the thesis is as follows. Chapter 1 describes the Castlereagh hazardous waste site and the geophysical data, which motivated the methodology developed in this research. We summarise the principles of Gamma Ray (GR) logging, a method routinely employed by geophysicists and environmental engineers in the detailed evaluation of hazardous site geology, and detail the use of the Castlereagh data in this research. In Chapter 2 we review some fundamental ideas of Bayesian inference and computation and discuss them in the context of generalised linear models. Chapter 3 details the theoretical basis of our work. Here we give a new Markov chain Monte Carlo sampling scheme for Bayesian variable selection in generalized linear models, which is analogous to the well-known Swendsen-Wang algorithm for the Ising model. Special cases of this sampling scheme are used throughout the rest of the thesis. In Chapter 4 we discuss the use of methods for Bayesian model selection in generalized linear models in two specific applications, which we implement on the Castlereagh data. First, we consider smoothing problems where we flexibly estimate the dependence of a response variable on one or more predictors, and we apply these ideas to locally adaptive smoothing of gamma ray count data. Second, we discuss how the problem of multiple change point detection can be cast as one of model selection in a generalized linear model, and consider application to change point detection for gamma ray count data. 
In Chapter 5 we consider spatial models based on partitioning a spatial region of interest into cells via a Voronoi tessellation, where the number of cells and the positions of their centres are unknown, and show how these models can be formulated in the framework of established methods for Bayesian model selection in generalized linear models. We apply the spatial partition modelling approach to the spatial analysis of gamma ray data, showing how the posterior distribution of the number of cells, cell centres and cell means provides us with an estimate of the mean response function describing spatial variability across the site. Chapter 6 presents some conclusions and suggests directions for future research. A paper based on the work of Chapter 3 has been accepted for publication in the Journal of Computational and Graphical Statistics, and a paper based on the work in Chapter 4 has been accepted for publication in Mathematical Geology. A paper based on the spatial modelling of Chapter 5 is in preparation and will be submitted for publication shortly. The work in this thesis was collaborative, to a greater or lesser extent in its various components. I authored Chapters 1 and 2 entirely, including definition of the problem in the context of the CWMC site, data gathering and preparation for analysis, and review of the literature on computational methods for Bayesian inference and model selection for generalized linear models. I also authored Chapters 4 and 5 and benefited from some of Dr Nott's assistance in developing the algorithms. In Chapter 3, Dr Nott led the development of sampling scheme B (corresponding to having non-zero interaction parameters in our Swendsen-Wang type algorithm). I developed the algorithm for sampling scheme A (corresponding to setting all algorithm interaction parameters to zero in our Swendsen-Wang type algorithm), and compared the performance of the two sampling schemes. The final discussion in Chapter 6 and the direction for further research in the case study context are also my work.
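To illustrate, in miniature, the idea of casting change point detection for count data as model selection (Chapter 4 above), the sketch below scores single change point Poisson models by BIC on synthetic counts. It is a deliberately simplified stand-in for, not an implementation of, the Swendsen-Wang type variable selection sampler developed in the thesis.

import numpy as np

rng = np.random.default_rng(1)
# Synthetic counts with a single change in intensity at index 60.
counts = np.concatenate([rng.poisson(12, 60), rng.poisson(25, 40)])
n = len(counts)

def poisson_loglik(y, rate):
    """Poisson log likelihood up to the log(y!) term, which is the same for
    every candidate change point and can therefore be dropped."""
    rate = max(float(rate), 1e-12)
    return float(np.sum(y) * np.log(rate) - len(y) * rate)

def bic_single_change(tau):
    """BIC of a model with one constant rate before tau and another after it."""
    ll = poisson_loglik(counts[:tau], counts[:tau].mean()) + \
         poisson_loglik(counts[tau:], counts[tau:].mean())
    return -2.0 * ll + 2.0 * np.log(n)   # two free rate parameters

bics = {tau: bic_single_change(tau) for tau in range(5, n - 5)}
best = min(bics, key=bics.get)
print("estimated change point:", best)   # should be close to 60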
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Thomson, Noel. "Bayesian mixture modelling of migration by founder analysis". Thesis, University of Glasgow, 2010. http://theses.gla.ac.uk/1468/.

Testo completo
Abstract (sommario):
In this thesis a new method is proposed to estimate major periods of migration from one region into another using phased, non-recombined sequence data from the present. The assumption is made that migration occurs in multiple waves and that during each migration period, a number of sequences, called 'founder sequences', migrate into the new region. It is first shown through appropriate simulations based on the structured coalescent that previous inferences based on the idea of founder sequences suffer from the fundamental problem of assuming that migration events coincide with the nodes (coalescent events) of the reconstructed tree. It is shown that such an assumption leads to contradictions with the assumed underlying migration process, and that inferences based on such a method have the potential for bias in the date estimates obtained. An improved method is proposed which involves 'connected star trees', a tree structure that allows the uncertainty in the time of the migration event to be modelled in a probabilistic manner. Useful theoretical results under this assumption are derived. To model the uncertainty of which founder sequence belongs to which migration period, a Bayesian mixture modelling approach is taken, with inference carried out by Markov chain Monte Carlo techniques. Using the developed model, a reanalysis of a dataset that pertains to the settlement of Europe is undertaken. It is shown that sensible inferences can be made under certain conditions using the new model. However, it is also shown that questions of major interest cannot be answered, and certain inferences cannot be made, due to an inherent lack of information in any dataset composed of sequences from the present day. It is argued that many of the major questions of interest regarding the migration of modern day humans into Europe cannot be answered without strong prior assumptions being made by the investigator. It is further argued that the same reasons that prohibit certain inferences from being made under the proposed model would remain in any method which has similar assumptions.
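The mixture machinery behind this kind of analysis can be sketched with a deliberately simple example: a two-period Gaussian mixture Gibbs sampler that allocates illustrative founder lineage ages to migration periods. The connected-star-tree likelihood and the handling of label switching are omitted, and all data below are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)
# Illustrative founder lineage ages (in thousands of years) from two periods.
ages = np.concatenate([rng.normal(45, 3, 25), rng.normal(12, 2, 35)])

K, iters = 2, 2000
mu = np.array([10.0, 40.0])     # initial period means
sigma2 = 9.0                    # within-period variance, assumed known here
w = np.full(K, 1.0 / K)

for _ in range(iters):
    # 1. Allocate each founder age to a migration period.
    logp = np.log(w) - 0.5 * (ages[:, None] - mu) ** 2 / sigma2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=row) for row in p])
    # 2. Update period means (flat prior) and weights (Dirichlet(1, 1) prior).
    for k in range(K):
        members = ages[z == k]
        if members.size:
            mu[k] = rng.normal(members.mean(), np.sqrt(sigma2 / members.size))
    w = rng.dirichlet(1 + np.bincount(z, minlength=K))

print("one posterior draw of the period means (ky):", np.round(np.sort(mu), 1))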
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Shahtahmassebi, Golnaz. "Bayesian modelling of ultra high-frequency financial data". Thesis, University of Plymouth, 2011. http://hdl.handle.net/10026.1/894.

Testo completo
Abstract (sommario):
The availability of ultra high-frequency (UHF) data on transactions has revolutionised data processing and statistical modelling techniques in finance. The unique characteristics of such data, e.g. discrete structure of price change, unequally spaced time intervals and multiple transactions have introduced new theoretical and computational challenges. In this study, we develop a Bayesian framework for modelling integer-valued variables to capture the fundamental properties of price change. We propose the application of the zero inflated Poisson difference (ZPD) distribution for modelling UHF data and assess the effect of covariates on the behaviour of price change. For this purpose, we present two modelling schemes; the first one is based on the analysis of the data after the market closes for the day and is referred to as off-line data processing. In this case, the Bayesian interpretation and analysis are undertaken using Markov chain Monte Carlo methods. The second modelling scheme introduces the dynamic ZPD model which is implemented through Sequential Monte Carlo methods (also known as particle filters). This procedure enables us to update our inference from data as new transactions take place and is known as online data processing. We apply our models to a set of FTSE100 index changes. Based on the probability integral transform, modified for the case of integer-valued random variables, we show that our models are capable of explaining well the observed distribution of price change. We then apply the deviance information criterion and introduce its sequential version for the purpose of model comparison for off-line and online modelling, respectively. Moreover, in order to add more flexibility to the tails of the ZPD distribution, we introduce the zero inflated generalised Poisson difference distribution and outline its possible application for modelling UHF data.
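The zero-inflated Poisson difference distribution mentioned above is a Skellam distribution with extra probability mass at zero, and its pmf can be written down directly. The sketch below uses scipy's Skellam pmf; the parameter values (pi, lam1, lam2) are illustrative assumptions, not estimates from the FTSE100 data.

import numpy as np
from scipy.stats import skellam

def zpd_pmf(k, pi, lam1, lam2):
    """Zero-inflated Poisson difference pmf: with probability pi the price
    change is exactly zero, otherwise it follows a Skellam(lam1, lam2)."""
    base = skellam.pmf(k, lam1, lam2)
    return np.where(k == 0, pi + (1 - pi) * base, (1 - pi) * base)

ticks = np.arange(-3, 4)
probs = zpd_pmf(ticks, pi=0.4, lam1=0.8, lam2=0.8)   # illustrative parameters
for k, p in zip(ticks, probs):
    print(f"P(price change = {int(k):+d} ticks) = {p:.3f}")
print("probability mass on -3..3:", round(float(probs.sum()), 3))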
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Whitlock, Mark E. "A Bayesian approach to road traffic network modelling". Thesis, University of Kent, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311262.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
40

GOMES, GUILHERME JOSE CUNHA. "MODELLING THE SOIL-ROCK INTERFACE USING BAYESIAN INFERENCE". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28488@1.

Testo completo
Abstract (sommario):
The soil-bedrock interface is difficult to determine and remains essentially unknown in most Brazilian slopes. In this thesis, we present an analytic model for the spatial prediction of regolith depth built on the hypothesis of bottom-up control on fresh bedrock topography and on high-resolution topographic data. Most of the parameters of the model represent physical entities that can be measured directly in the laboratory or field. The model includes a term which simulates the loss of regolith due to stochastic mass movements and another term that mimics the bedrock-valley morphology. We reconcile our model with field observations from boreholes made with a light dynamic penetrometer at the Tijuca massif, Rio de Janeiro. We use Bayesian inference, with Markov chain Monte Carlo simulation to summarize the posterior distribution of the parameters, which yields the model parameters that best honor our field data as well as the stratigraphic predictive uncertainty. To test the results of the Bayesian inference in slope stability, we develop software that integrates unsaturated flow simulations, which provide the pressure head distributions, with a numerical limit analysis code, which generates the factor of safety (FS), both in three dimensions. We propagate the stratigraphic uncertainty through the developed program to quantify the FS variability and the probability of failure of a natural unsaturated hillslope in the study region. Finally, we emphasize the importance of bedrock topography in slope stability analysis.
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Loza, Reyes Elisa. "Classification of phylogenetic data via Bayesian mixture modelling". Thesis, University of Bath, 2010. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.519916.

Testo completo
Abstract (sommario):
Conventional probabilistic models for phylogenetic inference assume that an evolutionary tree, and a single set of branch lengths and stochastic process of DNA evolution, are sufficient to characterise the generating process across an entire DNA alignment. Unfortunately such a simplistic, homogeneous formulation may be a poor description of reality when the data arise from heterogeneous processes. A well-known example is when sites evolve at heterogeneous rates. This thesis is a contribution to the modelling and understanding of heterogeneity in phylogenetic data. We propose a method for the classification of DNA sites based on Bayesian mixture modelling. Our method not only accounts for heterogeneous data but also identifies the underlying classes and enables their interpretation. We also introduce novel MCMC methodology with the same, or greater, estimation performance than existing algorithms but with lower computational cost. We find that our mixture model can successfully detect evolutionary heterogeneity and demonstrate its direct relevance by applying it to real DNA data. One of these applications is the analysis of sixteen strains of one of the bacterial species that cause Lyme disease. Results from that analysis have helped understanding the evolutionary paths of these bacterial strains and, therefore, the dynamics of the spread of Lyme disease. Our method is discussed in the context of DNA but it may be extended to other types of molecular data. Moreover, the classification scheme that we propose is evidence of the breadth of application of mixture modelling and a step forwards in the search for more realistic models of the processes that underlie phylogenetic data.
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Consul, Juliana Iworikumo. "Flexible Bayesian modelling of covariate effects on survival". Thesis, University of Newcastle upon Tyne, 2016. http://hdl.handle.net/10443/3535.

Testo completo
Abstract (sommario):
Proportional hazards models are commonly used in survival analysis. Typically a baseline hazard function is combined with hazard multipliers which depend on covariate values through a logarithmic link function and a linear predictor. Models have been developed which allow flexibility in the form of the baseline hazard. However, the form of dependence of the hazard multipliers on covariates is usually specified. The aim of this research is to introduce flexibility into the form of the dependence of the hazard function on the covariates by removing the assumptions of parametric forms which are usually made. Given sufficient data, this will allow the model to adapt to the true form of the relationship and possibly uncover unexpected features. The Bayesian approach to inference is used. The choice of a suitable prior distribution allows a compromise which relaxes the assumption of a parametric form of relationship while imposing enough structure to exploit the information in finite data sets by specifying correlations in the prior distribution between log-hazards for neighbouring covariate profiles. The choice of prior distribution can therefore be important for obtaining useful posterior inferences. A generalised piecewise constant hazard model is introduced, in which quantitative covariates, as well as time, are categorised. Thus, the time and covariate space is divided into cells, within each of which the hazard is a constant. Two forms of prior distribution are considered, one based on a parametric model and the other using a Gaussian Markov random field. When the number of covariates is large, this approach leads to a very large number of cells, many of which might not represent any observed cases. Therefore, we consider an alternative approach in which a Gaussian process prior for the log-hazards over the covariate space is used. The posterior distribution is computed only at the observed covariate profiles. The methodology developed is applicable to a wide range of survival data and is illustrated by applications to two data sets referring to patients with non-Hodgkin's lymphoma and leukaemia respectively.
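To show the scaffolding that such a generalised piecewise constant hazard model sits on, the Python sketch below tabulates event counts and exposure per (time, covariate) cell and evaluates the corresponding piecewise exponential log-likelihood. The GMRF or Gaussian process prior over the log-hazards, which is the substance of the thesis, is not implemented here, and the data are invented.

import numpy as np

def cell_statistics(times, events, ages, time_edges, age_edges):
    """Event counts d[j, k] and total exposure E[j, k] for a hazard that is
    constant within each (time interval j, covariate band k) cell."""
    J, K = len(time_edges) - 1, len(age_edges) - 1
    d, E = np.zeros((J, K)), np.zeros((J, K))
    for t, ev, x in zip(times, events, ages):
        k = np.searchsorted(age_edges, x, side="right") - 1
        for j in range(J):
            lo, hi = time_edges[j], time_edges[j + 1]
            E[j, k] += np.clip(t, lo, hi) - lo          # time spent in this cell
            if ev and lo < t <= hi:
                d[j, k] += 1                            # event occurred in this cell
    return d, E

def log_likelihood(log_lam, d, E):
    """Piecewise exponential log likelihood: sum over cells of d*log(lam) - lam*E."""
    return float(np.sum(d * log_lam - np.exp(log_lam) * E))

# Invented survival times (years), event indicators and an age covariate.
times = np.array([1.2, 3.5, 0.8, 4.9, 2.1])
events = np.array([1, 0, 1, 1, 0])
ages = np.array([45, 60, 52, 70, 38])
d, E = cell_statistics(times, events, ages, time_edges=[0, 2, 5], age_edges=[30, 55, 80])
print("events per cell:\n", d)
print("exposure per cell:\n", E.round(2))
print("log likelihood at log-hazard = 0 in every cell:", round(log_likelihood(np.zeros_like(d), d, E), 2))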
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Li, Xuguang. "Modelling financial volatility using Bayesian and conventional methods". Thesis, Lancaster University, 2016. http://eprints.lancs.ac.uk/82685/.

Testo completo
Abstract (sommario):
This thesis investigates different volatility measures and models, including parametric and non-parametric volatility measurement. Both conventional and Bayesian methods are used to estimate volatility models. Chapter 1: We model and forecast intraday return volatility based on an extended stochastic volatility (SV) specification. Compared with the standard SV, we incorporate trading duration information, which includes both actual and expected durations. We use the Autoregressive Conditional Duration (ACD) model to calculate the expected duration, which can be used to measure the surprise in durations. We find that the effect of surprise in durations on intraday volatility is highly significant. If there is an unexpected increase in the lagged actual duration, the current volatility tends to decrease, and vice versa. We also take into account the duration and volatility intraday patterns. Our empirical results are based on the SPDR S&P 500 (SPY) and Microsoft Corporation (MSFT) data. According to the in-sample and out-of-sample empirical results, the extended SV model outperforms the GARCH model and the GARCH model augmented with duration information. Chapter 2: We examine contagion effects resulting from the US subprime crisis on a sample of EU countries (UK, Switzerland, Netherlands, Germany and France) using a Multivariate Stochastic Volatility (MSV) framework augmented with implied volatilities. The MSV framework is estimated using Bayesian techniques. We compare the MSV framework with the Multivariate GARCH (M-GARCH) framework and find the contagion effect is more significant under the MSV framework. Moreover, augmenting the MSV framework with implied volatilities further increases model fit. Compared with the original MSV framework, we find that the contagion effect becomes more significant when we incorporate implied volatilities. Therefore, implied volatility information is useful for detecting financial contagion, or for double-checking some cases of market interdependence (strong linkages but insignificant increase in correlations). Chapter 3: We extend the Heterogeneous AR (HAR) model to allow the autoregressive parameter of daily realized volatility (RV) to be time varying (TV-HAR). The daily lag weights are adjusted according to the fluctuations of RV around its longer-term average level (monthly RV). We compare the TV-HAR model with the HAR model and the recently introduced HARQ model. We observe a regular pattern of RV which the HAR and HARQ models do not fully capture: if there is an increase in the lagged daily RV compared with its longer-term average level (monthly RV), the current RV tends to decrease rapidly to its long-term level; conversely, if there is a decrease in the lagged daily RV compared with its longer-term average level (monthly RV), that reversion takes longer. The TV-HAR model can capture this RV pattern. We find that the TV-HAR model performs better than the benchmark HAR model and the HARQ model for both simulated and empirical data. Our empirical analysis is based on the S&P 500 equity index, the SPY index and ten series of stock data from 2000 to 2010.
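For reference, the baseline HAR regression that the TV-HAR model extends can be fitted in a few lines. The sketch below builds the daily, weekly and monthly regressors and runs ordinary least squares on a simulated volatility series; the time-varying daily coefficient that defines TV-HAR is not implemented, and all data are synthetic.

import numpy as np

def har_design(rv):
    """Regressors for the HAR model
    RV_t = b0 + b_d RV_{t-1} + b_w mean(RV_{t-5..t-1}) + b_m mean(RV_{t-22..t-1}) + e_t."""
    X, y = [], []
    for t in range(22, len(rv)):
        X.append([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
        y.append(rv[t])
    return np.array(X), np.array(y)

# Simulated persistent volatility series (log RV follows an AR(1) process).
rng = np.random.default_rng(3)
logrv = np.zeros(500)
for t in range(1, 500):
    logrv[t] = 0.97 * logrv[t - 1] + 0.2 * rng.standard_normal()
rv = np.exp(logrv)

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("OLS HAR coefficients [const, daily, weekly, monthly]:", beta.round(3))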
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Zhang, Yan. "Variational Bayesian data driven modelling for biomedical systems". Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/89458/.

Testo completo
Abstract (sommario):
Physiological systems are well recognised to be nonlinear, stochastic and complex. In situations when only one time series of a single variable is available, extracting useful information from the dynamic data is crucial to facilitate personalised clinical decisions and deepen the understanding of the underlying mechanisms. This thesis is focused on establishing and validating data-driven models that incorporate nonlinearity and stochasticity into the model developing framework, to describe a single measurement time series in the field of biomedical engineering. The tasks of model selection and parameter estimation are performed by applying the variational Bayesian method, which has shown great potential as a deterministic alternative to Markov chain Monte Carlo sampling methods. The free energy, a maximised lower bound of the model evidence, is considered as the main model selection criterion, which penalises the complexity of the model. Several other model selection criteria, alongside the free energy criterion, have been utilised according to the specific requirements of each application. The methodology has been employed in two biomedical applications. For the first application, a nonlinear stochastic second order model has been developed to describe the blood glucose response to food intake for people with and without Diabetes Mellitus (DM). It was found that the glucose dynamics for the people with DM show a higher degree of nonlinearity and a different range of parameter values compared with people without DM. The developed model shows clinical potential for classifying individuals into these two groups, monitoring the effectiveness of diabetes management, and identifying people with pre-diabetes conditions. For the second application, a linear third order model has been established for the first time to describe post-transplant antibody dynamics after high-risk kidney transplantation. The model was found to have different ranges of parameter values between people with and without acute antibody-mediated rejection (AMR) episodes. The findings may facilitate the formation of an accurate pre-transplant risk profile which predicts AMR and allows the clinician to intervene at a much earlier stage, and therefore improve the outcomes of high-risk kidney transplantation.
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Duncan, Earl W. "Bayesian approaches to issues arising in spatial modelling". Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/112356/1/Earl_Duncan_Thesis.pdf.

Testo completo
Abstract (sommario):
This thesis addresses several contemporary issues arising in the analysis of spatial data and in broader statistical methodology. Two state-of-the-art statistical models are developed for the purpose of identifying unusual trends, a new algorithm to deal with label switching is devised which outperforms existing solutions, and new approaches to spatial smoothing are explored. The outcomes from this thesis should be of interest to managers in the health sector, biostatisticians, and researchers who deal with spatial data.
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Bleki, Zolisa. "Efficient Bayesian analysis of spatial occupancy models". Master's thesis, University of Cape Town, 2020. http://hdl.handle.net/11427/32469.

Testo completo
Abstract (sommario):
Species conservation initiatives play an important role in ecological studies. Occupancy models have been a useful tool for ecologists to make inference about species distribution and occurrence. Bayesian methodology is a popular framework used to model the relationship between species and environmental variables. In this dissertation we develop a Gibbs sampling method using a logit link function in order to model posterior parameters of the single-season spatial occupancy model. We incorporate the widely used Intrinsic Conditional Autoregressive (ICAR) prior model to specify the spatial random effect in our sampler. We also develop OccuSpytial, a statistical package implementing our Gibbs sampler in the Python programming language. The aim of this study is to highlight the computational efficiency that can be obtained by employing several techniques, which include exploiting the sparsity of the precision matrix of the ICAR model and making use of Polya-Gamma latent variables to obtain closed-form expressions for the posterior conditional distributions of the parameters of interest. An algorithm for efficiently sampling from the posterior conditional distribution of the spatial random effects parameter is also developed and presented. To illustrate the sampler's performance, a number of simulation experiments are considered, and the results are compared to those obtained by using a Gibbs sampler incorporating Restricted Spatial Regression (RSR) to specify the spatial random effect. Furthermore, we fit our model to the Helmeted guineafowl (Numida meleagris) dataset obtained from the 2nd South African Bird Atlas Project database in order to obtain a distribution map of the species. We compare our results with those obtained from the RSR variant of our sampler, those obtained by using the stocc statistical package (written in the R programming language), and those obtained without specifying any spatial information about the sites in the data. It was found that using RSR to specify the spatial random effects is both statistically and computationally more efficient than specifying them using ICAR. The OccuSpytial implementations of both the ICAR and RSR Gibbs samplers have significantly lower runtimes than the other implementations considered.
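One ingredient described above, exploiting the sparsity of the ICAR precision matrix when updating the spatial random effects, can be illustrated independently of the occupancy model. The sketch below builds the sparse precision of a toy lattice and solves the linear system that a conditional update would require; it is not the OccuSpytial implementation, and the lattice, precision value and jitter are assumptions.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Adjacency matrix W of a toy 20 x 20 lattice of sites.
side = 20
n = side * side
rows, cols = [], []
for i in range(side):
    for j in range(side):
        k = i * side + j
        if i + 1 < side:                      # neighbour below
            rows += [k, k + side]; cols += [k + side, k]
        if j + 1 < side:                      # neighbour to the right
            rows += [k, k + 1]; cols += [k + 1, k]
W = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
D = sp.diags(np.asarray(W.sum(axis=1)).ravel())

tau = 2.0
# ICAR precision tau * (D - W); it is singular, so a small jitter is added here.
Q = (tau * (D - W) + 1e-6 * sp.eye(n)).tocsc()

# A conditional update of the spatial effects typically solves Q x = b, where b
# collects the (e.g. Polya-Gamma augmented) pseudo-data; sparsity keeps it cheap.
b = np.random.default_rng(4).normal(size=n)
x = spsolve(Q, b)
print(f"solved a {n} x {n} sparse system; nonzeros in Q: {Q.nnz}")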
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Caballero, Jose Louis Galan. "Modeling qualitative judgements in Bayesian networks". Thesis, Queen Mary, University of London, 2008. http://qmro.qmul.ac.uk/xmlui/handle/123456789/28170.

Testo completo
Abstract (sommario):
Although Bayesian Networks (BNs) are increasingly being used to solve real world problems [47], their use is still constrained by the difficulty of constructing the node probability tables (NPTs). A key challenge is to construct relevant NPTs using the minimal amount of expert elicitation, recognising that it is rarely cost-effective to elicit complete sets of probability values. This thesis describes an approach to defining NPTs for a large class of commonly occurring nodes called ranked nodes. This approach is based on the doubly truncated Normal distribution with a central tendency that is invariably a weighted function of the parent nodes. We demonstrate through two examples how to build large probability tables using the ranked nodes approach. Using this approach we are able to build the large probability tables needed to capture the complex models arising from assessing firms' risks in the safety or finance sector. The aim of the first example, with the National Air-Traffic Services (NATS), is to show that using this approach we can model the impact of organisational factors in avoiding mid-air aircraft collisions. The resulting model was validated by NATS and helped managers to assess how efficiently the company handles risks and thus control the likelihood of air-traffic incidents. In the second example, we use BN models to capture operational risk (OpRisk) in financial institutions. The novelty of this approach is the use of causal reasoning as a means to reduce the uncertainty surrounding this type of risk. This model was validated against the Basel framework [160], which is the emerging international standard regulation governing how financial institutions assess OpRisks.
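The ranked node construction lends itself to a compact sketch: each column of the NPT is a doubly truncated Normal on the unit interval, centred at a weighted mean of the parents' ranked values and discretised into the child's states. The weights, variance and number of states below are illustrative choices, not elicited values from the thesis.

import numpy as np
from scipy.stats import truncnorm

def ranked_npt_column(parent_values, weights, variance, n_states=5):
    """One NPT column for a ranked child node: a doubly truncated Normal on
    [0, 1], centred at the weighted mean of the parents' ranked values and
    discretised into n_states equal-width states."""
    mu = np.dot(weights, parent_values) / np.sum(weights)
    sd = np.sqrt(variance)
    a, b = (0.0 - mu) / sd, (1.0 - mu) / sd        # truncation bounds in standard units
    edges = np.linspace(0.0, 1.0, n_states + 1)
    cdf = truncnorm.cdf(edges, a, b, loc=mu, scale=sd)
    return np.diff(cdf)                            # probability of each child state

# Two parents in a middle state (0.5), one in a high state (0.9); the weights
# and variance below are illustrative, not elicited values.
column = ranked_npt_column(parent_values=[0.5, 0.5, 0.9],
                           weights=[1.0, 1.0, 2.0], variance=0.05)
print("P(child state k):", column.round(3), "  sum =", round(float(column.sum()), 3))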
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Mavros, George. "Bayesian stochastic mortality modelling under serially correlated local effects". Thesis, Heriot-Watt University, 2015. http://hdl.handle.net/10399/2917.

Testo completo
Abstract (sommario):
The vast majority of stochastic mortality models in the academic literature are intended to explain the dynamics underpinning the process by a combination of age, period and cohort effects. In principle, the more such effects are included in a stochastic mortality model, the better is the in-sample fit to the data. Estimates of those parameters are most usually obtained under some distributional assumption about the occurrence of deaths, which leads to the optimisation of a relevant objective function. The present Thesis develops an alternative framework where the local mortality effect is appreciated, by employing a parsimonious multivariate process for modelling the latent residual effects of a simple stochastic mortality model as dependent rather than conditionally independent variables. Under the suggested extension the cells of the examined data-set are supplied with a serial dependence structure by relating the residual terms through a simple vector autoregressive model. The method is applicable for any of the popular mortality modelling structures in academia and industry, and is accommodated herein for the Lee-Carter and Cairns-Blake-Dowd models. The additional residuals model is used to compensate for factors of a mortality model that might mostly be affected by local effects within given populations. By using those two modelling bases, the importance of the number of factors for a stochastic mortality model is emphasised through the properties of the prescribed residuals model. The resultant hierarchical models are set under the Bayesian paradigm, and samples from the joint posterior distribution of the latent states and parameters are obtained by developing Markov chain Monte Carlo algorithms. Along with the imposed short-term dynamics, we also examine the impact of the joint estimation on the long-term factors of the original models. The Bayesian solution aids in recognising the different levels of uncertainty for the two naturally distinct types of dynamics across different populations. The forecasted rates, mortality improvements, and other relevant mortality-dependent metrics under the developed models are compared to those produced by their benchmarks and other standard stochastic mortality models in the literature.
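For context, the Lee-Carter model that the residual structure is attached to can be fitted by a simple singular value decomposition. The sketch below does exactly that on simulated log mortality rates and returns the residual matrix that a serially correlated model of local effects would then take as input; the simulated rates and dimensions are illustrative assumptions.

import numpy as np

def fit_lee_carter(log_m):
    """Least-squares Lee-Carter fit log m[x, t] ~ a[x] + b[x] * k[t] using the
    leading singular vectors of the age-centred log mortality surface."""
    a = log_m.mean(axis=1)
    centred = log_m - a[:, None]
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    scale = U[:, 0].sum()
    b = U[:, 0] / scale                 # identifiability constraint sum(b) = 1
    k = s[0] * Vt[0] * scale
    residuals = centred - np.outer(b, k)
    return a, b, k, residuals

# Simulated log mortality rates for 40 ages and 30 years (purely illustrative).
rng = np.random.default_rng(5)
ages, years = 40, 30
true_k = np.cumsum(rng.normal(-0.3, 0.5, years))
log_m = (-4.0 + 0.05 * np.arange(ages))[:, None] \
        + np.linspace(0.01, 0.04, ages)[:, None] * true_k \
        + rng.normal(scale=0.02, size=(ages, years))

a, b, k, resid = fit_lee_carter(log_m)
print("fitted k_t (first 5 years):", k[:5].round(2))
print("residual matrix shape (age x year):", resid.shape)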
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Newman, Keith. "Bayesian modelling of latent Gaussian models featuring variable selection". Thesis, University of Newcastle upon Tyne, 2017. http://hdl.handle.net/10443/3700.

Testo completo
Abstract (sommario):
Latent Gaussian models are popular and versatile models for performing Bayesian inference. In many cases, these models will be analytically intractable, creating a need for alternative inference methods. Integrated nested Laplace approximations (INLA) provide fast, deterministic inference of approximate posterior densities by exploiting sparsity in the latent structure of the model. Markov chain Monte Carlo (MCMC) is often used for Bayesian inference by sampling from a target posterior distribution. This suffers from poor mixing when many variables are correlated, but careful reparameterisation or use of blocking methods can mitigate these issues. Blocking comes with additional computational overheads due to the matrix algebra involved; these costs can be limited by harnessing the same latent Markov structures and sparse precision matrix properties utilised by INLA, with particular attention paid to efficient matrix operations. We discuss how linear and latent Gaussian models can be constructed by combining methods for linear Gaussian models with Gaussian approximations. We then apply these ideas to a case study in detecting genetic epistasis between telomere defects and deletion of non-essential genes in Saccharomyces cerevisiae, for an experiment known as Quantitative Fitness Analysis (QFA). Bayesian variable selection is included to identify which gene deletions cause a genetic interaction. Previous Bayesian models have proven successful in detecting interactions but are time-consuming due to the complexity of the model and poor mixing. Linear and latent Gaussian models are created to pursue more efficient inference over standard Gibbs samplers, but we find inference methods for latent Gaussian models can struggle with increasing dimension. We also investigate how the introduction of variable selection provides opportunities to reduce the dimension of the latent model structure for potentially faster inference. Finally, we discuss progress on a new follow-on experiment, Mini QFA, which attempts to find epistasis between telomere defects and a pair of gene deletions.
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Dou, Yiping. "Dynamic Bayesian models for modelling environmental space-time fields". Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/634.

Testo completo
Abstract (sommario):
This thesis addresses spatial interpolation and temporal prediction using air pollution data through several space-time modelling approaches. Firstly, we implement the dynamic linear modelling (DLM) approach in spatial interpolation and find various potential problems with that approach. We develop software to implement our approach. Secondly, we implement a Bayesian spatial prediction (BSP) approach to model spatio-temporal ground-level ozone fields and compare the accuracy of that approach with that of the DLM. Thirdly, we develop a Bayesian version of the empirical orthogonal function (EOF) method to incorporate the uncertainties due to a temporally varying spatial process and to spatial variation at broad and fine scales. Finally, we extend the BSP into the DLM framework to develop a unified Bayesian spatio-temporal model for univariate and multivariate responses. The result generalizes a number of current approaches in this field.
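The dynamic linear model at the heart of the comparison above is driven by the standard Kalman filter recursions. The sketch below implements one forecast and update cycle and runs it on a toy two-station series; the observation and evolution matrices, variances and data are illustrative assumptions, not the models or data used in the thesis.

import numpy as np

def kalman_step(m, C, y, F, G, V, W):
    """One forecast/update cycle of the DLM
    y_t = F theta_t + v_t,  theta_t = G theta_{t-1} + w_t."""
    a = G @ m                        # prior mean of the state at time t
    R = G @ C @ G.T + W              # prior covariance
    f = F @ a                        # one-step forecast of the observation
    Q = F @ R @ F.T + V              # forecast covariance
    K = R @ F.T @ np.linalg.inv(Q)   # Kalman gain
    m_new = a + K @ (y - f)          # posterior state mean
    C_new = R - K @ Q @ K.T          # posterior state covariance
    return m_new, C_new

# Toy two-station ozone series tracked by a two-dimensional local-level state.
F, G = np.eye(2), np.eye(2)
V, W = 0.5 * np.eye(2), 0.1 * np.eye(2)
m, C = np.zeros(2), np.eye(2)
for y in np.array([[30.0, 28.0], [33.0, 31.0], [35.0, 30.0]]):
    m, C = kalman_step(m, C, y, F, G, V, W)
print("filtered state mean after three observations:", m.round(2))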
Gli stili APA, Harvard, Vancouver, ISO e altri