Dissertations / Theses on the topic 'Bayesian analysis'

Consult the top 50 dissertations / theses for your research on the topic 'Bayesian analysis.'


1

Abrams, Keith Rowland. "Bayesian survival analysis." Thesis, University of Liverpool, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316744.

Full text
Abstract:
In cancer research the efficacy of a new treatment is often assessed by means of a clinical trial. In such trials the outcome measure of interest is usually time to death from entry into the study. The time to intermediate events may also be of interest, for example time to the spread of the disease to other organs (metastases). Thus, cancer clinical trials can be seen to generate multi-state data, in which patients may be in any one of a finite number of states at a particular time. The classical analysis of data from cancer clinical trials uses a survival regression model. This type of model allows for the fact that patients in the trial will have been observed for different lengths of time and that for some patients the time to the event of interest will not be observed (censored). The regression structure means that a measure of treatment effect can be obtained after allowing for other important factors. Clinical trials are not conducted in isolation, but are part of an on-going learning process. In order to assess the current weight of evidence for the use of a particular treatment, a Bayesian approach is necessary. Such an approach allows for the formal inclusion of prior information, either in the form of clinical expertise or the results from previous studies, into the statistical analysis. An initial Bayesian analysis, for a single non-recurrent event, can be performed using non-temporal models that consider the occurrence of events up to a specific time from entry into the study. Although these models are conceptually simple, they do not explicitly allow for censoring or covariates. In order to address both of these deficiencies a Bayesian fully parametric multiplicative intensity regression model is developed. The extra complexity of this model means that approximate integration techniques are required. Asymptotic Laplace approximations and the more computer-intensive Gauss-Hermite quadrature are shown to perform well and yield virtually identical results.
By adopting counting process notation the multiplicative intensity model is extended to the multi-state scenario quite easily. These models are used in the analysis of a cancer clinical trial to assess the efficacy of neutron therapy compared to standard photon therapy for patients with cancer of the pelvic region. In this trial there is prior information both in the form of clinical prior beliefs and results from previous studies. The usefulness of multi-state models is also demonstrated in the analysis of a pilot quality of life study. Bayesian multi-state models are shown to provide a coherent framework for the analysis of clinical studies, both interventionist and observational, yielding clinically meaningful summaries about the current state of knowledge concerning the disease/treatment process.
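The abstract's closing claim, that asymptotic Laplace approximation and Gauss-Hermite quadrature perform well and yield virtually identical results, can be sketched on a toy survival model where the exact answer is known. Everything below (exponential lifetimes, a Gamma(2, 1) prior, 20 events over 10 units of exposure) is an illustrative assumption, not data or a model from the thesis.

```python
import math
import numpy as np

# Toy model: exponential survival times with rate lambda, d observed events
# over total exposure T, and a Gamma(a, b) prior on lambda. We integrate the
# unnormalised posterior of phi = log(lambda) to get the marginal likelihood.
a, b, d, T = 2.0, 1.0, 20, 10.0

def log_post(phi):
    lam = np.exp(phi)
    # log prior x likelihood, including the Jacobian of the log transform
    return (a + d) * phi - (b + T) * lam + a * math.log(b) - math.lgamma(a)

# Laplace approximation: Gaussian fit at the posterior mode
phi_hat = math.log((a + d) / (b + T))   # mode, available in closed form here
curv = a + d                            # minus the second derivative at the mode
sigma = 1.0 / math.sqrt(curv)
laplace = math.exp(log_post(phi_hat)) * math.sqrt(2.0 * math.pi / curv)

# Gauss-Hermite quadrature, centred and scaled by the Laplace fit
x, w = np.polynomial.hermite.hermgauss(32)
nodes = phi_hat + math.sqrt(2.0) * sigma * x
gauss_hermite = math.sqrt(2.0) * sigma * np.sum(w * np.exp(log_post(nodes) + x**2))

# Exact marginal likelihood from Gamma-exponential conjugacy, for reference
exact = math.exp(math.lgamma(a + d) + a * math.log(b)
                 - math.lgamma(a) - (a + d) * math.log(b + T))
```

On this toy posterior both approximations agree with the exact value to well under one percent, matching the behaviour the abstract reports.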
2

Yuan, Lin. "Bayesian nonparametric survival analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22253.pdf.

Full text
3

Conti, Gabriella, Sylvia Frühwirth-Schnatter, James J. Heckman, and Rémi Piatek. "Bayesian exploratory factor analysis." Elsevier, 2014. http://dx.doi.org/10.1016/j.jeconom.2014.06.008.

Full text
Abstract:
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. (authors' abstract)
4

Fang, Qijun. "Hierarchical Bayesian Benchmark Dose Analysis." Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/316773.

Full text
Abstract:
An important objective in statistical risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to hierarchical Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indeed, for the few existing forms of Bayesian BMDs, informative prior information is seldom incorporated. Here, a new method is developed by using reparameterized quantal-response models that explicitly describe the BMD as a target parameter. This potentially improves the BMD/BMDL estimation by combining elicited prior belief with the observed data in the Bayesian hierarchy. The large variety of candidate quantal-response models available for applying these methods, however, leads to questions of model adequacy and uncertainty. To address this issue, the Bayesian estimation technique here is further enhanced by applying Bayesian model averaging to produce point estimates and (lower) credible bounds. Implementation is facilitated via a Monte Carlo-based adaptive Metropolis (AM) algorithm to approximate the posterior distribution. Performance of the method is evaluated via a simulation study. An example from carcinogenicity testing illustrates the calculations.
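The reparameterisation described above, writing the quantal-response model so that the BMD itself is a target parameter, can be sketched for a logistic dose-response curve under the extra-risk definition of the BMR. The logistic form and the numbers in the usage note are illustrative assumptions, not choices documented in the abstract.

```python
import math

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    return math.log(p / (1.0 - p))

def slope_from_bmd(beta0, bmd, bmr=0.1):
    """Logistic quantal-response model P(d) = expit(beta0 + beta1*d),
    reparameterised so that (beta0, BMD) replace (beta0, beta1): the slope
    is whatever value makes the extra risk at dose BMD equal the BMR."""
    p0 = expit(beta0)                   # background response rate
    target = p0 + bmr * (1.0 - p0)      # response level that defines the BMD
    return (logit(target) - beta0) / bmd

def extra_risk(beta0, beta1, dose):
    p0 = expit(beta0)
    return (expit(beta0 + beta1 * dose) - p0) / (1.0 - p0)
```

With this parameterisation a prior elicited directly on the BMD can be combined with the binomial dose-group likelihood inside an MCMC sampler; for instance, `slope_from_bmd(-2.0, 5.0)` returns the slope under which dose 5.0 carries exactly 10% extra risk.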
5

Font, Valverde Martí. "Bayesian analysis of textual data." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/384329.

Full text
Abstract:
In this thesis I develop statistical methodology for analyzing discrete data to be applied to stylometry problems, always with the Bayesian approach in mind. The statistical analysis of literary style has long been used to characterize the style of texts and authors, and to help settle authorship attribution problems. Early work in the literature used word length, sentence length, and the proportion of nouns, articles, adjectives or adverbs to characterize literary style. I use count data that ranges from word frequency counts to the simultaneous analysis of word length counts and counts of the most frequent function words. All of these are characteristic features of an author's style and at the same time rather independent of the context in which he writes. Here I introduce a Bayesian analysis of word frequency counts, which have a reverse J-shaped distribution with extraordinarily long upper tails. It is based on extending Sichel's non-Bayesian methodology for frequency count data using the inverse Gaussian Poisson model. The model is checked by exploring the posterior distribution of the Pearson errors and by implementing posterior predictive consistency checks. The posterior distribution of the inverse Gaussian mixing density also provides a useful interpretation, because it can be seen as an estimate of the vocabulary distribution of the author, from which measures of richness and of diversity of the author's writing can be obtained. An alternative analysis is proposed based on the inverse Gaussian-zero-truncated Poisson mixture model, which is obtained by switching the order of the mixing and the truncation stages. An analysis of the heterogeneity of the style of a text is proposed that strikes a compromise between change-point analysis, which looks for sudden changes in style, and cluster analysis, which does not take order into consideration.
The analysis incorporates the fact that parts of the text that are close together are more likely to belong to the same author than parts of the text far apart. The approach is illustrated by revisiting the authorship attribution of Tirant lo Blanc. A statistical analysis of the heterogeneity of literary style in a set of texts that simultaneously uses different stylometric characteristics, like word length and the frequency of function words, is also proposed. It clusters the rows of all contingency tables simultaneously into groups with homogeneous style based on a finite mixture of sets of multinomial models. That has some advantages over the usual heuristic cluster analysis approaches as it naturally incorporates the text size, the discrete nature of the data, and the dependence between categories. All is illustrated with the analysis of the style in plays by Shakespeare, El Quijote, and Tirant lo Blanc. Finally, authorship attribution and verification problems, which are usually treated separately, are treated jointly. That is done by assuming an open-set classification framework for attribution problems, contemplating the possibility that none of the candidate authors, with training texts known to have been written by them, is the author of the disputed texts. The verification problem then becomes a special case of the attribution problem. A formal Bayesian multinomial model for this more general authorship attribution problem is given and a closed-form solution for it is derived. The approach to the verification problem is illustrated by exploring whether a court ruling could have been written by the judge who signs it, and the approach to the attribution problem is illustrated by revisiting the authorship attribution of the Federalist Papers.
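The inverse Gaussian-Poisson (Sichel) mixture for word frequency counts can be sketched by simulation: each word's usage rate is drawn from an inverse Gaussian and its count from a Poisson with that rate, and the resulting frequency-of-frequencies profile shows the reverse J shape with a long upper tail described above. The mixture parameters here are illustrative, not fitted to any corpus.

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(42)

# Illustrative mixture parameters (not fitted to a real text)
n_words, ig_mean, ig_scale = 20_000, 1.0, 0.5

# Each vocabulary word gets a latent rate from an inverse Gaussian ("wald"
# in NumPy), then an observed count from a Poisson with that rate.
rates = rng.wald(ig_mean, ig_scale, size=n_words)
counts = rng.poisson(rates)

# Frequency of frequencies over the words that actually appear; the zero
# class (unused vocabulary) is unobservable in a real text, which is what
# motivates the zero-truncated variants discussed above.
freq_of_freq = Counter(int(c) for c in counts if c > 0)
```

Words appearing once (hapax legomena) dominate, with the frequency classes falling away towards a long upper tail: the reverse J shape.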
6

Husain, Syeda Tasmine. "Bayesian analysis of longitudinal models /." Internet access available to MUN users only, 2003. http://collections.mun.ca/u?/theses,163598.

Full text
7

Brink, Anton Meredith. "Bayesian analysis of contingency tables." Thesis, Imperial College London, 1997. http://hdl.handle.net/10044/1/8948.

Full text
8

LaBute, Gerard Joseph. "Pseudo-Bayesian response surface analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0001/MQ34971.pdf.

Full text
9

Wright, Alan. "Bayesian pathway analysis in epigenetics." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1286.

Full text
Abstract:
A typical gene expression data set consists of measurements of a large number of gene expressions, on a relatively small number of subjects, classified according to two or more outcomes, for example cancer or non-cancer. The identification of associations between gene expressions and outcome is a huge multiple testing problem. Early approaches to this problem involved the application of thousands of univariate tests with corrections for multiplicity. Over the past decade, numerous studies have demonstrated that analyzing gene expression data structured into predefined gene sets can produce benefits in terms of statistical power and robustness when compared to alternative approaches. This thesis presents the results of research on gene set analysis. In particular, it examines the properties of some existing methods for the analysis of gene sets. It introduces novel Bayesian methods for gene set analysis. A distinguishing feature of these methods is that the model is specified conditionally on the expression data, whereas other methods of gene set analysis and IGA generally make inferences conditionally on the outcome. Computer simulation is used to compare three common established methods for gene set analysis. In this simulation study a new procedure for the simulation of gene expression data is introduced. The simulation studies are used to identify situations in which the established methods perform poorly. The Bayesian approaches developed in this thesis apply reversible jump Markov chain Monte Carlo (RJMCMC) techniques to model gene expression effects on phenotype. The reversible jump step in the modelling procedure allows posterior probabilities for the activeness of a gene set to be produced. These mixture models reverse the generally accepted conditionality and model outcome given gene expression, which is a more intuitive assumption when modelling the pathway to phenotype.
It is demonstrated that the two models proposed may be superior to the established methods studied. There is considerable scope for further development of this line of research, which is appealing in terms of the use of mixture model priors that reflect the belief that a relatively small number of genes, restricted to a small number of gene sets, are associated with the outcome.
10

O'Donovan, Daniel James. "Bayesian analysis of NMR data." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608789.

Full text
11

Ma, Yimin. "Bayesian and empirical Bayesian analysis for the truncation parameter distribution families." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0027/NQ51000.pdf.

Full text
12

Ma, Yimin. "Bayesian and empirical Bayesian analysis for the truncation parameter distribution families /." *McMaster only, 1998.

Find full text
13

Freeman, Corey Ross. "Bayesian network analysis of nuclear acquisitions." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3023.

Full text
14

Bennett, James Elston. "Bayesian analysis of population pharmacokinetic models." Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363017.

Full text
15

Gay, John Michael. "Frailties in the Bayesian survival analysis." Thesis, Imperial College London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405009.

Full text
16

Huntriss, Alicia. "A Bayesian analysis of luminescence dating." Thesis, Durham University, 2008. http://etheses.dur.ac.uk/2928/.

Full text
Abstract:
Luminescence dating is a widespread dating method used in the fields of archaeology and Quaternary science. As an experimental method it is subject to various uncertainties in the determination of parameters that are used to evaluate age. The need to express these uncertainties fully, combined with the prior archaeological knowledge commonly available, motivates the development of a Bayesian approach to the assessment of age based on luminescence data. The luminescence dating procedure is dissected into its component parts, and each is considered individually before being combined to find the posterior age distribution. We use Bayesian multi-sample calibration to find the palaeodose in the first stage of the model, consider the problem of identifying a plateau in the data, and then use this, along with the annual dose, to estimate age. The true sample age is then modelled, incorporating any prior information available, both for an individual sample and for a collection of samples with related ages.
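The final step the abstract describes, combining the palaeodose with the annual dose to obtain a posterior age incorporating prior archaeological knowledge, can be sketched by simple Monte Carlo. The numbers (palaeodose 10 ± 0.5 Gy, annual dose 2 ± 0.1 Gy/ka, a stratigraphic upper bound of 8 ka) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Posterior draws for the two model components, summarised here as normal
# estimates with standard errors (illustrative values, not thesis data).
palaeodose = rng.normal(10.0, 0.5, n)    # Gy
annual_dose = rng.normal(2.0, 0.1, n)    # Gy per thousand years

age = palaeodose / annual_dose           # age in ka, draw by draw

# Prior archaeological knowledge enters as a constraint: the deposit is
# assumed younger than 8 ka, so draws violating the bound are discarded.
posterior_age = age[(age > 0) & (age < 8.0)]
mean_age = posterior_age.mean()
lo, hi = np.percentile(posterior_age, [2.5, 97.5])
```

A full treatment would propagate the calibration-curve and plateau uncertainties discussed above rather than assuming normal summaries, but the division-plus-constraint structure is the same.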
17

Dimitrakopoulou, Vasiliki. "Bayesian variable selection in cluster analysis." Thesis, University of Kent, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.594195.

Full text
Abstract:
Statistical analysis of data sets of high dimensionality has met great interest over the past years, with applications in disciplines such as medicine, neuroscience, pattern recognition, image analysis and many others. The vast number of available variables, contrary to the limited sample size, often masks the cluster structure of the data. It is often the case that some variables do not help in distinguishing the different clusters in the data; patterns over the sampled observations are thus usually confined to a small subset of variables. We are therefore interested in identifying the variables that best discriminate the sample, simultaneously with recovering the actual cluster structure of the objects under study. With the Markov chain Monte Carlo methodology being widely established, we investigate the performance of the combined tasks of variable selection and clustering within the Bayesian framework. Motivated by the work of Tadesse et al. (2005), we identify the set of discriminating variables with the use of a latent vector and form the clustering procedure within the finite mixture models methodology. Using Markov chains we draw inference on not just the set of selected variables and the cluster allocations, but also the actual number of components, using the Reversible Jump MCMC sampler (Green, 1995) and a variation of the SAMS sampler of Dahl (2005). However, sensitivity to the hyperparameter settings of the covariance structure of the suggested model motivated our interest in an Empirical Bayes procedure to pre-specify the crucial hyperparameters. Further addressing the problem of hyperparameter sensitivity, we suggest several different covariance structures for the mixture components.
Developing MATLAB codes for all models introduced in this thesis, we apply and compare the various models suggested on a set of simulated data, as well as on three real data sets; the iris, the crabs and the arthritis data sets.
18

Purutçuoğlu, Vilda. "Bayesian methods for gene network analysis." Thesis, Lancaster University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.497164.

Full text
Abstract:
The two main aims of this thesis are, first, to develop a new frequentist gene expression index (FOX) for Affymetrix oligonucleotide DNA arrays, and second, to present different stages of the analysis of a realistic MAPK/ERK pathway, starting from its identification via a quasi-reaction set, then moving to its exact and approximated stochastic simulations, and finally estimating its model parameters (rates) under distinct models.
19

Frühwirth-Schnatter, Sylvia, Regina Tüchler, and Thomas Otter. "Bayesian analysis of the heterogeneity model." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2002. http://epub.wu.ac.at/678/1/document.pdf.

Full text
Abstract:
In the present paper we consider Bayesian estimation of a finite mixture of models with random effects which is also known as the heterogeneity model. First, we discuss the properties of various MCMC samplers that are obtained from full conditional Gibbs sampling by grouping and collapsing. Whereas full conditional Gibbs sampling turns out to be sensitive to the parameterization chosen for the mean structure of the model, the alternative sampler is robust in this respect. However, the logical extension of the approach to the sampling of the group variances does not further increase the efficiency of the sampler. Second, we deal with the identifiability problem due to the arbitrary labeling within the model. Finally, a case study involving metric Conjoint analysis serves as a practical illustration. (author's abstract)
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
20

Domedel-Puig, Nuria. "Bayesian analysis of dynamic cellular processes." Thesis, Birkbeck (University of London), 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.499232.

Full text
Abstract:
The objective of this thesis is to show how a Bayesian model comparison framework, coupled with the use of a formal mathematical modeling language (ODEs), can assist researchers in the process of modeling dynamic biological systems. The Bayesian approach differs from classical statistics in the way model parameters are treated: our state of knowledge about them can be summarised by probability distributions. All Bayesian inference depends on the data-updated version of these parameter distributions, the posterior densities. Averaging the data likelihood over the prior results in the model evidence, a measure that very conveniently balances the complexity of a model against the quality of its fit to the data. This is very useful for model comparison. Such a task arises quite often in biological research, where different hypotheses are often available to explain a given phenomenon, and deciding which one is best is difficult. Despite its importance, model suitability is most often judged in an informal way. The main aspects of the Bayesian approach, together with comparisons to classical statistical methods, are described in detail in the first part of this thesis. The most important formalisms for modeling biological systems are also reviewed, and the building blocks of differential equation models are presented. These methods are then applied to a series of synthetic datasets for which the underlying model is known, allowing us to illustrate the main features of Bayesian inference. This is followed by the application of the framework to two real systems: a series of network motifs and the Jak/STAT signal transduction pathway. Results show that network motifs are well identifiable given dynamic data and, in the particular case of complex feedforward motif models, the Bayesian framework outperforms the classical methods. The present work also highlights the lack of an appropriate model for the flagella system, and thus a number of novel models are explored.
Finally, the Jak/STAT system is analysed. The results are compared to existing models in the literature, and allow discarding some biologically-motivated new models.
21

Kaufmann, Sylvia, and Sylvia Frühwirth-Schnatter. "Bayesian Analysis of Switching ARCH Models." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2000. http://epub.wu.ac.at/744/1/document.pdf.

Full text
Abstract:
We consider a time series model with autoregressive conditional heteroskedasticity that is subject to changes in regime. The regimes evolve according to a multistate latent Markov switching process with unknown transition probabilities, and it is the constant in the variance process of the innovations that is subject to regime shifts. The joint estimation of the latent process and all model parameters is performed within a Bayesian framework using the method of Markov chain Monte Carlo simulation. We perform model selection with respect to the number of states and the number of autoregressive parameters in the variance process using Bayes factors and model likelihoods. To this aim, the model likelihood is estimated by combining the candidate's formula with importance sampling. The usefulness of the sampler is demonstrated by applying it to the dataset previously used by Hamilton and Susmel, who investigated models with switching autoregressive conditional heteroskedasticity using maximum likelihood methods. The paper concludes with some issues related to maximum likelihood methods, to classical model selection, and to potential straightforward extensions of the model presented here. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
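The data-generating process described in the abstract, an ARCH model whose variance constant switches with a latent Markov regime, can be sketched in a few lines. The transition matrix and parameter values below are illustrative, not those estimated from the Hamilton-Susmel data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-state hidden Markov regime; only the constant omega in the ARCH(1)
# variance equation h_t = omega[s_t] + alpha * y_{t-1}^2 switches.
P = np.array([[0.98, 0.02],
              [0.02, 0.98]])     # regime transition probabilities
omega = np.array([0.5, 5.0])     # regime-dependent variance constants
alpha = 0.3                      # ARCH coefficient (common to both regimes)

T = 20_000
s = np.zeros(T, dtype=int)       # latent regime path
y = np.zeros(T)                  # observed innovations
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    h = omega[s[t]] + alpha * y[t - 1] ** 2   # conditional variance
    y[t] = rng.normal(0.0, np.sqrt(h))
```

A Bayesian treatment along the paper's lines would then sample the latent path `s` and the parameters jointly by MCMC; here the simulation simply shows the high-omega regime producing visibly larger conditional variance.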
22

Wakefield, Jon. "The Bayesian analysis of pharmacokinetic models." Thesis, University of Nottingham, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334806.

Full text
23

Whaley, Steven R. J. "Bayesian analysis of sickness absence data." Thesis, University of Aberdeen, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274884.

Full text
Abstract:
Sickness absence (SA) is a serious financial burden to UK industry, totalling £10-12 billion in 1999, the equivalent of £434 and 7.8 days lost per worker. A major change in the reporting of SA occurred on 14 June 1982 with the introduction of self-certification. Until then, all episodes had to be certified by a general practitioner. Since then, episodes that lasted seven calendar days or less have not required a GP's certificate and are 'self-certified'. A SA episode consists of the date the individual went off sick, the duration of the episode and a medical diagnosis given by either a GP or self-diagnosis. A common approach to the analysis of SA data is to model the number of times an individual went off sick during a period of follow-up via Poisson regression. Some studies on SA have examined the duration of SA, though most concentrated on the probability of going off sick. This thesis uses an intensity-based approach to model the joint probability that a person goes off sick with a specific disease and has a specific duration of absence (the 'joint analysis'). A Bayesian hierarchical model, based on the conditional proportional hazards model, is formulated for the joint analysis and sampled using Markov chain Monte Carlo methods. Posterior expectations and 90% credible intervals are presented as summaries of the marginal posterior distributions of the parameters of the joint analysis. Trace plots of the log-joint posterior distribution are given to assess convergence of the MCMC sampler.
24

Welch, Jason. "Bayesian methods in chemical data analysis." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319893.

Full text
25

Rigby, Robert Anthony. "Credibility intervals in Bayesian discriminant analysis." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/46292.

Full text
26

Thaithara, Balan Sreekumar. "Bayesian methods for astrophysical data analysis." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607847.

Full text
27

Marshall, Philip James. "Bayesian analysis of clusters of galaxies." Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615701.

Full text
28

Lewis, Stefanie Janneke. "Bayesian data analysis in baryon spectroscopy." Thesis, University of Glasgow, 2014. http://theses.gla.ac.uk/5245/.

Full text
Abstract:
The strong interaction within a nucleon has been the focus of much theoretical and experimental work in nuclear and particle physics. Theorists have been improving lattice QCD calculations and developing quark models that define the inter-quark interactions, and experimentalists have spent years gathering data to support and improve these models. Finding nucleon resonance states provides essential information for the development of these theories and improves our understanding of the excited nucleon spectrum. There are a variety of quark models that have been proposed which each predict a unique resonance spectrum. Currently, these models predict resonances that have not been observed experimentally. It is important to experimentally determine which of these resonances exist. Historically, many of the existing measurements came as a result of nucleon-pion scattering experiments. It has been suggested, however, that some resonances may couple more strongly to other reaction channels, such as the KΛ strangeness reaction channel analysed here. Pseudoscalar meson photoproduction experiments can be used to analyse such a reaction channel. In these experiments, a photon beam is incident on a stationary nucleon target and the reaction products are detected. The polarisation of the recoiling particle can often be determined or measured. In the KΛ channel, the recoiling baryon is a Λ whose polarisation can be obtained without the use of any additional hardware through the self-analysing properties of the hyperon. These experiments can be completely described by four complex amplitudes, which can be accessed experimentally through sixteen polarisation observables. The polarisation observables are bilinear combinations of the amplitudes and as such have nontrivial correlations. They are dependent on the polarisations of the beam, target and recoiling particle. 
By selecting different polarisations of the beam or target, or by using a combination of polarisations, different observables can be measured. The amplitudes can be obtained once a sufficient selection of observables is determined. Currently, analysis of pseudoscalar meson photoproduction data is done using a binned fitting method. The use of binned fitting inevitably leads to some information from the data being lost. In this thesis, a new analysis method is presented, based on Bayesian statistics. The aim of such an approach is to maximise the information yield from data. An event-by-event likelihood function reduces the information lost through histogram binning. It is shown that the use of a prior distribution in amplitude space can preserve the correlations between observables, also increasing the information yield. It is found that such an analysis programme leads to a significant extraction of information from existing data. It is also found that datasets from different experiments could be concatenated and analysed together using the programme presented in this work, with observables successfully extracted. Information on observables to which the experiment is not directly sensitive can be found and visualised graphically. The development of this analysis programme is detailed in this thesis. Previously analysed data from two experiments are analysed using this analysis method, and the results are compared to those obtained in the past. It is shown that this Bayesian approach produces results that are consistent with accepted results and provides information on observables that are not directly measurable by a particular experiment. The data from the two experiments are combined and analysed together, and it is shown that the results of the combined analysis are consistent with those obtained through separate analyses.
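The event-by-event likelihood idea described in this abstract can be illustrated with a toy single-observable version: with a linearly polarised beam, the azimuthal event distribution is proportional to 1 - P*Sigma*cos(2*phi), and each detected event contributes its density directly to the likelihood, with no histogram binning. All names and numbers below are illustrative, not taken from the thesis:

```python
import math
import random

random.seed(0)

# Toy unbinned (event-by-event) analysis of one polarisation observable:
# f(phi) is proportional to 1 - P*Sigma*cos(2*phi), where P is the beam
# polarisation and Sigma the beam asymmetry (both illustrative values).
P, SIGMA_TRUE = 0.8, 0.4

def sample_events(n):
    """Accept-reject sampling of event angles phi from f."""
    out = []
    while len(out) < n:
        phi = random.uniform(-math.pi, math.pi)
        if random.uniform(0.0, 1.0 + P) < 1.0 - P * SIGMA_TRUE * math.cos(2.0 * phi):
            out.append(phi)
    return out

def log_posterior(sigma, phis):
    """Flat prior on Sigma in [-1, 1]; each event contributes log f(phi)."""
    if abs(sigma) > 1.0:
        return float("-inf")
    return sum(math.log(1.0 - P * sigma * math.cos(2.0 * phi)) for phi in phis)

events = sample_events(5000)
grid = [i / 100.0 for i in range(-99, 100)]
sigma_hat = max(grid, key=lambda s: log_posterior(s, events))
print(f"posterior mode for Sigma: {sigma_hat:.2f}")  # close to the true 0.4
```

Because every photon's angle enters the likelihood individually, no information is discarded by binning; the thesis extends this idea to the full set of correlated observables via a prior in amplitude space.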
APA, Harvard, Vancouver, ISO, and other styles
29

Alsing, Justin. "Bayesian analysis of weak gravitational lensing." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/44571.

Full text
Abstract:
This thesis is concerned with how to extract cosmological information from observations of weak gravitational lensing - the modification of observed galaxy images due to gravitational lensing by the large-scale structure of the Universe. Firstly, we are concerned with how we can use all possible observational probes of weak lensing to squeeze out as much cosmological information as possible from future surveys. Up until now, the traditional approach to cosmological weak lensing analyses has focused on the distortion of shapes of distant galaxies measured across the sky - cosmic shear. However, shearing of galaxy shapes is only half the picture - weak lensing also magnifies the sizes and fluxes of observed objects and this lensing magnification field contains the same cosmological information as the cosmic shear field, whilst being subject to a different set of systematic effects. As such, weak lensing magnification is an exciting complement to cosmic shear and a holistic approach to weak lensing, combining shear and magnification, promises tighter constraints on cosmology, better control of systematics, and more robust science at the end of the day. We develop the theoretical and statistical formalism for performing a cosmological weak lensing analysis using shape, size and flux information together and demonstrate that significant information gains and synergies can be expected from the addition of this new lensing observable - cosmic magnification. Secondly, we are interested in how we can use the statistics of the lensing fields to constrain cosmology via an analysis that is principled in its propagation of uncertainties, optimal in its use of the full information-content of the data, and exact under clearly stated and well understood model assumptions. We introduce a totally fresh perspective on weak lensing data analysis - Bayesian hierarchical modelling (BHM) - that promises to achieve all of these goals.
The BHM approach provides a general framework for analysing weak lensing data that accounts for the full statistical interdependency of all model components in the weak lensing analysis pipeline, allowing information to flow freely from (in principle) raw pixel and photometric data through to cosmological inferences. We develop efficient Bayesian sampling schemes that explore the joint posterior of the shear maps and power spectra (and cosmological parameters) from a catalogue of estimated shapes and redshifts. We demonstrate that these algorithms bring the benefits of the Bayesian approach whilst being computationally practical for current and future surveys, and are readily extendable to extract information beyond the two-point statistics of the lensing fields or to incorporate the full weak lensing pipeline in a global principled analysis, presenting significant advantages over traditional estimator-based methods. We apply the newly developed Bayesian hierarchical approaches to the current state-of-the-art cosmic shear data from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), constraining cosmological parameters and models from weak lensing using Bayesian hierarchical inference - the first application to weak lensing data.
APA, Harvard, Vancouver, ISO, and other styles
30

Grapsa, Erofili. "Bayesian analysis for categorical survey data." Thesis, University of Southampton, 2010. https://eprints.soton.ac.uk/197303/.

Full text
Abstract:
In this thesis, we develop Bayesian methodology for univariate and multivariate categorical survey data. The Multinomial model is used and the following problems are addressed. Limited information about the design variables leads us to model the unknown design variables taking into account the sampling scheme. Random effects are incorporated in the model to deal with the effect of the sampling design, which produces the Multinomial GLMM, and issues such as model comparison and model averaging are also discussed. The methodology is applied to a real dataset, and estimates of population counts are obtained.
APA, Harvard, Vancouver, ISO, and other styles
31

ZHOU, RONG. "BAYESIAN ANALYSIS OF LOG-BINOMIAL MODELS." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1115843904.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Smith, Adam Nicholas. "Bayesian Analysis of Partitioned Demand Models." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1497895561381294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Xiaoyin. "Bayesian analysis of capture-recapture models /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3060157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Thamrin, Sri Astuti. "Bayesian survival analysis using gene expression." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/62666/1/Sri_Astuti_Thamrin_Thesis.pdf.

Full text
Abstract:
This thesis developed and applied Bayesian models for the analysis of survival data. Gene expression was considered as an explanatory variable within the Bayesian survival model, which can be considered the new contribution in the analysis of such data. The censoring factor that is inherent in survival data has also been addressed in terms of its impact on the fitting of a finite mixture of Weibull distributions, with and without covariates. To investigate this, simulation studies were carried out under several censoring percentages. A censoring percentage as high as 80% is acceptable here, as the work involved high-dimensional data. Lastly, the Bayesian model averaging approach was developed to incorporate model uncertainty in the prediction of survival.
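The censoring mechanism examined in the simulation studies can be sketched in isolation: an observed event time contributes the Weibull density to the likelihood, while a right-censored time contributes only the survivor function. This sketch uses a single Weibull component (not the thesis's finite mixture), and all settings are illustrative:

```python
import math
import random

random.seed(1)

# Simulate Weibull lifetimes by inverse transform and right-censor at C.
SHAPE, SCALE, C = 1.5, 2.0, 3.0
times, events = [], []
for _ in range(4000):
    t = SCALE * (-math.log(random.random())) ** (1.0 / SHAPE)
    times.append(min(t, C))
    events.append(1 if t <= C else 0)      # 1 = event observed, 0 = censored

def loglik(scale, shape=SHAPE):
    """Observed events contribute the density, censored cases the survivor S(t)."""
    ll = 0.0
    for t, d in zip(times, events):
        z = (t / scale) ** shape
        if d:
            ll += math.log(shape / scale) + (shape - 1) * math.log(t / scale) - z
        else:
            ll += -z                        # log S(t) = -(t/scale)^shape
    return ll

grid = [0.5 + 0.01 * i for i in range(350)]
scale_hat = max(grid, key=loglik)
print(f"estimated scale: {scale_hat:.2f} (true {SCALE})")
```

Raising the censoring threshold's severity (lowering C) shrinks the effective information per observation, which is the effect the thesis's simulations quantify at censoring rates up to 80%.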
APA, Harvard, Vancouver, ISO, and other styles
35

Kim, Yong Ku. "Bayesian multiresolution dynamic models." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180465799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Noble, Robert Bruce. "Multivariate Applications of Bayesian Model Averaging." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/30180.

Full text
Abstract:
The standard methodology when building statistical models has been to use one of several algorithms to systematically search the model space for a good model. If the number of variables is small then all possible models or best subset procedures may be used, but for data sets with a large number of variables, a stepwise procedure is usually implemented. The stepwise procedure of model selection was designed for its computational efficiency and is not guaranteed to find the best model with respect to any optimality criteria. While the model selected may not be the best possible of those in the model space, commonly it is almost as good as the best model. Often there will be several models that are competitive with the best model in terms of the selection criterion, but classical model building dictates that a single model be chosen to the exclusion of all others. An alternative to this is Bayesian model averaging (BMA), which uses the information from all models based on how well each is supported by the data. Using BMA allows a variance component due to the uncertainty of the model selection process to be estimated. The variance of any statistic of interest is conditional on the model selected so if there is model uncertainty then variance estimates should reflect this. BMA methodology can also be used for variable assessment since the probability that a given variable is active is readily obtained from the individual model posterior probabilities. The multivariate methods considered in this research are principal components analysis (PCA), canonical variate analysis (CVA), and canonical correlation analysis (CCA). Each method is viewed as a particular multivariate extension of univariate multiple regression. The marginal likelihood of a univariate multiple regression model has been approximated using the Bayesian information criterion (BIC), hence the marginal likelihood for these multivariate extensions also makes use of this approximation.
One of the main criticisms of multivariate techniques in general is that they are difficult to interpret. To aid interpretation, BMA methodology is used to assess the contribution of each variable to the methods investigated. A second issue that is addressed is displaying of results of an analysis graphically. The goal here is to effectively convey the germane elements of an analysis when BMA is used in order to obtain a clearer picture of what conclusions should be drawn. Finally, the model uncertainty variance component can be estimated using BMA. The variance due to model uncertainty is ignored when the standard model building tenets are used giving overly optimistic variance estimates. Even though the model attained via standard techniques may be adequate, in general, it would be difficult to argue that the chosen model is in fact the correct model. It seems more appropriate to incorporate the information from all plausible models that are well supported by the data to make decisions and to use variance estimates that account for the uncertainty in the model estimation as well as model selection.
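The BIC approximation to the marginal likelihood mentioned above leads directly to the usual BMA weighting: P(M_k | data) is approximated by exp(-BIC_k/2) normalised over all candidate models. A minimal sketch with hypothetical BIC values:

```python
import math

# BIC-based approximation to posterior model probabilities, the weights
# used in Bayesian model averaging (the BIC values here are hypothetical).
def bma_weights(bics):
    m = min(bics)                           # shift by the minimum for stability
    raw = [math.exp(-(b - m) / 2.0) for b in bics]
    total = sum(raw)
    return [w / total for w in raw]

bics = [100.0, 102.3, 101.1]                # three candidate models
weights = bma_weights(bics)
print([round(w, 3) for w in weights])       # smallest BIC gets the largest weight
```

Quantities of interest (and their variances, including the model-uncertainty component the abstract describes) are then posterior-weighted averages over the candidate models using these weights.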
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
37

Puza, Borek Dalibor. "Aspects of Bayesian biostatistics." Thesis, Canberra, ACT : The Australian National University, 1994. http://hdl.handle.net/1885/140911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Rowley, Mark. "Bayesian analysis of fluorescence lifetime imaging data." Thesis, King's College London (University of London), 2013. https://kclpure.kcl.ac.uk/portal/en/theses/bayesian-analysis-of-fluorescence-lifetime-imaging-data(c2fc5ecd-517d-451c-a449-f07e566b3482).html.

Full text
Abstract:
The development of a novel photon-by-photon Bayesian analysis for time-domain Fluorescence Lifetime Imaging Microscopy (FLIM) data, and its application to both real experimental biological and synthetic data, is presented in this thesis. FLIM is an intensity-independent and sensitive optical technique for studying the cellular environment and can robustly exploit Förster Resonance Energy Transfer (FRET) to enable protein-protein interactions to be located within living or fixed cells. Careful analysis of fluorescence lifetime data, often comprising multi-exponential kinetics, is crucial to elucidating FRET via FLIM. The developed Bayesian analysis is demonstrated to offer more accurate fitting of data with lower photon counts, allowing greater acquisition speeds. As well as revealing information previously unobtainable, such as direct error estimates, fitting model probabilities, and instrument response extraction, the developed approach allows for future extensions which can exploit the full probability distribution. In a section of this work already published [1], Bayesian mono-exponential analysis was shown to offer robust estimation with greater precision at low total photon counts, estimating fluorescent lifetimes to a level of accuracy not obtained using other techniques. Bayesian mono-exponential parameter estimates obtained with the developed Bayesian analysis are improved compared to those obtained using maximum likelihood, least squares, and the phasor data fitting approaches. In this work, Bayesian bi-exponential analysis based on an improved fully-analytic time-domain FLIM system model is shown to also offer improved decay parameter estimates. The developed analysis offers fluorescence decay model selection by exploiting the hierarchical nature of Bayesian analysis. This innovation enables the quantitative determination of the likelihood of the data being due to mono- or bi-exponential decay processes, for example.
Model selection applied to FLIM promises to simplify processing where the exact kinetics are not known. Finally, the determination of an approximated instrument response function from observed fluorescence decay data alone is also possible.
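The photon-by-photon idea can be sketched for the mono-exponential case as an unbinned likelihood over individual photon arrival times within a finite measurement window (no instrument response is modelled here, and all numbers are illustrative, not from the thesis):

```python
import math
import random

random.seed(2)

# Photon-by-photon mono-exponential lifetime estimation: each photon arrival
# time t in the window [0, T] contributes the truncated-exponential density
# f(t) = exp(-t/tau) / (tau * (1 - exp(-T/tau))).
T, TAU_TRUE = 10.0, 2.5

def sample_photons(n):
    out = []
    while len(out) < n:
        t = random.expovariate(1.0 / TAU_TRUE)
        if t <= T:                         # photons outside the window are lost
            out.append(t)
    return out

def log_lik(tau, times):
    norm = math.log(tau * (1.0 - math.exp(-T / tau)))
    return sum(-t / tau - norm for t in times)

photons = sample_photons(2000)
grid = [1.0 + 0.01 * i for i in range(400)]
tau_hat = max(grid, key=lambda tau: log_lik(tau, photons))
print(f"estimated lifetime: {tau_hat:.2f}")
```

Because each photon is used individually, the precision at low photon counts degrades more gracefully than with binned histogram fitting, which is the regime where the thesis reports the largest gains.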
APA, Harvard, Vancouver, ISO, and other styles
39

Padhy, Budhinath. "Bayesian Data Analysis For The Slovenian Plebiscite." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/444.

Full text
Abstract:
Slovenia became an independent republic with its own constitution, passed on December 23, 1991. An important step that led to the independence of Slovenia was the December 1990 plebiscite. It was at this plebiscite that the citizens of Slovenia voted for a sovereign and independent state. A public survey, the Slovenian Public Opinion (SPO) survey, was taken by the government of Slovenia ahead of the plebiscite. The plebiscite counted as 'YES voters' only those voters who attended and who voted for independence. Non-voters were counted as 'NO voters', and 'Don't Know' survey responses can be thought of as missing data to be treated as either 'YES' or 'NO'. Analysis of the survey data is done using a non-parametric fitting procedure, a Bayesian ignorable nonresponse model, and a Bayesian nonignorable nonresponse model. Finally, a sensitivity analysis is conducted with respect to different values of a prior parameter. The estimates of the eventual plebiscite outcome demonstrate the validity of our underlying models.
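The gap between the two extreme treatments of nonresponse can be illustrated with hypothetical counts (not the actual SPO data): counting every 'Don't Know' response as NO, as the plebiscite rule effectively did, versus assuming ignorable nonresponse and splitting the 'Don't Know' group in the same proportions as the respondents:

```python
# Hypothetical survey counts (NOT the actual SPO data): yes / no / don't know.
yes, no, dk = 1439, 300, 261
total = yes + no + dk

# Plebiscite rule: every "don't know" counts as NO (worst case for YES).
worst_case = yes / total

# Ignorable nonresponse: "don't know" answers split like the respondents.
ignorable = yes / (yes + no)

print(round(worst_case, 3), round(ignorable, 3))
```

The nonignorable models in the thesis interpolate between such extremes, and the sensitivity analysis varies a prior parameter controlling where in that range the estimate falls.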
APA, Harvard, Vancouver, ISO, and other styles
40

Volinsky, Christopher T. "Bayesian model averaging for censored survival models /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8944.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Lee, Kyeong Eun. "Bayesian models for DNA microarray data analysis." Diss., Texas A&M University, 2005. http://hdl.handle.net/1969.1/2465.

Full text
Abstract:
Selection of significant genes via expression patterns is important in a microarray problem. Owing to the small sample size and large number of variables (genes), the selection process can be unstable. This research proposes a hierarchical Bayesian model for gene (variable) selection. We employ latent variables in a regression setting and use a Bayesian mixture prior to perform the variable selection. Due to the binary nature of the data, the posterior distributions of the parameters are not in explicit form, and we need to use a combination of truncated sampling and Markov Chain Monte Carlo (MCMC) based computation techniques to simulate the posterior distributions. The Bayesian model is flexible enough to identify the significant genes as well as to perform future predictions. The method is applied to cancer classification via cDNA microarrays. In particular, the genes BRCA1 and BRCA2 are associated with a hereditary disposition to breast cancer, and the method is used to identify the set of significant genes to classify BRCA1 and others. Microarray data can also be applied to survival models. We address the issue of how to reduce the dimension in building the model by selecting significant genes as well as assessing the estimated survival curves. Additionally, we consider the well-known Weibull regression and semiparametric proportional hazards (PH) models for survival analysis. With microarray data, we need to consider the case where the number of covariates p exceeds the number of samples n. Specifically, for a given vector of response values, which are times to event (death or censored times) and p gene expressions (covariates), we address the issue of how to reduce the dimension by selecting the responsible genes, which are controlling the survival time. This approach enables us to estimate the survival curve when n << p. In our approach, rather than fixing the number of selected genes, we will assign a prior distribution to this number.
The approach creates additional flexibility by allowing the imposition of constraints, such as bounding the dimension via a prior, which in effect works as a penalty. To implement our methodology, we use a Markov Chain Monte Carlo (MCMC) method. We demonstrate the use of the methodology with (a) diffuse large B-cell lymphoma (DLBCL) complementary DNA (cDNA) data and (b) Breast Carcinoma data. Lastly, we propose a mixture of Dirichlet process models using discrete wavelet transform for curve clustering. In order to characterize these time-course gene expressions, we consider them as trajectory functions of time and gene-specific parameters and obtain their wavelet coefficients by a discrete wavelet transform. We then build cluster curves using a mixture of Dirichlet process priors.
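The flavour of mixture-prior variable selection can be conveyed by the simplest conjugate case: a single candidate covariate with a spike-and-slab prior, where the posterior inclusion probability follows from the ratio of the spike and slab marginal likelihoods. This is a sketch of the general idea only, not the latent-variable MCMC scheme of the thesis:

```python
import math
import random

random.seed(3)

# Spike-and-slab inclusion probability for one candidate covariate:
# beta = 0 with prior probability 1 - pi; beta ~ N(0, v) with probability pi.
# The noise variance sigma2 is taken as known for this sketch.
def inclusion_prob(x, y, pi=0.5, v=1.0, sigma2=1.0):
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    # Log Bayes factor of slab vs spike (ratio of Gaussian marginal
    # likelihoods, via the Sherman-Morrison identity for the slab covariance).
    log_bf = (-0.5 * math.log(1.0 + v * sxx / sigma2)
              + 0.5 * v * sxy ** 2 / (sigma2 * (sigma2 + v * sxx)))
    odds = (pi / (1.0 - pi)) * math.exp(log_bf)
    return odds / (1.0 + odds)

n = 100
x = [random.gauss(0, 1) for _ in range(n)]
y_signal = [0.8 * xi + random.gauss(0, 1) for xi in x]  # truly associated
y_noise = [random.gauss(0, 1) for _ in range(n)]        # unrelated
p_sig, p_noise = inclusion_prob(x, y_signal), inclusion_prob(x, y_noise)
print(round(p_sig, 3), round(p_noise, 3))
```

With thousands of genes, running such comparisons jointly requires the MCMC machinery the abstract describes, but the inclusion-probability interpretation of the mixture prior is the same.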
APA, Harvard, Vancouver, ISO, and other styles
42

Langseth, Helge. "Bayesian networks with applications in reliability analysis." Doctoral thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2002. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-959.

Full text
Abstract:

A common goal of the papers in this thesis is to propose, formalize and exemplify the use of Bayesian networks as a modelling tool in reliability analysis. The papers span work in which Bayesian networks are merely used as a modelling tool (Paper I), work where models are specially designed to utilize the inference algorithms of Bayesian networks (Paper II and Paper III), and work where the focus has been on extending the applicability of Bayesian networks to very large domains (Paper IV and Paper V).

Paper I is in this respect an application paper, where model building, estimation and inference in a complex time-evolving model is simplified by focusing on the conditional independence statements embedded in the model; it is written with the reliability data analyst in mind. We investigate the mathematical modelling of maintenance and repair of components that can fail due to a variety of failure mechanisms. Our motivation is to build a model, which can be used to unveil aspects of the “quality” of the maintenance performed. This “quality” is measured by two groups of model parameters: The first measures “eagerness”, the maintenance crew’s ability to perform maintenance at the right time to try to stop an evolving failure; the second measures “thoroughness”, the crew’s ability to actually stop the failure development. The model we propose is motivated by the imperfect repair model of Brown and Proschan (1983), but extended to model preventive maintenance as one of several competing risks (David and Moeschberger 1978). The competing risk model we use is based on random signs censoring (Cooke 1996). The explicit maintenance model helps us to avoid problems of identifiability in connection with imperfect repair models previously reported by Whitaker and Samaniego (1989). The main contribution of this paper is a simple yet flexible reliability model for components that are subject to several failure mechanisms, and which are not always given perfect repair. Reliability models that involve repairable systems with non-perfect repair, and a variety of failure mechanisms often become very complex, and they may be difficult to build using traditional reliability models. The analyses are typically performed to optimize the maintenance regime, and the complexity problems can, in the worst case, lead to sub-optimal decisions regarding maintenance strategies.
Our model is represented by a Bayesian network, and we use the conditional independence relations encoded in the network structure in the calculation scheme employed to generate parameter estimates.

In Paper II we target the problem of fault diagnosis, i.e., to efficiently generate an inspection strategy to detect and repair a complex system. Troubleshooting has long traditions in reliability analysis, see e.g. (Vesely 1970; Zhang and Mei 1987; Xiaozhong and Cooke 1992; Norstrøm et al. 1999). However, traditional troubleshooting systems are built using a very restrictive representation language: One typically assumes that all attempts to inspect or repair components are successful, a repair action is related to one component only, and the user cannot supply any information to the troubleshooting system except for the outcome of repair actions and inspections. A recent trend in fault diagnosis is to use Bayesian networks to represent the troubleshooting domain (Breese and Heckerman 1996; Jensen et al. 2001). This allows a more flexible representation, where we, e.g., can model non-perfect repair actions and questions. Questions are troubleshooting steps that do not aim at repairing the device, but merely are performed to capture information about the failed equipment, and thereby ease the identification and repair of the fault. Breese and Heckerman (1996) and Jensen et al. (2001) focus on fault finding in serial systems. In Paper II we relax this assumption and extend the results to any coherent system (Barlow and Proschan 1975). General troubleshooting is NP-hard (Sochorová and Vomlel 2000); we therefore focus on giving an approximate algorithm which generates a “good” troubleshooting strategy, and discuss how to incorporate questions into this strategy. Finally, we utilize certain properties of the domain to propose a fast calculation scheme.
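For the restricted baseline case that Paper II generalises, a serial system with a single fault and perfect, single-component repair actions, the classical result is that ordering actions by p_i/c_i (probability of fixing the fault over cost) minimises the expected repair cost. A sketch of that baseline with illustrative numbers, not of the paper's approximate algorithm for coherent systems:

```python
from itertools import permutations

# Single-fault troubleshooting of a serial system with perfect repair
# actions: action i costs c[i] and fixes the fault with probability p[i]
# (the p sum to 1). Ordering actions by p/c minimises expected cost.
p = [0.5, 0.3, 0.2]      # illustrative fault probabilities
c = [4.0, 1.0, 2.0]      # illustrative action costs

def expected_cost(order):
    cost, remaining = 0.0, 1.0
    for i in order:
        cost += c[i] * remaining    # pay c[i] only if the fault is still present
        remaining -= p[i]
    return cost

greedy = sorted(range(len(p)), key=lambda i: -p[i] / c[i])
best = min(permutations(range(len(p))), key=expected_cost)
print(greedy, expected_cost(greedy))  # the greedy p/c order is optimal here
```

Non-perfect actions, questions, and non-serial (coherent) system structure all break this simple greedy optimality, which is why the paper resorts to an approximate strategy-generation algorithm.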

Classification is the task of predicting the class of an instance from a set of attributes describing it, i.e., to apply a mapping from the attribute space to a predefined set of classes. In the context of this thesis one may for instance decide whether a component requires thorough maintenance or not based on its usage pattern and environmental conditions. Classifier learning, which is the theme of Paper III, is to automatically generate such a mapping based on a database of labelled instances. Classifier learning has a rich literature in statistics under the name of supervised pattern recognition, see e.g. (McLachlan 1992; Ripley 1996). Classifier learning can be seen as a model selection process, where the task is to find the model from a class of models with highest classification accuracy. With this perspective it is obvious that the model class we select the classifier from is crucial for classification accuracy. We use the class of Hierarchical Naïve Bayes (HNB) models (Zhang 2002) to generate a classifier from data. HNBs constitute a relatively new model class which extends the modelling flexibility of Naïve Bayes (NB) models (Duda and Hart 1973). NB models are a class of particularly simple classifier models, which have been shown to offer very good classification accuracy as measured by the 0/1-loss. However, NB models assume that all attributes are conditionally independent given the class, and this assumption is clearly violated in many real world problems. In such situations overlapping information is counted twice by the classifier. To resolve this problem, finding methods for handling the conditional dependence between the attributes has become a lively research area; these methods are typically grouped into three categories: Feature selection, feature grouping, and correlation modelling. HNB classifiers fall in the last category, as HNB models are made by introducing latent variables to relax the independence statements encoded in an NB model.
The main contribution of this paper is a fast algorithm to generate HNB classifiers. We give a set of experimental results which show that the HNB classifiers can significantly improve the classification accuracy of the NB models, and also outperform other often-used classification systems.

In Paper IV and Paper V we work with a framework for modelling large domains. Using small and “easy-to-read” pieces as building blocks to create a complex model is an often applied technique when constructing large Bayesian networks. For instance, Pradhan et al. (1994) introduce the concept of sub-networks which can be viewed and edited separately, and frameworks for modelling object oriented domains have been proposed in, e.g., (Koller and Pfeffer 1997; Bangsø and Wuillemin 2000). In domains that can appropriately be described using an object oriented language (Mahoney and Laskey 1996) we typically find repetitive substructures or substructures that can naturally be ordered in a superclass/subclass hierarchy. For such domains, the expert is usually able to provide information about these properties. The basic building blocks available from domain experts examining such domains are information about random variables that are grouped into substructures with high internal coupling and low external coupling. These substructures naturally correspond to instantiations in an object-oriented BN (OOBN). For instance, an instantiation may correspond to a physical object or it may describe a set of entities that occur at the same instant of time (a dynamic Bayesian network (Kjærulff 1992) is a special case of an OOBN). Moreover, analogously to the grouping of similar substructures into categories, instantiations of the same type are grouped into classes. As an example, several variables describing a specific pump may be said to make up an instantiation. All instantiations describing the same type of pump are said to be instantiations of the same class. OOBNs offer an easy way of defining BNs in such object-oriented domains s.t. the object-oriented properties of the domain are taken advantage of during model building, and also explicitly encoded in the model.
Although these object oriented frameworks relieve some of the problems when modelling large domains, it may still prove difficult to elicit the parameters and the structure of the model. In Paper IV and Paper V we work with learning of parameters and specifying the structure in the OOBN definition of Bangsø and Wuillemin (2000).

Paper IV describes a method for parameter learning in OOBNs. The contributions in this paper are three-fold: Firstly, we propose a method for learning parameters in OOBNs based on the EM-algorithm (Dempster et al. 1977), and prove that maintaining the object orientation imposed by the prior model will increase the learning speed in object oriented domains. Secondly, we propose a method to efficiently estimate the probability parameters in domains that are not strictly object oriented. More specifically, we show how Bayesian model averaging (Hoeting et al. 1999) offers well-founded tradeoff between model complexity and model fit in this setting. Finally, we attack the situation where the domain expert is unable to classify an instantiation to a given class or a set of instantiations to classes (Pfeffer (2000) calls this type uncertainty; a case of model uncertainty typical to object oriented domains). We show how our algorithm can be extended to work with OOBNs that are only partly specified.

In Paper V we estimate the OOBN structure. When constructing a Bayesian network, it can be advantageous to employ structural learning algorithms (Cooper and Herskovits 1992; Heckerman et al. 1995) to combine knowledge captured in databases with prior information provided by domain experts. Unfortunately, conventional learning algorithms do not easily incorporate prior information, if this information is too vague to be encoded as properties that are local to families of variables (this is for instance the case for prior information about repetitive structures). The main contribution of Paper V is a method for doing structural learning in object oriented domains. We argue that the method supports a natural approach for expressing and incorporating prior information provided by domain experts and show how this type of prior information can be exploited during structural learning. Our method is built on the Structural EM-algorithm (Friedman 1998), and we prove our algorithm to be asymptotically consistent. Empirical results demonstrate that the proposed learning algorithm is more efficient than conventional learning algorithms in object oriented domains. We also consider structural learning under type uncertainty, and find through a discrete optimization technique a candidate OOBN structure that describes the data well.

APA, Harvard, Vancouver, ISO, and other styles
43

Meng, Yu. "Bayesian Analysis of a Stochastic Volatility Model." Thesis, Uppsala University, Department of Mathematics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-119972.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ray, Shubhankar. "Nonparametric Bayesian analysis of some clustering problems." Texas A&M University, 2006. http://hdl.handle.net/1969.1/4251.

Full text
Abstract:
Nonparametric Bayesian models have been researched extensively in the past 10 years following the work of Escobar and West (1995) on sampling schemes for Dirichlet processes. The infinite mixture representation of the Dirichlet process makes it useful for clustering problems where the number of clusters is unknown. We develop nonparametric Bayesian models for two different clustering problems, namely functional and graphical clustering. We propose a nonparametric Bayes wavelet model for clustering of functional or longitudinal data. The wavelet modelling is aimed at the resolution of global and local features during clustering. The model also allows the elicitation of prior belief about the regularity of the functions and has the ability to adapt to a wide range of functional regularity. Posterior inference is carried out by Gibbs sampling with conjugate priors for fast computation. We use simulated as well as real datasets to illustrate the suitability of the approach over other alternatives. The functional clustering model is extended to analyze splice microarray data. New microarray technologies probe consecutive segments along genes to observe alternative splicing (AS) mechanisms that produce multiple proteins from a single gene. Clues regarding the number of splice forms can be obtained by clustering the functional expression profiles from different tissues. The analysis was carried out on the Rosetta dataset (Johnson et al., 2003) to obtain a splice variant by tissue distribution for all 10,000 genes. We were able to identify a number of splice forms that appear to be unique to cancer. We propose a Bayesian model for partitioning graphs depicting dependencies in a collection of objects. After suitable transformations and modelling techniques, the problem of graph cutting can be approached by nonparametric Bayes clustering. We draw motivation from a recent work (Dhillon, 2001) showing the equivalence of kernel k-means clustering and certain graph cutting algorithms. It is shown that loss functions similar to the kernel k-means naturally arise in this model, and the minimization of associated posterior risk comprises an effective graph cutting strategy. We present here results from the analysis of two microarray datasets, namely the melanoma dataset (Bittner et al., 2000) and the sarcoma dataset (Nykter et al., 2006).
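The Escobar and West (1995) style of sampling the abstract refers to can be sketched in a few lines. The following is an illustrative collapsed Gibbs sampler for a Dirichlet process mixture of one-dimensional Gaussians, not code from the thesis; the hyperparameters `alpha` (DP concentration), `sigma2` (known within-cluster variance), and `tau2` (prior variance of cluster means) are assumptions chosen for the demonstration:

```python
import numpy as np

def crp_gibbs(y, alpha=1.0, sigma2=0.25, tau2=25.0, iters=50, seed=0):
    """Collapsed Gibbs sampler for a DP mixture of Gaussians with known
    within-cluster variance sigma2 and a N(0, tau2) prior on cluster means."""
    rng = np.random.default_rng(seed)
    n = len(y)
    z = np.zeros(n, dtype=int)                      # cluster labels
    for _ in range(iters):
        for i in range(n):
            mask = np.arange(n) != i                # hold out point i
            z_others, y_others = z[mask], y[mask]
            labels, counts = np.unique(z_others, return_counts=True)
            logp = []
            for k, nk in zip(labels, counts):
                yk = y_others[z_others == k]
                # posterior-predictive N(m, v) for an existing cluster k
                pv = 1.0 / (1.0 / tau2 + nk / sigma2)
                m = pv * yk.sum() / sigma2
                v = pv + sigma2
                logp.append(np.log(nk) - 0.5 * np.log(v)
                            - 0.5 * (y[i] - m) ** 2 / v)
            # opening a new cluster: marginal N(0, tau2 + sigma2)
            v0 = tau2 + sigma2
            logp.append(np.log(alpha) - 0.5 * np.log(v0)
                        - 0.5 * y[i] ** 2 / v0)
            logp = np.asarray(logp)
            p = np.exp(logp - logp.max())
            choice = rng.choice(len(p), p=p / p.sum())
            z[i] = labels[choice] if choice < len(labels) else labels.max() + 1
        _, z = np.unique(z, return_inverse=True)    # keep labels compact
    return z
```

Because the number of occupied clusters is resampled at every sweep, the sampler infers it from the data rather than fixing it in advance, which is exactly the property that makes DP mixtures attractive for clustering.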
APA, Harvard, Vancouver, ISO, and other styles
45

Karuri, Stella. "Integration in Computer Experiments and Bayesian Analysis." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1187.

Full text
Abstract:
Mathematical models are commonly used in science and industry to simulate complex physical processes. These models are implemented by computer codes which are often complex. For this reason, the codes are also expensive in terms of computation time, and this limits the number of simulations in an experiment. The codes are also deterministic, which means that output from a code has no measurement error.

One modelling approach in dealing with deterministic output from computer experiments is to assume that the output is composed of a drift component and systematic errors, which are stationary Gaussian stochastic processes. A Bayesian approach is desirable as it takes into account all sources of model uncertainty. Apart from prior specification, one of the main challenges in a complete Bayesian model is integration. We take a Bayesian approach with a Jeffreys prior on the model parameters. To integrate over the posterior, we use two approximation techniques on the log-scaled posterior of the correlation parameters. First, we approximate the Jeffreys prior on the untransformed parameters; this enables us to specify a uniform prior on the transformed parameters, which makes Markov chain Monte Carlo (MCMC) simulations run faster. For the second approach, we approximate the posterior with a Normal density.

A large part of the thesis is focused on the problem of integration. Integration is often a goal in computer experiments and as previously mentioned, necessary for inference in Bayesian analysis. Sampling strategies are more challenging in computer experiments particularly when dealing with computationally expensive functions. We focus on the problem of integration by using a sampling approach which we refer to as "GaSP integration". This approach assumes that the integrand over some domain is a Gaussian random variable. It follows that the integral itself is a Gaussian random variable and the Best Linear Unbiased Predictor (BLUP) can be used as an estimator of the integral. We show that the integration estimates from GaSP integration have lower absolute errors. We also develop the Adaptive Sub-region Sampling Integration Algorithm (ASSIA) to improve GaSP integration estimates. The algorithm recursively partitions the integration domain into sub-regions in which GaSP integration can be applied more effectively. As a result of the adaptive partitioning of the integration domain, the adaptive algorithm varies sampling to suit the variation of the integrand. This "strategic sampling" can be used to explore the structure of functions in computer experiments.
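The core of the "GaSP integration" idea — place a Gaussian process prior on the integrand, so the integral is itself Gaussian and its BLUP serves as the estimate — can be illustrated briefly. This is a hedged sketch on the unit interval with a squared-exponential kernel and a fixed length-scale, not the thesis's implementation; the kernel's integral against the domain has a closed form via the error function:

```python
import numpy as np
from scipy.special import erf

def gasp_integral(f, n=12, length=0.2, jitter=1e-9):
    """BLUP estimate of the integral of f over [0, 1] under a zero-mean GP
    prior with squared-exponential kernel, given n noise-free evaluations."""
    x = np.linspace(0.0, 1.0, n)
    y = f(x)
    # Gram matrix of the kernel at the design points (jitter for stability)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * length ** 2))
    K += jitter * np.eye(n)
    # z_i = integral of k(x, x_i) over [0, 1], computed in closed form
    z = length * np.sqrt(np.pi / 2) * (
        erf((1 - x) / (np.sqrt(2) * length)) + erf(x / (np.sqrt(2) * length)))
    # BLUP of the integral: z' K^{-1} y
    return z @ np.linalg.solve(K, y)
```

For smooth integrands the estimate is typically far more accurate than Monte Carlo with the same number of function evaluations, which is what makes the approach attractive when each evaluation is an expensive computer-code run.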
APA, Harvard, Vancouver, ISO, and other styles
46

McCandless, Lawrence Cruikshank. "Bayesian propensity score analysis of observational data." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31420.

Full text
Abstract:
Propensity score analysis (PSA) involves regression adjustment for the estimated propensity scores, and the method can be used for estimating causal effects from observational data. However, confidence intervals for the treatment effect may be falsely precise because PSA ignores uncertainty in the estimated propensity scores. We propose Bayesian propensity score analysis (BPSA) for observational studies with a binary treatment, binary outcome and measured confounders. The method uses logistic regression models with the propensity score as a latent variable. The first regression models the relationship between the outcome, treatment and propensity score, while the second regression models the relationship between the propensity score and measured confounders. Markov chain Monte Carlo is used to study the posterior distribution of the exposure effect. We demonstrate BPSA in an observational study of the effect of statin therapy on all-cause mortality in patients discharged from Ontario hospitals following acute myocardial infarction. The results illustrate that BPSA and PSA may give different inferences despite the large sample size. We study performance using Monte Carlo simulations. Synthetic datasets are generated using competing models for the outcome variable and various fixed parameter values. The results indicate that if the outcome regression model is correctly specified, in the sense that the outcome risk within treatment groups is a smooth function of the propensity score, then BPSA permits more efficient estimation of the propensity scores compared to PSA. BPSA exploits prior information about the relationship between the outcome variable and the propensity score. This information is ignored by PSA. Conversely, when the model for the outcome variable is misspecified, then BPSA generally performs worse than PSA.
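For contrast with BPSA, the classical two-step PSA the abstract critiques is easy to sketch: fit the propensity model, plug the estimated scores into the outcome regression, and read off the treatment coefficient, with no propagation of first-stage uncertainty. The following minimal illustration on assumed simulated data (not the Ontario cohort) uses Newton-Raphson for both logistic fits:

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)                            # IRLS weights
        b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return b

def psa_effect(confounder, treat, outcome):
    """Two-step PSA: (1) estimate the propensity score from the confounder,
    (2) regress the outcome on treatment plus the estimated score.
    The first-stage uncertainty in the score is ignored, which is the
    issue BPSA is designed to address."""
    ones = np.ones(len(treat))
    Xp = np.column_stack([ones, confounder])
    ps = 1.0 / (1.0 + np.exp(-Xp @ logistic_fit(Xp, treat)))
    Xo = np.column_stack([ones, treat, ps])
    return logistic_fit(Xo, outcome)[1]              # treatment log-odds ratio
```

Because the second regression treats `ps` as a fixed covariate, the resulting interval estimates inherit none of the sampling variability of the first stage; BPSA instead models the score as a latent variable inside a joint posterior.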
Science, Faculty of
Statistics, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
47

Maimon, Geva. "A Bayesian spatial analysis of glass data /." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82284.

Full text
Abstract:
In criminal investigations involving glass evidence, refractive index (RI) is the property of glass most commonly used by forensic examiners to determine the association between control samples of glass obtained at the crime scene and samples of glass found on a suspect. Previous studies have shown that an intrinsic variability of RI exists within a pane of float glass. In this thesis, we attempt to determine whether this variability is spatially determined or random in nature, the conclusion of which plays an important role in the statistical interpretation of glass evidence. We take a Bayesian approach in fitting a spatial model to our data, and utilize the WinBUGS software to perform Gibbs sampling. To test for spatial variability, we propose two test quantities, and employ Bayesian Monte Carlo significance tests on our data, as well as on nine other specifically formulated datasets.
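A Bayesian Monte Carlo significance test of the kind mentioned here is a posterior predictive check: simulate replicate datasets from the posterior and record how often the test quantity on a replicate exceeds its observed value. The following generic sketch uses a simple normal model under the standard noninformative prior, not the thesis's spatial model or its two test quantities:

```python
import numpy as np

def posterior_predictive_pvalue(y, stat, draws=2000, seed=0):
    """Posterior predictive p-value for a normal model under the
    noninformative prior p(mu, sigma^2) proportional to 1/sigma^2:
    draw (mu, sigma^2) from the posterior, simulate replicate data,
    and compare the test quantity T(y_rep) against T(y)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    ybar, s2 = y.mean(), y.var(ddof=1)
    exceed = 0
    for _ in range(draws):
        # conjugate posterior draws: scaled-inverse-chi2, then normal
        sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)
        mu = rng.normal(ybar, np.sqrt(sigma2 / n))
        y_rep = rng.normal(mu, np.sqrt(sigma2), n)
        exceed += stat(y_rep) >= stat(y)
    return exceed / draws
```

A p-value near 0 or 1 signals that the model cannot reproduce the observed value of the test quantity; in the spatial setting, the test quantity would be chosen to be sensitive to spatial structure in the RI measurements.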
APA, Harvard, Vancouver, ISO, and other styles
48

Zeryos, Mihail. "Bayesian pursuit analysis and singular stochastic control." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338932.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Haro, Lopez Ruben Alejandro. "Data adaptive Bayesian analysis using distributional mixtures." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299509.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Choudrey, Rizwan A. "Variational methods for Bayesian independent component analysis." Thesis, University of Oxford, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275566.

Full text
APA, Harvard, Vancouver, ISO, and other styles