Dissertations on the topic "Bayesian Machine Learning (BML)"

To view other types of publications on this topic, follow the link: Bayesian Machine Learning (BML).

Format your source according to APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 50 dissertations for research on the topic "Bayesian Machine Learning (BML)".

Next to each work in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, if these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Habli, Nada. "Nonparametric Bayesian Modelling in Machine Learning." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34267.

Full text of the source
Abstract:
Nonparametric Bayesian inference has widespread applications in statistics and machine learning. In this thesis, we examine the most popular priors used in Bayesian nonparametric inference. The Dirichlet process and its extensions are priors on an infinite-dimensional space. Originally introduced by Ferguson (1973), its conjugacy property allows tractable posterior inference, which has lately given rise to significant developments in applications related to machine learning. Yet another widespread prior used in nonparametric Bayesian inference is the Beta process and its extensions. It was originally introduced by Hjort (1990) for applications in survival analysis. It is a prior on the space of cumulative hazard functions, and it has recently been widely used as a prior on an infinite-dimensional space for latent feature models. Our contribution in this thesis is to collect many diverse groups of nonparametric Bayesian tools and to explore algorithms for sampling from them. We also explore the machinery behind the theory and expose some distinguishing features of these procedures. These tools can be used by practitioners in many applications.
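Since the abstract above centres on the Dirichlet process as a prior on an infinite-dimensional space, a minimal sketch of how one might sample an approximate realisation of it may help. The stick-breaking construction below is plain NumPy, truncated at a fixed number of atoms; the concentration parameter and the standard normal base measure are illustrative choices for this example, not values taken from the thesis.

```python
import numpy as np

def sample_dp_stick_breaking(alpha=2.0, truncation=100, rng=None):
    """Truncated stick-breaking approximation of a Dirichlet process draw.

    Returns atom locations and their weights; the weights sum to ~1 for a
    sufficiently large truncation level.
    """
    rng = np.random.default_rng() if rng is None else rng
    betas = rng.beta(1.0, alpha, size=truncation)          # stick-breaking proportions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining                             # pi_k = beta_k * prod_{j<k}(1 - beta_j)
    atoms = rng.standard_normal(truncation)                 # atoms drawn i.i.d. from the base measure
    return atoms, weights

if __name__ == "__main__":
    atoms, weights = sample_dp_stick_breaking(rng=np.random.default_rng(0))
    print("total mass captured by the truncation:", weights.sum())
    print("five heaviest atoms:", atoms[np.argsort(weights)[-5:]])
```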
2

Higson, Edward John. "Bayesian methods and machine learning in astrophysics." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289728.

Full text of the source
Abstract:
This thesis is concerned with methods for Bayesian inference and their applications in astrophysics. We principally discuss two related themes: advances in nested sampling (Chapters 3 to 5), and Bayesian sparse reconstruction of signals from noisy data (Chapters 6 and 7). Nested sampling is a popular method for Bayesian computation which is widely used in astrophysics. Following the introduction and background material in Chapters 1 and 2, Chapter 3 analyses the sampling errors in nested sampling parameter estimation and presents a method for estimating them numerically for a single nested sampling calculation. Chapter 4 introduces diagnostic tests for detecting when software has not performed the nested sampling algorithm accurately, for example due to missing a mode in a multimodal posterior. The uncertainty estimates and diagnostics in Chapters 3 and 4 are implemented in the nestcheck software package, and both chapters describe an astronomical application of the techniques introduced. Chapter 5 describes dynamic nested sampling: a generalisation of the nested sampling algorithm which can produce large improvements in computational efficiency compared to standard nested sampling. We have implemented dynamic nested sampling in the dyPolyChord and perfectns software packages. Chapter 6 presents a principled Bayesian framework for signal reconstruction, in which the signal is modelled by basis functions whose number (and form, if required) is determined by the data themselves. This approach is based on a Bayesian interpretation of conventional sparse reconstruction and regularisation techniques, in which sparsity is imposed through priors via Bayesian model selection. We demonstrate our method for noisy 1- and 2-dimensional signals, including examples of processing astronomical images. The numerical implementation uses dynamic nested sampling, and uncertainties are calculated using the methods introduced in Chapters 3 and 4. Chapter 7 applies our Bayesian sparse reconstruction framework to artificial neural networks, where it allows the optimum network architecture to be determined by treating the number of nodes and hidden layers as parameters. We conclude by suggesting possible areas of future research in Chapter 8.
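As a rough illustration of the nested sampling idea that Chapters 3 to 5 build on, the sketch below estimates the evidence of a one-dimensional Gaussian likelihood under a uniform prior, using the simplest possible live-point replacement (rejection sampling from the prior). The toy likelihood, prior bounds, and number of live points are my own choices; real implementations such as the packages named in the abstract are far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta):
    # Toy unit-variance Gaussian likelihood centred at zero.
    return -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)

LOW, HIGH = -5.0, 5.0          # uniform prior support, chosen for the toy
N_LIVE, N_ITER = 200, 1000

live = rng.uniform(LOW, HIGH, N_LIVE)
live_logl = log_likelihood(live)

log_z, log_x_prev = -np.inf, 0.0          # running log-evidence and log prior volume
for i in range(1, N_ITER + 1):
    worst = np.argmin(live_logl)
    log_x = -i / N_LIVE                   # deterministic shrinkage approximation
    log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
    log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
    log_x_prev = log_x
    # Replace the worst live point with a prior draw of higher likelihood.
    # Rejection sampling is fine for this toy, hopeless in high dimensions.
    threshold = live_logl[worst]
    while True:
        candidate = rng.uniform(LOW, HIGH)
        if log_likelihood(candidate) > threshold:
            break
    live[worst] = candidate
    live_logl[worst] = log_likelihood(candidate)

# Add the prior volume still carried by the final set of live points.
log_z = np.logaddexp(log_z, np.log(np.mean(np.exp(live_logl))) + log_x_prev)
print("nested sampling log Z:", log_z)
print("analytic log Z       :", -np.log(HIGH - LOW))   # N(0,1) truncated to (-5, 5)
```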
3

Menke, Joshua E. "Improving machine learning through oracle learning /." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1726.pdf.

Full text of the source
4

Menke, Joshua Ephraim. "Improving Machine Learning Through Oracle Learning." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/843.

Full text of the source
Abstract:
The following dissertation presents a new paradigm for improving the training of machine learning algorithms, oracle learning. The main idea in oracle learning is that instead of training directly on a set of data, a learning model is trained to approximate a given oracle's behavior on a set of data. This can be beneficial in situations where it is easier to obtain an oracle than it is to use it at application time. It is shown that oracle learning can be applied to more effectively reduce the size of artificial neural networks, to more efficiently take advantage of domain experts by approximating them, and to adapt a problem more effectively to a machine learning algorithm.
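The core recipe of oracle learning, training a smaller learner to reproduce an oracle's outputs rather than the raw labels, can be sketched in a few lines. Here the "oracle" is an arbitrary fixed function standing in for a large trained network or a domain expert, and the student is ordinary least squares on a handful of features; both choices are illustrative assumptions, not the models used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

def oracle(x):
    # Stand-in for a large trained model or a domain expert: any fixed mapping will do.
    return np.sin(3 * x) + 0.5 * x

# Unlabelled inputs on which the oracle can be queried.
X = rng.uniform(-2, 2, size=500)

# Oracle learning: the student is fit to the oracle's outputs, not to ground-truth labels.
targets = oracle(X)

def features(x, n_feat=10):
    # A small, fixed feature map so the "student" stays compact.
    freqs = np.arange(1, n_feat + 1)
    return np.column_stack([np.sin(f * x) for f in freqs] + [x, np.ones_like(x)])

Phi = features(X)
w, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

# The compact student now approximates the oracle on new inputs.
X_test = np.linspace(-2, 2, 5)
print("oracle :", oracle(X_test))
print("student:", features(X_test) @ w)
```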
5

Huszár, Ferenc. "Scoring rules, divergences and information in Bayesian machine learning." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648333.

Full text of the source
6

Roychowdhury, Anirban. "Robust and Scalable Algorithms for Bayesian Nonparametric Machine Learning." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511901271093727.

Full text of the source
7

Yu, Shen. "A Bayesian machine learning system for recognizing group behaviour." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:8881/R/?func=dbin-jump-full&object_id=32565.

Full text of the source
8

Shahriari, Bobak. "Practical Bayesian optimization with application to tuning machine learning algorithms." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/59104.

Full text of the source
Abstract:
Bayesian optimization has recently emerged in the machine learning community as a very effective automatic alternative to the tedious task of hand-tuning algorithm hyperparameters. Although it is a relatively new aspect of machine learning, it has known roots in Bayesian experimental design (Lindley, 1956; Chaloner and Verdinelli, 1995), the design and analysis of computer experiments (DACE; Sacks et al., 1989), Kriging (Krige, 1951), and multi-armed bandits (Gittins, 1979). In this thesis, we motivate and introduce the model-based optimization framework and provide some historical context to the technique, which dates back as far as 1933 with application to clinical drug trials (Thompson, 1933). Contributions of this work include a Bayesian gap-based exploration policy, inspired by Gabillon et al. (2012); a principled information-theoretic portfolio strategy, outperforming the portfolio of Hoffman et al. (2011); and a general practical technique circumventing the need for an initial bounding box. These various works each address existing practical challenges standing in the way of more widespread adoption of probabilistic model-based optimization techniques. Finally, we conclude this thesis with important directions for future research, emphasizing scalability and computational feasibility of the approach as a general purpose optimizer.
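To make the model-based optimisation loop described above concrete, here is a bare-bones Bayesian optimisation sketch on a one-dimensional toy objective: a NumPy Gaussian-process surrogate with an RBF kernel and an expected-improvement acquisition maximised on a grid. The kernel hyperparameters, noise level, and objective are all illustrative assumptions, and none of the thesis contributions (gap-based exploration, portfolios, unbounded search) are reproduced here.

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(2)

def objective(x):
    # Toy "validation score" as a function of one hyperparameter, to be maximised.
    return -(x - 0.7) ** 2 + 0.1 * np.sin(12 * x)

def rbf(a, b, length=0.15, var=1.0):
    return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    # Standard GP regression posterior via a Cholesky factorisation.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(x_query, x_query).diagonal() - np.sum(v * v, axis=0)
    return mean, np.sqrt(np.maximum(var, 1e-12))

norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))
norm_pdf = lambda z: np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)

def expected_improvement(mean, std, best):
    z = (mean - best) / std
    return (mean - best) * norm_cdf(z) + std * norm_pdf(z)

grid = np.linspace(0.0, 1.0, 200)
x_obs = rng.uniform(0.0, 1.0, 3)             # a few random initial evaluations
y_obs = objective(x_obs)

for _ in range(10):
    mean, std = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mean, std, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print("best hyperparameter found:", x_obs[np.argmax(y_obs)], "score:", y_obs.max())
```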
9

Sampson, Oliver [Verfasser]. "Widened Machine Learning with Application to Bayesian Networks / Oliver Sampson." Konstanz : KOPS Universität Konstanz, 2020. http://d-nb.info/1209055597/34.

Full text of the source
10

Scalabrin, Maria. "Bayesian Learning Strategies in Wireless Networks." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3424931.

Full text of the source
Abstract:
This thesis collects the research works I performed as a Ph.D. candidate, where the common thread running through all the works is Bayesian reasoning with applications in wireless networks. Inference plays the pivotal role in Bayesian reasoning: reasoning about what we don't know, given what we know. When we make inferences about the nature of the world, we learn new features of the environment within which the agent gains experience; this is what allows us to benefit from the gathered information and adapt to new conditions. As we leverage the gathered information, our belief about the environment should change to reflect our improved knowledge. This thesis focuses on the probabilistic aspects of information processing with applications to the following topics: Machine learning based network analysis using millimeter-wave narrow-band energy traces; Bayesian forecasting and anomaly detection in vehicular monitoring networks; Online power management strategies for energy harvesting mobile networks; Beam training and data transmission optimization in millimeter-wave vehicular networks. In these research works, we deal with pattern recognition aspects in real-world data via supervised/unsupervised learning methods (classification, forecasting and anomaly detection, multi-step ahead prediction via kernel methods). Finally, the mathematical framework of Markov Decision Processes (MDPs), which also serves as the basis for reinforcement learning, is introduced, where Partially Observable MDPs use the notion of belief to make decisions about the state of the world in millimeter-wave vehicular networks. The goal of this thesis is to investigate the considerable potential of inference from insightful perspectives, detailing the mathematical framework and how Bayesian reasoning conveniently adapts to various research domains in wireless networks.
11

FRANZESE, GIULIO. "Contributions to Efficient Machine Learning." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2875759.

Full text of the source
12

McCalman, Lachlan Robert. "Function Embeddings for Multi-modal Bayesian Inference." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/12031.

Full text of the source
Abstract:
Tractable Bayesian inference is a fundamental challenge in robotics and machine learning. Standard approaches such as Gaussian process regression and Kalman filtering make strong Gaussianity assumptions about the underlying distributions. Such assumptions, however, can quickly break down when dealing with complex systems such as the dynamics of a robot or multi-variate spatial models. In this thesis we aim to solve Bayesian regression and filtering problems without making assumptions about the underlying distributions. We develop techniques to produce rich posterior representations for complex, multi-modal phenomena. Our work extends kernel Bayes' rule (KBR), which uses empirical estimates of distributions derived from a set of training samples and embeds them into a high-dimensional reproducing kernel Hilbert space (RKHS). Bayes' rule itself occurs on elements of this space. Our first contribution is the development of an efficient method for estimating posterior density functions from kernel Bayes' rule, applied to both filtering and regression. By embedding fixed-mean mixtures of component distributions, we can efficiently find an approximate pre-image by optimising the mixture weights using a convex quadratic program. The result is a complex, multi-modal posterior representation. Our next contributions are methods for estimating cumulative distributions and quantile estimates from the posterior embedding of kernel Bayes' rule. We examine a number of novel methods, including those based on our density estimation techniques, as well as directly estimating the cumulative through use of the reproducing property of RKHSs. Finally, we develop a novel method for scaling kernel Bayes' rule inference to large datasets, using a reduced-set construction optimised using the posterior likelihood. This method retains the ability to perform multi-output inference, as well as our earlier contributions to represent explicitly non-Gaussian posteriors and quantile estimates.
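Kernel Bayes' rule itself involves several regularised RKHS operations, but one of its building blocks, a conditional mean embedding whose weights are then used to reconstruct a possibly multi-modal posterior over the output, can be sketched compactly. The kernel widths, regulariser, and toy two-branch data below are my own illustrative choices, and the final weighted-kernel curve is an RKHS object rather than the normalised density the thesis recovers through its constrained pre-image optimisation.

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(a, b, gamma):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Multi-modal toy data: y has two branches for the same x, so the conditional
# over y given x is bimodal and a Gaussian assumption would fail.
n = 400
x = rng.uniform(-1, 1, n)
branch = rng.integers(0, 2, n)
y = np.where(branch == 1, np.sin(np.pi * x) + 1.5, np.sin(np.pi * x) - 1.5)
y += 0.1 * rng.standard_normal(n)

gamma_x, gamma_y, lam = 20.0, 20.0, 1e-3

# Conditional mean embedding weights for a query point x*.
x_star = 0.3
Kx = rbf(x, x, gamma_x)
w = np.linalg.solve(Kx + n * lam * np.eye(n), rbf(x, np.array([x_star]), gamma_x)).ravel()

# Evaluate the embedded conditional on a grid of y values by summing weighted
# kernels centred on the training outputs. Weights may be negative, so this is
# not yet a normalised density.
y_grid = np.linspace(-4, 4, 400)
embedding = rbf(y_grid, y, gamma_y) @ w

local_max = (embedding[1:-1] > embedding[:-2]) & (embedding[1:-1] > embedding[2:])
strong = embedding[1:-1] > 0.5 * embedding.max()
print("estimated modes of p(y | x=0.3):", y_grid[1:-1][local_max & strong])
```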
13

Shon, Aaron P. "Bayesian cognitive models for imitation /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/7013.

Full text of the source
14

Luo, Zhiyuan. "A probabilistic reasoning and learning system based on Bayesian belief networks." Thesis, Heriot-Watt University, 1992. http://hdl.handle.net/10399/1490.

Full text of the source
15

Zhao, Yajing. "Chaotic Model Prediction with Machine Learning." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8419.

Full text of the source
Abstract:
Chaos theory is a branch of modern mathematics concerning non-linear dynamic systems that are highly sensitive to their initial states. It has extensive real-world applications, such as weather forecasting and stock market prediction. The Lorenz system, defined by three ordinary differential equations (ODEs), is one of the simplest and most popular chaotic models. Historically, research has focused on understanding the Lorenz system's mathematical characteristics and dynamical evolution, including the inherent chaotic features it possesses. In this thesis, we take a data-driven approach and propose the task of predicting future states of the chaotic system from limited observations. We explore two directions, answering two distinct fundamental questions of the system based on how informed we are about the underlying model. When we know the data is generated by the Lorenz system with unknown parameters, our task becomes parameter estimation (a white-box problem), or the "inverse" problem. When we know nothing about the underlying model (a black-box problem), our task becomes sequence prediction. We propose two algorithms for the white-box problem: Markov chain Monte Carlo (MCMC) and a Multi-Layer Perceptron (MLP). Specifically, we propose to use the Metropolis-Hastings (MH) algorithm with an additional random walk to avoid the sampler being trapped in local energy wells. The MH algorithm achieves moderate success in predicting the ρ value from the data, but fails at the other two parameters. Our simple MLP model is able to attain high accuracy in terms of the l2 distance between the prediction and ground truth for ρ as well, but also fails to converge satisfactorily for the remaining parameters. We use a Recurrent Neural Network (RNN) to tackle the black-box problem. We implement and experiment with several RNN architectures including Elman RNN, LSTM, and GRU and demonstrate the relative strengths and weaknesses of each of these methods. Our results demonstrate the promising role of machine learning and modern statistical data science methods in the study of chaotic dynamic systems. The code for all of our experiments can be found at https://github.com/Yajing-Zhao/
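To make the white-box (parameter estimation) direction concrete, here is a toy random-walk Metropolis-Hastings sampler for the Lorenz parameter ρ, with σ and β held at their usual values and only the x-coordinate observed with Gaussian noise. The integrator, observation window, prior range, and proposal scale are illustrative assumptions; as the abstract notes, a plain random walk like this one can get trapped in local wells, which is what motivated the thesis's modified sampler.

```python
import numpy as np

rng = np.random.default_rng(4)
SIGMA, BETA = 10.0, 8.0 / 3.0       # treated as known; only rho is inferred
DT, STEPS = 0.01, 400               # short window: chaos makes long windows very hard

def simulate(rho, state=(1.0, 1.0, 1.0)):
    """Euler-integrate the Lorenz system and return the x-coordinate trace."""
    x, y, z = state
    xs = np.empty(STEPS)
    for t in range(STEPS):
        dx = SIGMA * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - BETA * z
        x, y, z = x + DT * dx, y + DT * dy, z + DT * dz
        xs[t] = x
    return xs

TRUE_RHO, NOISE = 28.0, 1.0
observed = simulate(TRUE_RHO) + NOISE * rng.standard_normal(STEPS)

def log_post(rho):
    if not 0.0 < rho < 60.0:                       # flat prior on (0, 60)
        return -np.inf
    resid = observed - simulate(rho)
    return -0.5 * np.sum(resid ** 2) / NOISE ** 2

# Random-walk Metropolis-Hastings over rho.
rho, lp = 20.0, log_post(20.0)
samples = []
for _ in range(1500):
    prop = rho + 0.5 * rng.standard_normal()       # symmetric random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        rho, lp = prop, lp_prop
    samples.append(rho)

burned = np.array(samples[500:])                   # discard burn-in
print("posterior mean rho:", burned.mean(), "+/-", burned.std())
```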
16

Kégl, Balazs. "Contributions to machine learning: the unsupervised, the supervised, and the Bayesian." Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00674004.

Full text of the source
17

Wu, Jinlong. "Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/85129.

Full text of the source
Abstract:
Reynolds-Averaged Navier-Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress needs closure models and the existing models have large model-form uncertainties. Therefore, the RANS simulations are known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessments. Recently, data-driven methods have emerged as a promising alternative to develop the model of Reynolds stress for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of RANS modeled Reynolds stress by leveraging online sparse measurement data with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high-fidelity simulation databases. Numerical results show that the data-driven RANS models have better prediction of Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence for the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations have some unresolved physics to be modeled.
18

Dos, Santos De Oliveira Rafael. "Bayesian Optimisation for Planning under Uncertainty." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/20762.

Full text of the source
Abstract:
Under an increasing demand for data to understand critical processes in our world, robots have become powerful tools to automatically gather data and interact with their environments. In this context, this thesis addresses planning problems where limited prior information leads to uncertainty about the outcomes of a robot's decisions. The methods are based on Bayesian optimisation (BO), which provides a framework to solve planning problems under uncertainty by means of probabilistic modelling. As a first contribution, the thesis provides a method to find energy-efficient paths over unknown terrains. The method applies a Gaussian process (GP) model to learn online how a robot's power consumption varies as a function of its configuration while moving over the terrain. BO is applied to optimise trajectories over the GP model being learnt so that they are informative and energetically efficient. The method was tested in experiments on simulated and physical environments. A second contribution addresses the problem of policy search in high-dimensional parameter spaces. To deal with high dimensionality the method combines BO with a coordinate-descent scheme that greatly improves BO's performance when compared to conventional approaches. The method was applied to optimise a control policy for a race car in a simulated environment and shown to outperform other optimisation approaches. Finally, the thesis provides two methods to address planning problems involving uncertainty in the inputs space. The first method is applied to actively learn terrain roughness models via proprioceptive sensing with a mobile robot under localisation uncertainty. Experiments demonstrate the method's performance in both simulations and a physical environment. The second method is derived for more general optimisation problems. In particular, this method is provided with theoretical guarantees and empirical performance comparisons against other approaches in simulated environments.
19

Bratières, Sébastien. "Non-parametric Bayesian models for structured output prediction." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274973.

Full text of the source
Abstract:
Structured output prediction is a machine learning task in which an input object is not just assigned a single class, as in classification, but multiple, interdependent labels. This means that the presence or value of a given label affects the other labels, for instance in text labelling problems, where output labels are applied to each word, and their interdependencies must be modelled. Non-parametric Bayesian (NPB) techniques are probabilistic modelling techniques which have the interesting property of allowing model capacity to grow, in a controllable way, with data complexity, while maintaining the advantages of Bayesian modelling. In this thesis, we develop NPB algorithms to solve structured output problems. We first study a map-reduce implementation of a stochastic inference method designed for the infinite hidden Markov model, applied to a computational linguistics task, part-of-speech tagging. We show that mainstream map-reduce frameworks do not easily support highly iterative algorithms. The main contribution of this thesis consists of a conceptually novel discriminative model, GPstruct. It is motivated by labelling tasks, and combines attractive properties of conditional random fields (CRF), structured support vector machines, and Gaussian process (GP) classifiers. In probabilistic terms, GPstruct combines a CRF likelihood with a GP prior on factors; it can also be described as a Bayesian kernelized CRF. To train this model, we develop a Markov chain Monte Carlo algorithm based on elliptical slice sampling and investigate its properties. We then validate it on real data experiments, and explore two topologies: sequence output with text labelling tasks, and grid output with semantic segmentation of images. The latter case poses scalability issues, which are addressed using likelihood approximations and an ensemble method which allows distributed inference and prediction. The experimental validation demonstrates: (a) the model is flexible and its constituent parts are modular and easy to engineer; (b) predictive performance and, most crucially, the probabilistic calibration of predictions are better than or equal to those of competitor models; and (c) model hyperparameters can be learnt from data.
20

Rademeyer, Estian. "Bayesian kernel density estimation." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/64692.

Full text of the source
Abstract:
This dissertation investigates the performance of two-class classification on credit scoring data sets with low default ratios. The standard two-class parametric Gaussian and naive Bayes (NB) classifiers, as well as the non-parametric Parzen classifiers, are extended, using Bayes' rule, to include either a class imbalance or a Bernoulli prior. This is done with the aim of addressing the low default probability problem. Furthermore, the performance of Parzen classification with Silverman and Minimum Leave-one-out Entropy (MLE) Gaussian kernel bandwidth estimation is also investigated. It is shown that the non-parametric Parzen classifiers yield superior classification power. However, there is a desire for these non-parametric classifiers to possess a predictive power such as that exhibited by the odds ratio found in logistic regression (LR). The dissertation therefore dedicates a section to, amongst other things, studying the paper entitled "Model-Free Objective Bayesian Prediction" (Bernardo 1999). Since this approach to Bayesian kernel density estimation is only developed for the univariate and the uncorrelated multivariate case, the section develops a theoretical multivariate approach to Bayesian kernel density estimation. This approach is theoretically capable of handling both correlated and uncorrelated features in the data. This is done through the assumption of a multivariate Gaussian kernel function and the use of an inverse Wishart prior.
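As a small illustration of the Parzen approach with Silverman bandwidth estimation on imbalanced two-class data, the sketch below builds a one-dimensional Gaussian-kernel classifier in NumPy. The synthetic "good"/"bad" populations, their sizes, and the use of empirical class priors are assumptions made for the example, not the dissertation's credit-scoring data or its Bernoulli-prior extension.

```python
import numpy as np

rng = np.random.default_rng(5)

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian kernel."""
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def parzen_log_density(query, train, h):
    """Log of the Gaussian-kernel (Parzen) density estimate at each query point."""
    z = (query[:, None] - train[None, :]) / h
    kernel = np.exp(-0.5 * z ** 2) / (np.sqrt(2 * np.pi) * h)
    return np.log(kernel.mean(axis=1) + 1e-300)

# Imbalanced two-class toy data: few "defaults" (class 1), many "non-defaults".
good = rng.normal(0.0, 1.0, 950)
bad = rng.normal(2.5, 1.2, 50)

# Empirical class priors; the dissertation studies replacing these with a
# class-imbalance or Bernoulli prior to counter the low default ratio.
prior_good, prior_bad = 950 / 1000, 50 / 1000

x_test = np.array([-1.0, 0.5, 1.8, 3.0])
log_post_bad = parzen_log_density(x_test, bad, silverman_bandwidth(bad)) + np.log(prior_bad)
log_post_good = parzen_log_density(x_test, good, silverman_bandwidth(good)) + np.log(prior_good)
prob_bad = 1.0 / (1.0 + np.exp(log_post_good - log_post_bad))
print("P(default | x):", np.round(prob_bad, 3))
```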
21

Riggelsen, Carsten. "Approximation methods for efficient learning of Bayesian networks /." Amsterdam ; Washington, DC : IOS Press, 2008. http://www.loc.gov/catdir/toc/fy0804/2007942192.html.

Full text of the source
22

Gabbur, Prasad. "Machine Learning Methods for Microarray Data Analysis." Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/195829.

Full text of the source
Abstract:
Microarrays emerged in the 1990s as a consequence of the efforts to speed up the process of drug discovery. They revolutionized molecular biological research by enabling monitoring of thousands of genes together. Typical microarray experiments measure the expression levels of a large number of genes on very few tissue samples. The resulting sparsity of data presents major challenges to statistical methods used to perform any kind of analysis on this data. This research posits that phenotypic classification and prediction serve as good objective functions for both optimization and evaluation of microarray data analysis methods. This is because classification measures what is needed for diagnostics and provides quantitative performance measures such as leave-one-out (LOO) or held-out prediction accuracy and confidence. Under the classification framework, various microarray data normalization procedures are evaluated using a class label hypothesis testing framework and also employing Support Vector Machines (SVM) and linear discriminant based classifiers. A novel normalization technique based on minimizing the squared correlation coefficients between expression levels of gene pairs is proposed and evaluated along with the other methods. Our results suggest that most normalization methods helped classification on the datasets considered except the rank method, most likely due to its quantization effects. Another contribution of this research is in developing machine learning methods for incorporating an independent source of information, in the form of gene annotations, to analyze microarray data. Recently, genes of many organisms have been annotated with terms from a limited vocabulary called Gene Ontologies (GO), describing the genes' roles in various biological processes, molecular functions and their locations within the cell. Novel probabilistic generative models are proposed for clustering genes using both their expression levels and GO tags. These models are similar in essence to the ones used for multimodal data, such as images and words, with learning and inference done in a Bayesian framework. The multimodal generative models are used for phenotypic class prediction. More specifically, the problems of phenotype prediction for static gene expression data and state prediction for time-course data are emphasized. Using GO tags for organisms whose genes have been studied more comprehensively leads to an improvement in prediction. Our methods also have the potential to provide a way to assess the quality of available GO tags for the genes of various model organisms.
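The evaluation idea above, judging a normalisation procedure by how well a classifier does under leave-one-out cross-validation, can be sketched on synthetic data. A nearest-centroid classifier stands in for the SVM and linear discriminant classifiers used in the dissertation, and the per-array scale artefact, the z-score normalisation, and the data dimensions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "microarray" data: many genes, few samples, two phenotypes.
n_samples, n_genes = 30, 500
labels = np.array([0] * 15 + [1] * 15)
X = rng.standard_normal((n_samples, n_genes))
X[labels == 1, :20] += 1.0                        # 20 differentially expressed genes
X *= rng.uniform(0.5, 2.0, size=(n_samples, 1))   # per-array scale artefacts to remove

def normalise_z(X):
    # Per-array standardisation (one of many possible normalisation choices).
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def loo_accuracy(X, labels):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    correct = 0
    for i in range(len(labels)):
        mask = np.arange(len(labels)) != i
        cents = [X[mask & (labels == c)].mean(axis=0) for c in (0, 1)]
        pred = int(np.argmin([np.linalg.norm(X[i] - c) for c in cents]))
        correct += pred == labels[i]
    return correct / len(labels)

print("LOO accuracy, raw        :", loo_accuracy(X, labels))
print("LOO accuracy, normalised :", loo_accuracy(normalise_z(X), labels))
```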
23

Cheng, Jie. "Learning Bayesian networks from data : an information theory based approach." Thesis, University of Ulster, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243621.

Full text of the source
24

Wistuba, Martin [Verfasser], and Lars [Akademischer Betreuer] Schmidt-Thieme. "Automated Machine Learning - Bayesian Optimization, Meta-Learning & Applications / Martin Wistuba ; Betreuer: Lars Schmidt-Thieme." Hildesheim : Universität Hildesheim, 2018. http://d-nb.info/1161526323/34.

Full text of the source
25

Fredlund, Richard. "A Bayesian expected error reduction approach to Active Learning." Thesis, University of Exeter, 2011. http://hdl.handle.net/10036/3170.

Full text of the source
Abstract:
There has been growing recent interest in the field of active learning for binary classification. This thesis develops a Bayesian approach to active learning which aims to minimise the objective function on which the learner is evaluated, namely the expected misclassification cost. We call this approach the expected cost reduction approach to active learning. In this form of active learning, queries are selected by performing a 'lookahead' to evaluate the associated expected misclassification cost. Firstly, we introduce the concept of a query density to explicitly model how new data is sampled. An expected cost reduction framework for active learning is then developed which allows the learner to sample data according to arbitrary query densities. The model makes no assumption of independence between queries, instead updating model parameters on the basis of both which observations were made and how they were sampled. This approach is demonstrated on the probabilistic high-low game, which is a non-separable extension of the high-low game presented by Seung et al. (1993). The results indicate that the Bayes expected cost reduction approach performs significantly better than passive learning even when there is considerable overlap between the class distributions, covering 30% of input space. For the probabilistic high-low game, however, narrow queries appear to consistently outperform wide queries. We therefore conclude the first part of the thesis by investigating whether or not this is always the case, demonstrating examples where sampling broadly is favourable to a single input query. Secondly, we explore the Bayesian expected cost reduction approach to active learning within the pool-based setting. This is where learning is limited to a finite pool of unlabelled observations from which the learner may select observations to be queried for class-labels. Our implementation of this approach uses Gaussian process classification with the expectation propagation approximation to make the necessary inferences. The implementation is demonstrated on six benchmark data sets and again demonstrates superior performance to passive learning.
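The lookahead idea, scoring each candidate query by the expected misclassification cost of the retrained model, can be sketched with a deliberately crude classifier. The class-conditional Gaussian model, pool construction, and one-step expected-error criterion below are illustrative simplifications in the spirit of expected-error-reduction active learning; they do not reproduce the thesis's query densities or its Gaussian process implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit(X, y):
    """Class-conditional 1-D Gaussians with shared variance and empirical class priors."""
    mus = np.array([X[y == c].mean() for c in (0, 1)])
    var = X.var() + 1e-3
    priors = np.array([(y == c).mean() for c in (0, 1)])
    return mus, var, priors

def predict_proba(model, X):
    mus, var, priors = model
    logp = -0.5 * (X[:, None] - mus[None, :]) ** 2 / var + np.log(priors)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

def expected_error(model, pool):
    # Expected misclassification cost over the pool under the model's own beliefs.
    return (1.0 - predict_proba(model, pool).max(axis=1)).sum()

# Unlabelled pool drawn from two overlapping classes; true labels are hidden.
pool = np.concatenate([rng.normal(-1, 1, 100), rng.normal(1, 1, 100)])
true = (np.arange(200) >= 100).astype(int)

labelled_idx = [0, 150]                             # tiny initial labelled set
for _ in range(10):
    X_l, y_l = pool[labelled_idx], true[labelled_idx]
    model = fit(X_l, y_l)
    p_pool = predict_proba(model, pool)

    # One-step lookahead: for each candidate, average the retrained model's
    # expected pool error over the candidate's possible labels.
    candidates = [i for i in range(len(pool)) if i not in labelled_idx]
    scores = []
    for i in candidates:
        score = 0.0
        for c in (0, 1):
            m = fit(np.append(X_l, pool[i]), np.append(y_l, c))
            score += p_pool[i, c] * expected_error(m, pool)
        scores.append(score)
    labelled_idx.append(candidates[int(np.argmin(scores))])

final = fit(pool[labelled_idx], true[labelled_idx])
acc = (predict_proba(final, pool).argmax(axis=1) == true).mean()
print("accuracy after 10 expected-cost-reduction queries:", acc)
```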
26

Yu, Xiaofeng. "Prediction Intervals for Class Probabilities." The University of Waikato, 2007. http://hdl.handle.net/10289/2436.

Full text of the source
Abstract:
Prediction intervals for class probabilities are of interest in machine learning because they can quantify the uncertainty about the class probability estimate for a test instance. The idea is that all likely class probability values of the test instance are included, with a pre-specified confidence level, in the calculated prediction interval. This thesis proposes a probabilistic model for calculating such prediction intervals. Given the unobservability of class probabilities, a Bayesian approach is employed to derive a complete distribution of the class probability of a test instance based on a set of class observations of training instances in the neighbourhood of the test instance. A random decision tree ensemble learning algorithm is also proposed, whose prediction output constitutes the neighbourhood that is used by the Bayesian model to produce a prediction interval for the test instance. The Bayesian model, which is used in conjunction with the ensemble learning algorithm and the standard nearest-neighbour classifier, is evaluated on artificial datasets and modified real datasets.
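A minimal version of the neighbourhood-based idea is a Beta posterior over the class probability given the class counts observed in the test instance's neighbourhood, with the interval read off from posterior draws. The uniform Beta(1,1) prior, the Monte-Carlo interval construction, and the example counts are assumptions made for illustration, not the thesis's exact model.

```python
import numpy as np

rng = np.random.default_rng(8)

def class_probability_interval(neighbour_labels, level=0.95, prior=(1.0, 1.0), draws=100_000):
    """Credible interval for the positive-class probability of a test instance.

    Neighbourhood class counts are combined with a Beta prior; the interval is
    read off from Monte-Carlo draws of the Beta posterior.
    """
    k = int(np.sum(neighbour_labels))            # positive observations in the neighbourhood
    n = len(neighbour_labels)
    a, b = prior[0] + k, prior[1] + n - k
    samples = rng.beta(a, b, draws)
    lo, hi = np.percentile(samples, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return k / n, (lo, hi)

# Example: 7 of the 10 neighbourhood instances (e.g. produced by a random
# decision tree ensemble) are positive for the test instance.
estimate, interval = class_probability_interval([1, 1, 1, 0, 1, 0, 1, 1, 0, 1])
print("point estimate:", estimate)
print("95% prediction interval:", np.round(interval, 3))
```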
27

Abeywardana, Sachinthaka. "Variational Inference in Generalised Hyperbolic and von Mises-Fisher Distributions." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/16504.

Full text of the source
Abstract:
Most real-world data are skewed, contain more than the set of real numbers, and have higher probabilities of extreme events occurring compared to a normal distribution. In this thesis we explore two non-Gaussian distributions, the Generalised Hyperbolic Distribution (GHD) and the von Mises-Fisher (vMF) Distribution. These distributions are studied in the context of 1) Regression in heavy-tailed data, 2) Quantifying variance of functions with reference to finding relevant quantiles, and 3) Clustering data that lie on the surface of the sphere. Firstly, we extend Gaussian Processes (GPs) and use Generalised Hyperbolic Processes as a prior on functions instead. This prior is more flexible than GPs and is especially able to model data that has high kurtosis. The method is based on placing a Generalised Inverse Gaussian prior over the signal variance, which yields a scalar mixture of GPs. We show how to perform inference efficiently for the predictive mean and variance, and use a variational EM method for learning. Secondly, the skewed extension of the GHD is studied with respect to quantile regression. An underlying GP prior on the quantile function is used to make the inference non-parametric, while the skewed GHD is used as the data likelihood. The skewed GHD has a single parameter alpha which states the required quantile. Variational methods similar to those of the first contribution are used to perform inference. Finally, vMF distributions are introduced in order to cluster spherical data. In the two previous contributions continuous scalar mixtures of Gaussians were used to make the inference process simpler. However, for clustering, a discrete number of vMF distributions are typically used. We propose a Dirichlet Process (DP) to infer the number of clusters in the spherical data setup. The framework is extended to incorporate a nested and a temporal clustering architecture. Throughout this thesis, in many cases the posterior cannot be calculated in closed form. Variational Bayesian approximations are derived in this situation for efficient inference. In certain cases further lower bounding of the optimisation function is required in order to perform Variational Bayes. These bounds themselves are novel.
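For the clustering theme, the sketch below runs spherical k-means on unit vectors, which can be viewed as the hard-assignment, large-concentration limit of a mixture of von Mises-Fisher distributions. It is only a stand-in: the thesis infers the number of clusters with a Dirichlet process and handles the vMF concentrations properly, neither of which is attempted here, and the toy data and fixed k=3 are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def normalise(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def spherical_kmeans(X, k, iters=50):
    """Hard-assignment clustering on the unit sphere using cosine similarity."""
    X = normalise(X)
    centres = normalise(rng.standard_normal((k, X.shape[1])))
    for _ in range(iters):
        assign = np.argmax(X @ centres.T, axis=1)          # most-aligned centre
        for j in range(k):
            members = X[assign == j]
            if len(members):
                centres[j] = members.sum(axis=0)           # mean direction (unnormalised)
        centres = normalise(centres)
    return assign, centres

# Toy directional data: three noisy bundles of unit vectors in R^3.
means = normalise(rng.standard_normal((3, 3)))
X = np.vstack([m + 0.15 * rng.standard_normal((100, 3)) for m in means])
assign, centres = spherical_kmeans(X, k=3)
print("cluster sizes:", np.bincount(assign))
```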
28

Trifonova, Neda. "Machine-learning approaches for modelling fish population dynamics." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13386.

Full text of the source
Abstract:
Ecosystems consist of complex dynamic interactions among species and the environment, the understanding of which has implications for predicting the environmental response to changes in climate and biodiversity. Understanding the nature of functional relationships (such as prey-predator) between species is important for building predictive models. However, modelling the interactions with external stressors over time and space is also essential for ecosystem-based approaches to fisheries management. With the recent adoption of more explorative tools, like Bayesian networks, in predictive ecology, fewer assumptions can be made about the data and complex, spatially varying interactions can be recovered from collected field data and combined with existing knowledge. In this thesis, we explore Bayesian network modelling approaches, accounting for latent effects to reveal species dynamics within geographically different marine ecosystems. First, we introduce the concept of functional equivalence between different fish species and generalise trophic structure from different marine ecosystems in order to predict influence from natural and anthropogenic sources. The importance of a hidden variable in fish community change studies of this nature was acknowledged because it allows for causes of change that are not captured within the constrained model structure. Then, a functional network modelling approach was developed for the North Sea region that takes into consideration unmeasured latent effects and spatial autocorrelation to model species interactions and associations with external factors such as climate and fisheries exploitation. The proposed model was able to produce novel insights into the ecosystem's dynamics and ecological interactions, mainly because it accounts for the heterogeneous nature of the driving factors within spatially differentiated areas and their changes over time. Finally, a modified version of this dynamic Bayesian network model was used to predict the response of different ecosystem components to change in anthropogenic and environmental factors. Through the development of fisheries catch, temperature and productivity scenarios, we explore the future of different fish and zooplankton species and examine what trends of fisheries exploitation and environmental change are potentially beneficial in terms of ecological stability and resilience. Thus, we were able to provide a new data-driven modelling approach which can help give strategic advice on the potential response of the system to pressure.
29

Hospedales, Timothy. "Bayesian multisensory perception." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/2156.

Full text of the source
Abstract:
A key goal for humans and artificial intelligence systems is to develop an accurate and unified picture of the outside world based on the data from any sense(s) that may be available. The availability of multiple senses presents the perceptual system with new opportunities to fulfil this goal, but exploiting these opportunities first requires the solution of two related tasks. The first is how to make the best use of any redundant information from the sensors to produce the most accurate percept of the state of the world. The second is how to interpret the relationship between observations in each modality; for example, the correspondence problem of whether or not they originate from the same source. This thesis investigates these questions using ideal Bayesian observers as the underlying theoretical approach. In particular, the latter correspondence task is treated as a problem of Bayesian model selection or structure inference in Bayesian networks. This approach provides a unified and principled way of representing and understanding the perceptual problems faced by humans and machines and their commonality. In the domain of machine intelligence, we exploit the developed theory for practical benefit, developing a model to represent audio-visual correlations. Unsupervised learning in this model provides automatic calibration and user appearance learning, without human intervention. Inference in the model involves explicit reasoning about the association between latent sources and observations. This provides audio-visual tracking through occlusion with improved accuracy compared to standard techniques. It also provides detection, verification and speech segmentation, ultimately allowing the machine to understand "who said what, where?" in multi-party conversations. In the domain of human neuroscience, we show how a variety of recent results in multimodal perception can be understood as the consequence of probabilistic reasoning about the causal structure of multimodal observations. We show this for a localisation task in audio-visual psychophysics, which is very similar to the task solved by our machine learning system. We also use the same theory to understand results from experiments in the completely different paradigm of oddity detection using visual and haptic modalities. These results begin to suggest that the human perceptual system performs, or at least approximates, sophisticated probabilistic reasoning about the causal structure of observations under the hood.
30

Mohamed, Shakir. "Generalised Bayesian matrix factorisation models." Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/237246.

Full text of the source
Abstract:
Factor analysis and related models for probabilistic matrix factorisation are of central importance to the unsupervised analysis of data, with a colourful history more than a century long. Probabilistic models for matrix factorisation allow us to explore the underlying structure in data, and have relevance in a vast number of application areas including collaborative filtering, source separation, missing data imputation, gene expression analysis, information retrieval, computational finance and computer vision, amongst others. This thesis develops generalisations of matrix factorisation models that advance our understanding and enhance the applicability of this important class of models. The generalisation of models for matrix factorisation focuses on three concerns: widening the applicability of latent variable models to the diverse types of data that are currently available; considering alternative structural forms in the underlying representations that are inferred; and including higher order data structures into the matrix factorisation framework. These three issues reflect the reality of modern data analysis and we develop new models that allow for a principled exploration and use of data in these settings. We place emphasis on Bayesian approaches to learning and the advantages that come with the Bayesian methodology. Our port of departure is a generalisation of latent variable models to members of the exponential family of distributions. This generalisation allows for the analysis of data that may be real-valued, binary, counts, non-negative or a heterogeneous set of these data types. The model unifies various existing models and constructs for unsupervised settings, the complementary framework to the generalised linear models in regression. Moving to structural considerations, we develop Bayesian methods for learning sparse latent representations. We define ideas of weakly and strongly sparse vectors and investigate the classes of prior distributions that give rise to these forms of sparsity, namely the scale-mixture of Gaussians and the spike-and-slab distribution. Based on these sparsity favouring priors, we develop and compare methods for sparse matrix factorisation and present the first comparison of these sparse learning approaches. As a second structural consideration, we develop models with the ability to generate correlated binary vectors. Moment-matching is used to allow binary data with specified correlation to be generated, based on dichotomisation of the Gaussian distribution. We then develop a novel and simple method for binary PCA based on Gaussian dichotomisation. The third generalisation considers the extension of matrix factorisation models to multi-dimensional arrays of data that are increasingly prevalent. We develop the first Bayesian model for non-negative tensor factorisation and explore the relationship between this model and the previously described models for matrix factorisation.
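One of the constructions mentioned above, generating correlated binary vectors by dichotomising a Gaussian, can be illustrated very simply. The sketch below thresholds a latent correlated Gaussian at empirical quantiles so that the requested marginals are matched; note that the latent correlation only approximates the resulting binary correlation, whereas the thesis moment-matches it exactly, and the particular marginals and correlation value here are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(10)

def correlated_binary(n, marginals, latent_corr):
    """Generate binary vectors by dichotomising a correlated latent Gaussian.

    Each coordinate is thresholded at the empirical quantile matching the
    requested marginal probability; the binary correlation is only an
    approximation of the latent one.
    """
    d = len(marginals)
    cov = np.full((d, d), latent_corr) + (1 - latent_corr) * np.eye(d)
    z = rng.multivariate_normal(np.zeros(d), cov, size=n)
    thresholds = np.array([np.quantile(z[:, j], 1 - p) for j, p in enumerate(marginals)])
    return (z > thresholds).astype(int)

B = correlated_binary(20_000, marginals=[0.2, 0.5, 0.8], latent_corr=0.6)
print("marginal frequencies:", B.mean(axis=0))
print("binary correlations:\n", np.round(np.corrcoef(B.T), 2))
```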
31

Prando, Giulia. "Non-Parametric Bayesian Methods for Linear System Identification." Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3426195.

Full text of the source
Abstract:
Recent contributions have tackled the linear system identification problem by means of non-parametric Bayesian methods, which are built on widely adopted machine learning techniques, such as Gaussian Process regression and kernel-based regularized regression. Following the Bayesian paradigm, these procedures treat the impulse response of the system to be estimated as the realization of a Gaussian process. Typically, a Gaussian prior accounting for stability and smoothness of the impulse response is postulated, as a function of some parameters (called hyper-parameters in the Bayesian framework). These are generally estimated by maximizing the so-called marginal likelihood, i.e. the likelihood after the impulse response has been marginalized out. Once the hyper-parameters have been fixed in this way, the final estimator is computed as the conditional expected value of the impulse response w.r.t. the posterior distribution, which coincides with the minimum variance estimator. Assuming that the identification data are corrupted by Gaussian noise, the above-mentioned estimator coincides with the solution of a regularized estimation problem, in which the regularization term is the l2 norm of the impulse response, weighted by the inverse of the prior covariance function (a.k.a. kernel in the machine learning literature). Recent works have shown how such Bayesian approaches are able to jointly perform estimation and model selection, thus overcoming one of the main issues affecting parametric identification procedures, that is, complexity selection.
While keeping the classical system identification methods (e.g. Prediction Error Methods and subspace algorithms) as a benchmark for numerical comparison, this thesis extends and analyzes some key aspects of the above-mentioned Bayesian procedure. In particular, four main topics are considered. 1. PRIOR DESIGN. Adopting Maximum Entropy arguments, a new type of l2 regularization is derived: the aim is to penalize the rank of the block Hankel matrix built with Markov coefficients, thus controlling the complexity of the identified model, as measured by its McMillan degree. By accounting for the coupling between different input-output channels, this new prior is particularly well suited to the identification of MIMO systems. To reduce the computational burden of the estimation algorithm, a tailored version of the Scaled Gradient Projection algorithm is designed to optimize the marginal likelihood. 2. CHARACTERIZATION OF UNCERTAINTY. The confidence sets returned by the non-parametric Bayesian identification algorithm are analyzed and compared with those returned by parametric Prediction Error Methods. The comparison is carried out in the impulse response space, by deriving "particle" versions (i.e. Monte-Carlo approximations) of the standard confidence sets. 3. ONLINE ESTIMATION. The application of non-parametric Bayesian system identification techniques is extended to an online setting, in which new data become available as time goes on. Specifically, two key modifications of the original "batch" procedure are proposed in order to meet the real-time requirements. In addition, the identification of time-varying systems is tackled by introducing a forgetting factor in the estimation criterion and by treating it as a hyper-parameter. 4. POST-PROCESSING: MODEL REDUCTION. Non-parametric Bayesian identification procedures estimate the unknown system in terms of its impulse response coefficients, thus returning a model with a high (possibly infinite) McMillan degree. A tailored procedure is proposed to reduce such a model to a lower-degree one, which appears more suitable for filtering and control applications. Different criteria for the selection of the order of the reduced model are evaluated and compared.
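A minimal sketch of the kernel-based regularised estimator that this line of work builds on: a finite impulse response is estimated from input-output data with an l2 penalty weighted by the inverse of a first-order stable-spline ("TC") kernel. The kernel choice, hyperparameter values, and toy system are my assumptions, and none of the thesis's own contributions (Hankel-rank priors, marginal-likelihood optimisation, online updates, model reduction) are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy stable SISO system: a decaying oscillatory impulse response.
n_g, n_data, noise = 50, 80, 0.1
t = np.arange(n_g)
g_true = 0.8 ** t * np.sin(0.5 * t)

# Regression matrix built from past inputs (zero initial conditions).
u = rng.standard_normal(n_data)
Phi = np.array([[u[k - i] if k - i >= 0 else 0.0 for i in range(n_g)] for k in range(n_data)])
y = Phi @ g_true + noise * rng.standard_normal(n_data)

# First-order stable-spline (TC) kernel: K[i, j] = c * lambda ** max(i, j).
lam, c = 0.85, 1.0
K = c * lam ** np.maximum.outer(t, t)

# Regularised (equivalently, Bayesian minimum-variance) estimate of g:
# g_hat = K Phi^T (Phi K Phi^T + sigma^2 I)^{-1} y
sigma2 = noise ** 2
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(n_data), y)

# Plain least squares for comparison (no smoothness/stability prior).
g_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)

print("error, kernel-regularised:", np.linalg.norm(g_hat - g_true))
print("error, least squares     :", np.linalg.norm(g_ls - g_true))
```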
32

Graff, Philip B. "Bayesian methods for gravitational waves and neural networks." Thesis, University of Cambridge, 2012. https://www.repository.cam.ac.uk/handle/1810/244270.

Full text of the source
Abstract:
Einstein’s general theory of relativity has withstood 100 years of testing and will soon be facing one of its toughest challenges. In a few years we expect to be entering the era of the first direct observations of gravitational waves. These are tiny perturbations of space-time that are generated by accelerating matter and affect the measured distances between two points. Observations of these using the laser interferometers, which are the most sensitive length-measuring devices in the world, will allow us to test models of interactions in the strong field regime of gravity and eventually general relativity itself. I apply the tools of Bayesian inference for the examination of gravitational wave data from the LIGO and Virgo detectors. This is used for signal detection and estimation of the source parameters. I quantify the ability of a network of ground-based detectors to localise a source position on the sky for electromagnetic follow-up. Bayesian criteria are also applied to separating real signals from glitches in the detectors. These same tools and lessons can also be applied to the type of data expected from planned space-based detectors. Using simulations from the Mock LISA Data Challenges, I analyse our ability to detect and characterise both burst and continuous signals. The two seemingly different signal types will be overlapping and confused with one another for a space-based detector; my analysis shows that we will be able to separate and identify many signals present. Data sets and astrophysical models are continuously increasing in complexity. This will create an additional computational burden for performing Bayesian inference and other types of data analysis. I investigate the application of the MOPED algorithm for faster parameter estimation and data compression. I find that its shortcomings make it a less favourable candidate for further implementation. The framework of an artificial neural network is a simple model for the structure of a brain which can “learn” functional relationships between sets of inputs and outputs. I describe an algorithm developed for the training of feed-forward networks on pre-calculated data sets. The trained networks can then be used for fast prediction of outputs for new sets of inputs. After demonstrating capabilities on toy data sets, I apply the ability of the network to classifying handwritten digits from the MNIST database and measuring ellipticities of galaxies in the Mapping Dark Matter challenge. The power of neural networks for learning and rapid prediction is also useful in Bayesian inference where the likelihood function is computationally expensive. The new BAMBI algorithm is detailed, in which our network training algorithm is combined with the nested sampling algorithm MULTINEST to provide rapid Bayesian inference. Using samples from the normal inference, a network is trained on the likelihood function and eventually used in its place. This is able to provide significant increase in the speed of Bayesian inference while returning identical results. The trained networks can then be used for extremely rapid follow-up analyses with different priors, obtaining orders of magnitude of speed increase. Learning how to apply the tools of Bayesian inference for the optimal recovery of gravitational wave signals will provide the most scientific information when the first detections are made. 
Complementary to this, the improvement of our analysis algorithms to provide the best results in less time will make analysis of larger and more complicated models and data sets practical.
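The central BAMBI idea (train a network on accumulated likelihood evaluations, then use it as a fast surrogate) can be illustrated with a minimal sketch. The toy likelihood, network size and sample counts below are illustrative assumptions, not the thesis implementation:

    # Minimal sketch of a neural-network surrogate for an expensive likelihood,
    # in the spirit of the BAMBI idea described above. The toy likelihood,
    # network architecture and sample sizes are illustrative assumptions.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def expensive_log_likelihood(theta):
        # Stand-in for a costly likelihood, e.g. a gravitational-wave model.
        return -0.5 * np.sum((theta - 1.0) ** 2, axis=-1)

    # Samples that a nested sampler would have accumulated anyway.
    thetas = rng.uniform(-3, 5, size=(2000, 2))
    logls = expensive_log_likelihood(thetas)

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    surrogate.fit(thetas, logls)

    # Once trained, the network replaces the expensive call for new points.
    new_points = rng.uniform(-3, 5, size=(5, 2))
    print(surrogate.predict(new_points))
    print(expensive_log_likelihood(new_points))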
APA, Harvard, Vancouver, ISO, and other styles
33

Brouwer, Thomas Alexander. "Bayesian matrix factorisation : inference, priors, and data integration." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/269921.

Full text of the source
Abstract:
In recent years the amount of biological data has increased exponentially. Most of these data can be represented as matrices relating two different entity types, such as drug-target interactions (relating drugs to protein targets), gene expression profiles (relating drugs or cell lines to genes), and drug sensitivity values (relating drugs to cell lines). Not only is the size of these datasets increasing, but so is the number of different entity types that they relate. Furthermore, not all values in these datasets are typically observed, and some are very sparse. Matrix factorisation is a popular group of methods that can be used to analyse these matrices. The idea is that each matrix can be decomposed into two or more smaller matrices, such that their product approximates the original one. This factorisation of the data reveals patterns in the matrix, and gives us a lower-dimensional representation. Not only can we use this technique to identify clusters and other biological signals, but we can also predict the unobserved entries, allowing us to prune biological experiments. In this thesis we introduce and explore several Bayesian matrix factorisation models, focusing on how to best use them for predicting these missing values in biological datasets. Our main hypothesis is that matrix factorisation methods, and in particular Bayesian variants, are an extremely powerful paradigm for predicting values in biological datasets, as well as in other applications, and especially for sparse and noisy data. We demonstrate the competitiveness of these approaches compared to other state-of-the-art methods, and explore the conditions under which they perform the best. We consider several aspects of the Bayesian approach to matrix factorisation. Firstly, we consider the effect that the inference approach used to find the factorisation has on predictive performance. Secondly, we identify different likelihood and Bayesian prior choices that we can use for these models, and explore when they are most appropriate. Finally, we introduce a Bayesian matrix factorisation model that can be used to integrate multiple biological datasets, and hence improve predictions. This hybrid model combines different matrix factorisation models and Bayesian priors. Through these models and experiments we support our hypothesis and provide novel insights into the best ways to use Bayesian matrix factorisation methods for predictive purposes.
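The basic mechanics of matrix factorisation with missing entries can be illustrated with a minimal point-estimate sketch; the dimensions, learning rate and regularisation below are assumptions, and the Bayesian variants studied in the thesis place priors on the factor matrices and infer posteriors rather than point estimates:

    # Minimal sketch of matrix factorisation with missing entries: the data
    # matrix R is approximated by U @ V.T, fitted only on observed cells.
    # This is a point-estimate sketch; the Bayesian variants discussed above
    # place priors on U and V and infer posteriors instead.
    import numpy as np

    rng = np.random.default_rng(1)
    n_rows, n_cols, n_factors = 30, 20, 4

    # Synthetic low-rank data with roughly 40% of the entries unobserved.
    R_true = rng.normal(size=(n_rows, n_factors)) @ rng.normal(size=(n_factors, n_cols))
    mask = rng.random((n_rows, n_cols)) > 0.4

    U = 0.1 * rng.normal(size=(n_rows, n_factors))
    V = 0.1 * rng.normal(size=(n_cols, n_factors))

    lr, lam = 0.01, 0.1
    for _ in range(2000):
        residual = mask * (R_true - U @ V.T)      # errors on observed cells only
        U += lr * (residual @ V - lam * U)        # gradient step for U
        V += lr * (residual.T @ U - lam * V)      # gradient step for V

    # Predicted values for the unobserved cells.
    pred = U @ V.T
    print(np.abs((pred - R_true)[~mask]).mean())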
APA, Harvard, Vancouver, ISO, and other styles
34

Jiang, Ke. "Small-Variance Asymptotics for Bayesian Models." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492465751839975.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
35

Bui, Thang Duc. "Efficient deterministic approximate Bayesian inference for Gaussian process models." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273833.

Full text of the source
Abstract:
Gaussian processes are powerful nonparametric distributions over continuous functions that have become a standard tool in modern probabilistic machine learning. However, the applicability of Gaussian processes in the large-data regime and in hierarchical probabilistic models is severely limited by analytic and computational intractabilities. It is, therefore, important to develop practical approximate inference and learning algorithms that can address these challenges. To this end, this dissertation provides a comprehensive and unifying perspective of pseudo-point based deterministic approximate Bayesian learning for a wide variety of Gaussian process models, which connects previously disparate literature, greatly extends it and allows new state-of-the-art approximations to emerge. We start by building a posterior approximation framework based on Power-Expectation Propagation for Gaussian process regression and classification. This framework relies on a structured approximate Gaussian process posterior based on a small number of pseudo-points, which are judiciously chosen to summarise the actual data and enable tractable and efficient inference and hyperparameter learning. Many existing sparse approximations are recovered as special cases of this framework, and can now be understood as performing approximate posterior inference using a common approximate posterior. Critically, extensive empirical evidence suggests that new approximation methods arising from this unifying perspective outperform existing approaches in many real-world regression and classification tasks. We explore the extensions of this framework to Gaussian process state space models, Gaussian process latent variable models and deep Gaussian processes, which also unify many recently developed approximation schemes for these models. Several mean-field and structured approximate posterior families for the hidden variables in these models are studied. We also discuss several methods for approximate uncertainty propagation in recurrent and deep architectures based on Gaussian projection, linearisation, and simple Monte Carlo. The benefits of the unified inference and learning frameworks for these models are illustrated in a variety of real-world state-space modelling and regression tasks.
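A minimal pseudo-point sketch in the subset-of-regressors spirit shows how a handful of inducing points can stand in for the full dataset; the kernel, noise level and inducing-point locations are assumptions, and the Power-EP framework above is considerably more general:

    # Minimal sketch of pseudo-point (inducing-point) GP regression in the
    # subset-of-regressors spirit: M inducing points summarise N data points.
    # Kernel, noise level and inducing-point locations are illustrative choices.
    import numpy as np

    def rbf(a, b, lengthscale=1.0):
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 10, size=500)                  # training inputs
    y = np.sin(X) + 0.1 * rng.normal(size=X.size)     # noisy observations
    Z = np.linspace(0, 10, 15)                        # 15 pseudo-points
    Xs = np.linspace(0, 10, 100)                      # test inputs
    noise = 0.1 ** 2

    Kuu = rbf(Z, Z) + 1e-8 * np.eye(Z.size)
    Kuf = rbf(Z, X)
    Ksu = rbf(Xs, Z)

    # The predictive mean uses only M x M and M x N matrices, never the full N x N one.
    A = noise * Kuu + Kuf @ Kuf.T
    mean = Ksu @ np.linalg.solve(A, Kuf @ y)
    print(mean[:5])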
APA, Harvard, Vancouver, ISO, and other styles
36

Srivastava, Santosh. "Bayesian minimum expected risk estimation of distributions for statistical learning /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/6765.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
37

Scherreik, Matthew D. "Online Clustering with Bayesian Nonparametrics." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1610711743492959.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
38

Haußmann, Manuel [Verfasser], and Fred A. [Akademischer Betreuer] Hamprecht. "Bayesian Neural Networks for Probabilistic Machine Learning / Manuel Haußmann ; Betreuer: Fred A. Hamprecht." Heidelberg : Universitätsbibliothek Heidelberg, 2021. http://d-nb.info/1239116233/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
39

Hayashi, Shogo. "Information Exploration and Exploitation for Machine Learning with Small Data." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263774.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
40

Xu, Jian. "Iterative Aggregation of Bayesian Networks Incorporating Prior Knowledge." Miami University / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=miami1105563019.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Ying. "Discretization for Naive-Bayes learning." Monash University, School of Computer Science and Software Engineering, 2003. http://arrow.monash.edu.au/hdl/1959.1/9393.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
42

Zhu, Zhanxing. "Integrating local information for inference and optimization in machine learning." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20980.

Full text of the source
Abstract:
In practice, machine learners often care about two key issues: one is how to obtain a more accurate answer with limited data, and the other is how to handle large-scale data (often referred to as “Big Data” in industry) for efficient inference and optimization. One solution to the first issue might be aggregating learned predictions from diverse local models. For the second issue, integrating the information from subsets of the large-scale data is a proven way of achieving computation reduction. In this thesis, we have developed some novel frameworks and schemes to handle several scenarios in each of the two salient issues. For aggregating diverse models – in particular, aggregating probabilistic predictions from different models – we introduce a spectrum of compositional methods, Rényi divergence aggregators, which are maximum entropy distributions subject to biases from individual models, with the Rényi divergence parameter dependent on the bias. Experiments are conducted on various simulated and real-world datasets to verify the findings. We also show the theoretical connections between Rényi divergence aggregators and machine learning markets with isoelastic utilities. The second issue involves inference and optimization with large-scale data. We consider two important scenarios: one is optimizing a large-scale Convex-Concave Saddle Point problem with a Separable structure, referred to as Sep-CCSP; and the other is large-scale Bayesian posterior sampling. Two different settings of the Sep-CCSP problem are considered: Sep-CCSP with strongly convex functions and with non-strongly convex functions. We develop efficient stochastic coordinate descent methods for both cases, which allow fast parallel processing for large-scale data. Both theoretically and empirically, it is demonstrated that the developed methods perform comparably to, and more often better than, state-of-the-art methods. To handle the scalability issue in Bayesian posterior sampling, the stochastic approximation technique is employed, i.e., only touching a small mini-batch of data items to approximate the full likelihood or its gradient. In order to deal with the subsampling error introduced by stochastic approximation, we propose a covariance-controlled adaptive Langevin thermostat that can effectively dissipate parameter-dependent noise while maintaining a desired target distribution. This method achieves a substantial speedup over popular alternative schemes for large-scale machine learning applications.
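The scalability idea behind the sampling part, mini-batch gradients plus injected Gaussian noise, can be illustrated with a plain stochastic gradient Langevin dynamics sketch; the model, step size and batch size are assumptions, and the covariance-controlled thermostat proposed in the thesis additionally corrects for the noise that subsampling introduces:

    # Minimal stochastic gradient Langevin dynamics (SGLD) sketch for sampling
    # the posterior over a Gaussian mean using mini-batch gradients. The model,
    # step size and batch size are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(3)
    data = rng.normal(loc=2.0, scale=1.0, size=10_000)   # full data set
    N, batch_size, eps = data.size, 100, 1e-5

    def grad_log_post(theta, batch):
        # N(0, 10^2) prior, unit-variance Gaussian likelihood; the mini-batch
        # gradient is rescaled to approximate the full-data gradient.
        grad_prior = -theta / 100.0
        grad_lik = (N / batch.size) * np.sum(batch - theta)
        return grad_prior + grad_lik

    theta, samples = 0.0, []
    for t in range(5000):
        batch = rng.choice(data, size=batch_size, replace=False)
        theta += 0.5 * eps * grad_log_post(theta, batch) + np.sqrt(eps) * rng.normal()
        samples.append(theta)

    # Posterior mean should be roughly 2.0; the mini-batch gradient noise
    # slightly inflates the spread, which is the issue the thermostat targets.
    print(np.mean(samples[1000:]), np.std(samples[1000:]))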
APA, Harvard, Vancouver, ISO, and other styles
43

Romanes, Sarah Elizabeth. "Discriminant Analysis Methods for Large Scale and Complex Datasets." Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/21721.

Full text of the source
Abstract:
Discriminant analysis (DA) methods are effective and intuitive classifiers for correlated, Gaussian data. However, due to the strict parametric assumptions underlying such classifiers, they cannot be used without modification in high-dimensional problems (that is, when the number of features exceeds the number of observations) or with non-Gaussian data. In this thesis we aim to extend DA to modern settings. First, we introduce a new class of priors for Bayesian hypothesis testing, which we name "cake priors". These priors allow the use of diffuse priors while achieving theoretically sound inferences. In this thesis we develop a foundation for hypothesis testing as a means of feature selection for DA classifiers, with the resultant Bayesian test statistic taking the form of a penalised likelihood ratio test statistic, allowing for natural comparison between nested models. Further, we show that the resultant tests using cake priors are Chernoff consistent. We next introduce a new method of performing high-dimensional discriminant analysis, named multiDA. We achieve this by constructing a hybrid model which integrates both a diagonal discriminant analysis model and feature selection components based on likelihood ratio statistics, and provide heuristic arguments suggesting sound asymptotic performance for feature selection. Finally, we develop a method for generalised DA, named genDA. This method extends DA beyond the usual Gaussian response, and utilises generalised linear latent variable models in place of Gaussian distributions, allowing for flexible modelling of multi-distributional response data, whilst capturing and exploiting between-feature correlation structure. R packages implementing the classification methodology using efficient computational routines are available as multiDA and genDA and have been released on GitHub. Using these packages, we demonstrate competitive performance on simulated and benchmark datasets.
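A minimal sketch of the diagonal discriminant analysis model that multiDA builds on is given below; the synthetic data dimensions are assumptions, and the feature-selection and cake-prior machinery is omitted:

    # Minimal sketch of diagonal discriminant analysis: each class is modelled
    # with a Gaussian whose covariance is diagonal and shared across classes.
    # The synthetic data dimensions are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(4)
    n_per_class, p = 50, 200                     # "wide" data: p >> n
    X0 = rng.normal(0.0, 1.0, size=(n_per_class, p))
    X1 = rng.normal(0.0, 1.0, size=(n_per_class, p))
    X1[:, :10] += 1.5                            # only the first 10 features differ
    X = np.vstack([X0, X1])
    y = np.repeat([0, 1], n_per_class)

    means = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
    pooled_var = np.mean([X[y == k].var(axis=0, ddof=1) for k in (0, 1)], axis=0)

    def predict(Xnew):
        # Per-class Gaussian log-density with a shared diagonal covariance.
        log_dens = [
            -0.5 * np.sum((Xnew - means[k]) ** 2 / pooled_var, axis=1)
            for k in (0, 1)
        ]
        return np.argmax(np.stack(log_dens, axis=1), axis=1)

    shift = np.r_[1.5 * np.ones(10), np.zeros(p - 10)]
    Xtest = np.vstack([rng.normal(size=(20, p)), rng.normal(size=(20, p)) + shift])
    ytest = np.repeat([0, 1], 20)
    print("accuracy:", (predict(Xtest) == ytest).mean())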
APA, Harvard, Vancouver, ISO, and other styles
44

Grimes, David B. "Learning by imitation and exploration : Bayesian models and applications in humanoid robotics /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/6879.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
45

Dondelinger, Frank. "Machine learning approach to reconstructing signalling pathways and interaction networks in biology." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/7850.

Full text of the source
Abstract:
In this doctoral thesis, I present my research into applying machine learning techniques for reconstructing species interaction networks in ecology, reconstructing molecular signalling pathways and gene regulatory networks in systems biology, and inferring parameters in ordinary differential equation (ODE) models of signalling pathways. Together, the methods I have developed for these applications demonstrate the usefulness of machine learning for reconstructing networks and inferring network parameters from data. The thesis consists of three parts. The first part is a detailed comparison of applying static Bayesian networks, relevance vector machines, and linear regression with L1 regularisation (LASSO) to the problem of reconstructing species interaction networks from species absence/presence data in ecology (Faisal et al., 2010). I describe how I generated data from a stochastic population model to test the different methods and how the simulation study led us to introduce spatial autocorrelation as an important covariate. I also show how we used the results of the simulation study to apply the methods to presence/absence data of bird species from the European Bird Atlas. The second part of the thesis describes a time-varying, non-homogeneous dynamic Bayesian network model for reconstructing signalling pathways and gene regulatory networks, based on Lèbre et al. (2010). I show how my work has extended this model to incorporate different types of hierarchical Bayesian information sharing priors and different coupling strategies among nodes in the network. The introduction of these priors reduces the inference uncertainty by putting a penalty on the number of structure changes among network segments separated by inferred changepoints (Dondelinger et al., 2010; Husmeier et al., 2010; Dondelinger et al., 2012b). Using both synthetic and real data, I demonstrate that using information sharing priors leads to a better reconstruction accuracy of the underlying gene regulatory networks, and I compare the different priors and coupling strategies. I show the results of applying the model to gene expression datasets from Drosophila melanogaster and Arabidopsis thaliana, as well as to a synthetic biology gene expression dataset from Saccharomyces cerevisiae. In each case, the underlying network is time-varying: for Drosophila melanogaster, as a consequence of measuring gene expression during different developmental stages; for Arabidopsis thaliana, as a consequence of measuring gene expression for circadian clock genes under different conditions; and for the synthetic biology dataset, as a consequence of changing the growth environment. I show that in addition to inferring sensible network structures, the model also successfully predicts the locations of changepoints. The third and final part of this thesis is concerned with parameter inference in ODE models of biological systems. This problem is of interest to systems biology researchers, as kinetic reaction parameters can often not be measured, or can only be estimated imprecisely from experimental data. Due to the cost of numerically solving the ODE system after each parameter adaptation, this is a computationally challenging problem. Gradient matching techniques circumvent this problem by directly fitting the derivatives of the ODE to the slope of an interpolant. I present an inference procedure for a model using nonparametric Bayesian statistics with Gaussian processes, based on Calderhead et al. (2008).
I show that the new inference procedure improves on the original formulation in Calderhead et al. (2008) and I present the result of applying it to ODE models of predator-prey interactions, a circadian clock gene, a signal transduction pathway, and the JAK/STAT pathway.
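The gradient-matching idea, fitting an interpolant to the data and choosing ODE parameters so that the right-hand side matches the interpolant's slope, can be sketched without the nonparametric Bayesian machinery; the spline interpolant, the logistic-growth ODE and the least-squares objective below are simplifying assumptions:

    # Minimal gradient-matching sketch: instead of solving the ODE for every
    # candidate parameter, fit a smooth interpolant to the data and match its
    # derivative to the ODE right-hand side. The spline, the logistic-growth
    # ODE and the least-squares objective are simplifying assumptions; the
    # approach described above uses Gaussian processes and full Bayesian inference.
    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    t = np.linspace(0, 10, 40)
    true_r, true_K = 0.8, 5.0
    x_true = true_K / (1 + (true_K - 0.5) / 0.5 * np.exp(-true_r * t))  # logistic growth
    x_obs = x_true + 0.05 * rng.normal(size=t.size)

    spline = UnivariateSpline(t, x_obs, s=0.1)       # interpolant of the noisy data
    x_hat = spline(t)
    dx_hat = spline.derivative()(t)                  # slope of the interpolant

    def ode_rhs(x, r, K):
        return r * x * (1 - x / K)                   # logistic-growth right-hand side

    def mismatch(params):
        r, K = params
        return np.sum((dx_hat - ode_rhs(x_hat, r, K)) ** 2)

    result = minimize(mismatch, x0=[0.5, 3.0], method="Nelder-Mead")
    print(result.x)   # should be roughly (0.8, 5.0)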
APA, Harvard, Vancouver, ISO, and other styles
46

Michelen, Strofer Carlos Alejandro. "Machine Learning and Field Inversion approaches to Data-Driven Turbulence Modeling." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103155.

Full text of the source
Abstract:
There still is a practical need for improved closure models for the Reynolds-averaged Navier-Stokes (RANS) equations. This dissertation explores two different approaches for using experimental data to provide improved closure for the Reynolds stress tensor field. The first approach uses machine learning to learn a general closure model from data. A novel framework is developed to train deep neural networks using experimental velocity and pressure measurements. The sensitivity of the RANS equations to the Reynolds stress, required for gradient-based training, is obtained by means of both variational and ensemble methods. The second approach is to infer the Reynolds stress field for a flow of interest from limited velocity or pressure measurements of the same flow. Here, this field inversion is done using a Monte Carlo Bayesian procedure and the focus is on improving the inference by enforcing known physical constraints on the inferred Reynolds stress field. To this end, a method for enforcing boundary conditions on the inferred field is presented. The two data-driven approaches explored and improved upon here demonstrate the potential for improved practical RANS predictions.
Doctor of Philosophy
The Reynolds-averaged Navier-Stokes (RANS) equations are widely used to simulate fluid flows in engineering applications despite their known inaccuracy in many flows of practical interest. The uncertainty in the RANS equations is known to stem from the Reynolds stress tensor for which no universally applicable turbulence model exists. The computational cost of more accurate methods for fluid flow simulation, however, means RANS simulations will likely continue to be a major tool in engineering applications and there is still a need for improved RANS turbulence modeling. This dissertation explores two different approaches to use available experimental data to improve RANS predictions by improving the uncertain Reynolds stress tensor field. The first approach is using machine learning to learn a data-driven turbulence model from a set of training data. This model can then be applied to predict new flows in place of traditional turbulence models. To this end, this dissertation presents a novel framework for training deep neural networks using experimental measurements of velocity and pressure. When using velocity and pressure data, gradient-based training of the neural network requires the sensitivity of the RANS equations to the learned Reynolds stress. Two different methods, the continuous adjoint and ensemble approximation, are used to obtain the required sensitivity. The second approach explored in this dissertation is field inversion, whereby available data for a flow of interest is used to infer a Reynolds stress field that leads to improved RANS solutions for that same flow. Here, the field inversion is done via the ensemble Kalman inversion (EKI), a Monte Carlo Bayesian procedure, and the focus is on improving the inference by enforcing known physical constraints on the inferred Reynolds stress field. To this end, a method for enforcing boundary conditions on the inferred field is presented. While further development is needed, the two data-driven approaches explored and improved upon here demonstrate the potential for improved practical RANS predictions.
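A minimal ensemble Kalman inversion sketch for a toy linear forward model is shown below; the forward model, ensemble size and noise level are assumptions, whereas in the dissertation the unknown is a Reynolds stress field and the forward model is a RANS solver:

    # Minimal ensemble Kalman inversion (EKI) sketch for a toy forward model.
    # The linear forward model, ensemble size and observation noise are
    # illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(6)

    def forward(theta):
        # Toy forward model G(theta); a RANS solver would stand here.
        G = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]])
        return G @ theta

    theta_true = np.array([1.0, -2.0])
    obs_noise = 0.05
    y_obs = forward(theta_true) + obs_noise * rng.normal(size=3)
    Gamma = obs_noise ** 2 * np.eye(3)

    # Initial ensemble drawn from the prior.
    ensemble = rng.normal(0.0, 2.0, size=(100, 2))

    for _ in range(10):
        outputs = np.array([forward(th) for th in ensemble])
        th_mean, out_mean = ensemble.mean(axis=0), outputs.mean(axis=0)
        C_tg = (ensemble - th_mean).T @ (outputs - out_mean) / (len(ensemble) - 1)
        C_gg = (outputs - out_mean).T @ (outputs - out_mean) / (len(ensemble) - 1)
        K = C_tg @ np.linalg.inv(C_gg + Gamma)       # Kalman-style gain
        perturbed_obs = y_obs + obs_noise * rng.normal(size=(len(ensemble), 3))
        ensemble = ensemble + (perturbed_obs - outputs) @ K.T

    print(ensemble.mean(axis=0))   # should approach theta_true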
APA, Harvard, Vancouver, ISO, and other styles
47

Matosevic, Antonio. "On Bayesian optimization and its application to hyperparameter tuning." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-74962.

Full text of the source
Abstract:
This thesis introduces the concept of Bayesian optimization, primarily used in optimizing costly black-box functions. Besides theoretical treatment of the topic, the focus of the thesis is on two numerical experiments. Firstly, different types of acquisition functions, which are the key components responsible for the performance, are tested and compared. Special emphasis is on the analysis of the so-called exploration-exploitation trade-off. Secondly, one of the most recent applications of Bayesian optimization concerns hyperparameter tuning in machine learning algorithms, where the objective function is expensive to evaluate and not given analytically. However, some results indicate that much simpler methods can give similar results. Our contribution is therefore a statistical comparison of simple random search and Bayesian optimization in the context of finding the optimal set of hyperparameters in support vector regression. It has been found that there is no significant difference in the performance of these two methods.
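A minimal Bayesian-optimisation loop with an expected-improvement acquisition function is sketched below; the objective, kernel and candidate grid are assumptions, and the thesis compares several acquisition functions and benchmarks the approach against random search:

    # Minimal Bayesian optimisation sketch with an expected-improvement (EI)
    # acquisition function on a 1-D toy objective. Objective, kernel and the
    # candidate grid are illustrative assumptions.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def objective(x):
        return np.sin(3 * x) + 0.1 * x ** 2          # pretend this is expensive

    rng = np.random.default_rng(7)
    X = rng.uniform(-3, 3, size=(3, 1))              # a few initial evaluations
    y = objective(X).ravel()
    candidates = np.linspace(-3, 3, 500).reshape(-1, 1)

    for _ in range(15):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        best = y.min()                                # minimisation
        z = (best - mu) / np.maximum(sigma, 1e-9)
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
        x_next = candidates[np.argmax(ei)].reshape(1, -1)
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next).ravel())

    print("best x:", X[np.argmin(y)], "best value:", y.min())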
APA, Harvard, Vancouver, ISO, and other styles
48

Richmond, James Howard. "Bayesian Logistic Regression Models for Software Fault Localization." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1326658577.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
49

Qiao, Junqing. "Semi-Autonomous Wheelchair Navigation With Statistical Context Prediction." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/869.

Full text of the source
Abstract:
"This research introduces the structure and elements of the system used to predict the user's interested location. The combination of DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm and GMM (Gaussian Mixture Model) algorithm is used to find locations where the user usually visits. In addition, the testing result of applying other clustering algorithms such as Gaussian Mixture model, Density Based clustering algorithm and K-means clustering algorithm on actual data are also shown as comparison. With having the knowledge of locations where the user usually visits, Discrete Bayesian Network is generated from the user's time-sequence location data. Combining the Bayesian Network, the user's current location and the time when the user left the other locations, the user's interested location can be predicted."
APA, Harvard, Vancouver, ISO, and other styles
50

Walker, Daniel David. "Bayesian Test Analytics for Document Collections." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3530.

Full text of the source
Abstract:
Modern document collections are too large to annotate and curate manually. As increasingly large amounts of data become available, historians, librarians and other scholars increasingly need to rely on automated systems to efficiently and accurately analyze the contents of their collections and to find new and interesting patterns therein. Modern techniques in Bayesian text analytics are becoming widespread and have the potential to revolutionize the way that research is conducted. Much work has been done in the document modeling community towards this end, though most of it is focused on modern, relatively clean text data. We present research for improved modeling of document collections that may contain textual noise or that may include real-valued metadata associated with the documents. This class of documents includes many historical document collections. Indeed, our specific motivation for this work is to help improve the modeling of historical documents, which are often noisy and/or have historical context represented by metadata. Many historical documents are digitized by means of Optical Character Recognition (OCR) from document images of old and degraded original documents. Historical documents also often include associated metadata, such as timestamps, which can be incorporated in an analysis of their topical content. Many techniques, such as topic models, have been developed to automatically discover patterns of meaning in large collections of text. While these methods are useful, they can break down in the presence of OCR errors. We show the extent to which this performance breakdown occurs. The specific types of analyses covered in this dissertation are document clustering, feature selection, unsupervised and supervised topic modeling for documents with and without OCR errors, and a new supervised topic model that uses Bayesian nonparametrics to improve the modeling of document metadata. We present results in each of these areas, with an emphasis on studying the effects of noise on the performance of the algorithms and on modeling the metadata associated with the documents. In this research we effectively: improve the state of the art in both document clustering and topic modeling; introduce a useful synthetic dataset for historical document researchers; and present analyses that empirically show how existing algorithms break down in the presence of OCR errors.
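A minimal topic-model sketch using latent Dirichlet allocation on a toy corpus is given below; the corpus and topic count are assumptions, and the models developed in the dissertation additionally handle OCR noise and document metadata, which plain LDA does not:

    # Minimal topic-model sketch on a toy corpus using latent Dirichlet
    # allocation. The corpus and topic count are illustrative assumptions.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "the church parish record lists births and marriages",
        "parish registers record baptisms performed by the church",
        "the ship manifest lists passengers and cargo for the voyage",
        "cargo and passengers appear on every ship manifest",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(docs)
    vocab = vectorizer.get_feature_names_out()

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    for k, topic in enumerate(lda.components_):
        top_words = [vocab[i] for i in topic.argsort()[-4:][::-1]]
        print(f"topic {k}:", ", ".join(top_words))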
APA, Harvard, Vancouver, ISO, and other styles