
Dissertations / Theses on the topic 'Extremes of Gaussian processes'


Consult the top 50 dissertations / theses for your research on the topic 'Extremes of Gaussian processes.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Kratz, Marie. "Some contributions in probability and statistics of extremes." Habilitation à diriger des recherches, Université Panthéon-Sorbonne - Paris I, 2005. http://tel.archives-ouvertes.fr/tel-00239329.

2

Stewart, Michael Ian. "Asymptotic methods for tests of homogeneity for finite mixture models." Thesis, The University of Sydney, 2002. http://hdl.handle.net/2123/855.

Abstract:
We present limit theory for tests of homogeneity for finite mixture models. More specifically, we derive the asymptotic distribution of certain random quantities used for testing that a mixture of two distributions is in fact just a single distribution. Our methods apply to cases where the mixture component distributions come from one of a wide class of one-parameter exponential families, both continuous and discrete. We consider two random quantities, one related to testing simple hypotheses, the other composite hypotheses. For simple hypotheses we consider the maximum of the standardised score process, which is itself a test statistic. For composite hypotheses we consider the maximum of the efficient score process, which is not itself a statistic (it depends on the unknown true distribution) but is asymptotically equivalent, in a certain sense, to common test statistics. We show that we can approximate both quantities with the maximum of a certain Gaussian process depending on the sample size and the true distribution of the observations, which, when suitably normalised, has a limiting distribution of the Gumbel extreme value type. Although the limit theory is not practically useful for computing approximate p-values, we use Monte Carlo simulations to show that another method suggested by the theory, involving a Studentised version of the maximum-score statistic and simulation of a Gaussian process to compute approximate p-values, is remarkably accurate and uses a fraction of the computing resources that a straight Monte Carlo approximation would.
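The simulation strategy the abstract describes can be sketched in a few lines: draw paths of a Gaussian process on a grid, take their maxima, and use the empirical tail as an approximate p-value. The Ornstein–Uhlenbeck covariance, the grid, and the observed statistic below are illustrative placeholders, not the thesis's actual score-process covariance, which depends on the mixture family.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter grid and Ornstein-Uhlenbeck-type covariance;
# the real score-process covariance depends on the exponential family.
grid = np.linspace(0.1, 2.0, 50)
cov = np.exp(-np.abs(grid[:, None] - grid[None, :]))

# Cholesky factor for sampling zero-mean Gaussian paths on the grid.
chol = np.linalg.cholesky(cov + 1e-10 * np.eye(len(grid)))

# Simulate process maxima and read off an empirical tail probability
# for a (hypothetical) observed maximum-score statistic.
n_sims = 10000
paths = rng.standard_normal((n_sims, len(grid))) @ chol.T
maxima = paths.max(axis=1)

observed = 2.5  # hypothetical observed test statistic
p_value = float((maxima >= observed).mean())
```

Simulating maxima directly in this way is what makes the approach cheap relative to a full Monte Carlo over the original test procedure.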
3

Stewart, Michael Ian. "Asymptotic methods for tests of homogeneity for finite mixture models." University of Sydney. Mathematics and Statistics, 2002. http://hdl.handle.net/2123/855.

4

Schmid, Christoph Manuel. "Extreme values of Gaussian processes and a heterogeneous multi agents model." [S.l.] : [s.n.], 2002. http://www.zb.unibe.ch/download/eldiss/02schmid_c.pdf.

5

Engelke, Sebastian. "Brown-Resnick Processes: Analysis, Inference and Generalizations." Doctoral thesis, Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2012. http://hdl.handle.net/11858/00-1735-0000-000D-F1B3-2.

6

Skolidis, Grigorios. "Transfer learning with Gaussian processes." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6271.

Abstract:
Transfer Learning is an emerging framework for learning from data that aims at intelligently transferring information between tasks. This is achieved by developing algorithms that can perform multiple tasks simultaneously, as well as translating previously acquired knowledge to novel learning problems. In this thesis, we investigate the application of Gaussian Processes to various forms of transfer learning with a focus on classification problems. The thesis begins with a thorough introduction to the framework of Transfer Learning, providing a clear taxonomy of the areas of research. We then review the recent advances in multi-task learning for regression with Gaussian processes, and compare the performance of some of these methods on a real data set. This review gives insights about the strengths and weaknesses of each method, which acts as a point of reference for applying these methods to other forms of transfer learning. The main contributions of this thesis are reported in the three following chapters. The third chapter investigates the application of multi-task Gaussian processes to classification problems. We extend a previously proposed model to the classification scenario, providing three inference methods due to the non-Gaussian likelihood the classification paradigm imposes. The fourth chapter extends the multi-task scenario to the semi-supervised case. Using labeled and unlabeled data, we construct a novel covariance function that is able to capture the geometry of the distribution of each task. This setup allows unlabeled data to be utilised to infer the level of correlation between the tasks. Moreover, we also discuss the potential use of this model in situations where no labeled data are available for certain tasks. The fifth chapter investigates a novel form of transfer learning called meta-generalising.
The question at hand is whether, after training on a sufficient number of tasks, it is possible to make predictions on a novel task. In this situation, the predictor is embedded in an environment of multiple tasks but has no information about the origins of the test task. This elevates the concept of generalising from the level of data to the level of tasks. We employ a model based on a hierarchy of Gaussian processes, in a mixture-of-experts sense, to make predictions based on the relation between the distributions of the novel and the training tasks. Each chapter is accompanied by a thorough experimental part giving insights into the potential and the limits of the proposed methods.
7

Blitvic, Natasa. "Two-parameter noncommutative Gaussian processes." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78440.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
The reality of billion-user networks and multi-terabyte data sets brings forth the need for accurate and computationally tractable descriptions of large random structures, such as random matrices or random graphs. The modern mathematical theory of free probability is increasingly giving rise to analysis tools specifically adapted to such large-dimensional regimes and, more generally, non-commutative probability is emerging as an area of interdisciplinary interest. This thesis develops a new non-commutative probabilistic framework that is both a natural generalization of several existing frameworks (viz. free probability, q-deformed probability) and a setting in which to describe a broader class of random matrix limits. From the practical perspective, this new setting is particularly interesting in its ability to characterize the behavior of large random objects that asymptotically retain a certain degree of commutative structure and therefore fall outside the scope of free probability. The type of commutative structure considered is modeled on the two-parameter families of generalized harmonic oscillators found in physics and the presently introduced framework may be viewed as a two-parameter deformation of classical probability. Specifically, we introduce (1) a generalized Non-commutative Central Limit Theorem giving rise to a two-parameter deformation of the classical Gaussian statistics and (2) a two-parameter continuum of non-commutative probability spaces in which to realize these statistics. The framework that emerges has a remarkably rich combinatorial structure and bears upon a number of well-known mathematical objects, such as a quantum deformation of the Airy function, that had not previously played a prominent role in a probabilistic setting. 
Finally, the present framework paves the way to new types of asymptotic results, by providing more general asymptotic theorems and revealing new layers of structure in previously known results, notably in the "correlated process version" of Wigner's Semicircle Law.
8

Feng, Shimin. "Sensor fusion with Gaussian processes." Thesis, University of Glasgow, 2014. http://theses.gla.ac.uk/5626/.

Abstract:
This thesis presents a new approach to multi-rate sensor fusion for (1) user matching and (2) position stabilisation and lag reduction. The Microsoft Kinect sensor and the inertial sensors in a mobile device are fused with a Gaussian Process (GP) prior method. We present a Gaussian Process prior model-based framework for multisensor data fusion and explore the use of this model for fusing mobile inertial sensors and an external position sensing device. The Gaussian Process prior model provides a principled mechanism for incorporating the low-sampling-rate position measurements and the high-sampling-rate derivatives in multi-rate sensor fusion, which takes account of the uncertainty of each sensor type. We explore the complementary properties of the Kinect sensor and the built-in inertial sensors in a mobile device and apply the GP framework for sensor fusion in the mobile human-computer interaction area. The Gaussian Process prior model-based sensor fusion is presented as a principled probabilistic approach to dealing with position uncertainty and the lag of the system, which are critical for indoor augmented reality (AR) and other location-aware sensing applications. The sensor fusion helps increase the stability of the position and reduce the lag. This is of great benefit for improving the usability of a human-computer interaction system. We develop two applications using the novel and improved GP prior model. (1) User matching and identification. We apply the GP model to identify individual users, by matching the observed Kinect skeletons with the sensed inertial data from their mobile devices. (2) Position stabilisation and lag reduction in a spatially aware display application for user performance improvement. We conduct a user study. 
Experimental results show the improved accuracy of target selection, and reduced delay from the sensor fusion system, allowing the users to acquire the target more rapidly, and with fewer errors in comparison with the Kinect filtered system. They also reported improved performance in subjective questions. The two applications can be combined seamlessly in a proxemic interaction system as identification of people and their positions in a room-sized environment plays a key role in proxemic interactions.
9

Aguilar, Tamara Alejandra Fernandez. "Gaussian processes for survival analysis." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:b5a7a3b2-d1bd-40f1-9b8d-dbb2b9cedd29.

Abstract:
Survival analysis is an old area of statistics dedicated to the study of time-to-event random variables. Typically, survival data have three important characteristics. First, the response is a waiting time until the occurrence of a predetermined event. Second, the response can be "censored", meaning that we do not observe its actual value but a bound for it. Last, covariates are present. While there exist some feasible parametric methods for modelling this type of data, they usually impose very strong assumptions on the real complexity of the response and how it interacts with the covariates. While these assumptions allow us to have tractable inference schemes, we lose inference power and overlook important relationships in the data. Due to the inherent limitations of parametric models, it is natural to consider non-parametric approaches. In this thesis, we introduce a novel Bayesian non-parametric model for survival data. The model is based on using a positive map of a Gaussian process with stationary covariance function as a prior over the so-called hazard function. This model is carefully studied in terms of prior behaviour and posterior consistency. Alternatives to incorporate covariates are discussed, as well as an exact and tractable inference scheme.
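The construction described here, a positive transform of a Gaussian process used as a prior on the hazard, can be illustrated with a small sketch. The Ornstein–Uhlenbeck covariance, the sigmoid link, and the baseline rate are assumptions made for illustration; the thesis's exact choices may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Time grid and a simple stationary covariance for the latent GP.
t = np.linspace(0.0, 10.0, 50)
K = np.exp(-np.abs(t[:, None] - t[None, :]))
chol = np.linalg.cholesky(K + 1e-8 * np.eye(len(t)))

# Latent GP path mapped through a positive link to get a hazard.
f = chol @ rng.standard_normal(len(t))
baseline = 0.5  # assumed baseline rate
hazard = baseline / (1.0 + np.exp(-f))  # sigmoid keeps the hazard positive

# Survival function S(t) = exp(-cumulative hazard), trapezoidal rule.
dt = t[1] - t[0]
cum_hazard = np.concatenate([[0.0], np.cumsum(dt * (hazard[1:] + hazard[:-1]) / 2)])
survival = np.exp(-cum_hazard)
```

Because the link keeps the hazard strictly positive, the resulting survival curve starts at one and is monotonically non-increasing, as a survival function must be.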
10

Beck, Daniel Emilio. "Gaussian processes for text regression." Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/17619/.

Abstract:
Text Regression is the task of modelling and predicting numerical indicators or response variables from textual data. It arises in a range of different problems, from sentiment and emotion analysis to text-based forecasting. Most models in the literature apply simple text representations such as bag-of-words and predict response variables in the form of point estimates. These simplifying assumptions ignore important information coming from the data such as the underlying uncertainty present in the outputs and the linguistic structure in the textual inputs. The former is particularly important when the response variables come from human annotations while the latter can capture linguistic phenomena that go beyond simple lexical properties of a text. In this thesis our aim is to advance the state-of-the-art in Text Regression by improving these two aspects, better uncertainty modelling in the response variables and improved text representations. Our main workhorse to achieve these goals is Gaussian Processes (GPs), a Bayesian kernelised probabilistic framework. GP-based regression models the response variables as well-calibrated probability distributions, providing additional information in predictions which in turn can improve subsequent decision making. They also model the data using kernels, enabling richer representations based on similarity measures between texts. To be able to reach our main goals we propose new kernels for text which aim at capturing richer linguistic information. These kernels are then parameterised and learned from the data using efficient model selection procedures that are enabled by the GP framework. Finally we also capitalise on recent advances in the GP literature to better capture uncertainty in the response variables, such as multi-task learning and models that can incorporate non-Gaussian variables through the use of warping functions. 
Our proposed architectures are benchmarked in two Text Regression applications: Emotion Analysis and Machine Translation Quality Estimation. Overall we are able to obtain better results compared to baselines while also providing uncertainty estimates for predictions in the form of posterior distributions. Furthermore we show how these models can be probed to obtain insights about the relation between the data and the response variables and also how to apply predictive distributions in subsequent decision making procedures.
11

Csató, Lehel. "Gaussian processes : iterative sparse approximations." Thesis, Aston University, 2002. http://publications.aston.ac.uk/1327/.

Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation uses a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on the KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution.
The resulting sparse learning algorithm is a generic one: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance on classical regression and classification tasks as well as on data assimilation and a simple density estimation problem.
12

Andrade, Pacheco R. "Gaussian processes for spatiotemporal modelling." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/11173/.

Abstract:
A statistical framework for spatiotemporal modelling should ideally be able to assimilate different types of data from different sources. Gaussian processes are a commonly used tool for interpolating values across time and space domains. In this thesis we work on extending the Gaussian process framework to deal with diverse noise model assumptions. We present a model based on a hybrid approach that combines some of the features of the discriminative and generative perspectives, allowing continuous dimensionality reduction of hybrid discrete-continuous data, discriminative classification with missing inputs and manifold learning informed by class labels. We present an application of malaria density modelling across Uganda using administrative records. This disease represents a threat to approximately 3.3 billion people around the globe. The analysis of malaria based on the records available faces two main complications: noise induced by a highly variable rate of reporting among health facilities, and a lack of comparability across time due to changes in district delimitation. We define a Gaussian process model able to assimilate these features of the data and provide insight into the generating process behind the records. Finally, a method to monitor malaria case counts is proposed. We use vector-valued covariance kernels to analyse the time series components individually. The short-term variations of the infection are divided into four cyclical phases. The graphical tool provided can help quick response planning and resource allocation.
13

Metheny, Maryssa. "Covariance structures of Gaussian and log-Gaussian vector stochastic processes." Diss., Wichita State University, 2012. http://hdl.handle.net/10057/5587.

Abstract:
Although the covariance structures of univariate Gaussian and log-Gaussian stochastic processes have been extensively studied in the past few decades, the development of covariance structures for Gaussian and log-Gaussian vector stochastic processes is still in the early stages. Specifically, there has been little discussion about how to construct the covariance matrix functions of multivariate Gaussian time series with long memory, especially ones with power-law and log-law decaying covariance structures. Furthermore, there have been relatively few results about how to determine whether a given matrix function is the covariance matrix function of a log-Gaussian vector random field. This dissertation provides new methods for identifying and constructing covariance matrix functions of Gaussian vector time series and log-Gaussian vector random fields. In particular, research is presented on how to find covariance matrix structures with power-law decaying and log-law decaying direct and cross covariances. Also, the intricate relationship between the mean function and the covariance matrix function of the log-Gaussian vector random field is explored. In addition, operation-preserving properties are investigated for the log-Gaussian vector random field.
Thesis (Ph.D.)--Wichita State University, College of Liberal Arts and Sciences, Dept. of Mathematics, Statistics, and Physics
14

Yerramothu, Madhu Kishore. "Stochastic Gaussian and non-Gaussian signal modeling." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

15

Hägg, Jonas. "Gaussian fluctuations in some determinantal processes." Doctoral thesis, KTH, Matematik (Inst.), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4343.

Abstract:
This thesis consists of two parts, Papers A and B, in which some stochastic processes originating from random matrix theory (RMT) are studied. In the first paper we study the fluctuations of the kth largest eigenvalue, x_k, of the Gaussian unitary ensemble (GUE). That is, let N be the dimension of the matrix and let k depend on N in such a way that k and N − k both tend to infinity as N → ∞. The main result is that x_k, when appropriately rescaled, converges in distribution to a Gaussian random variable as N → ∞. Furthermore, if k_1 < ... < k_m are such that k_1, k_{i+1} − k_i and N − k_m, i = 1, ..., m − 1, tend to infinity as N → ∞, it is shown that (x_{k_1}, ..., x_{k_m}) is multivariate Gaussian in the rescaled N → ∞ limit. In the second paper we study the Airy process, A(t), and prove that it fluctuates like a Brownian motion on a local scale. We also prove that the discrete polynuclear growth process (PNG) fluctuates like a Brownian motion in a scaling limit smaller than the one in which one gets the Airy process.
16

Wågberg, Johan, and Viklund Emanuel Walldén. "Continuous Occupancy Mapping Using Gaussian Processes." Thesis, Linköpings universitet, Reglerteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81464.

Abstract:
The topic of this thesis is occupancy mapping for mobile robots, with an emphasis on a novel method for continuous occupancy mapping using Gaussian processes. In the new method, spatial correlation is accounted for in a natural way, and an a priori discretization of the area to be mapped is not necessary, as it is within most other common methods. The main contribution of this thesis is the construction of a Gaussian process library for C++, and the use of this library to implement the continuous occupancy mapping algorithm. The continuous occupancy mapping is evaluated using both simulated and real-world experimental data. The main result is that the method, in its current form, is not fit for online operation due to its computational complexity. By using approximations and ad hoc solutions, the method can be run in real time on a mobile robot, though not without losing many of its benefits.
17

Duvenaud, David. "Automatic model construction with Gaussian processes." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/247281.

Abstract:
This thesis develops a method for automatically constructing, visualizing and describing a large class of models, useful for forecasting and finding structure in domains such as time series, geological formations, and physical dynamics. These models, based on Gaussian processes, can capture many types of statistical structure, such as periodicity, changepoints, additivity, and symmetries. Such structure can be encoded through kernels, which have historically been hand-chosen by experts. We show how to automate this task, creating a system that explores an open-ended space of models and reports the structures discovered. To automatically construct Gaussian process models, we search over sums and products of kernels, maximizing the approximate marginal likelihood. We show how any model in this class can be automatically decomposed into qualitatively different parts, and how each component can be visualized and described through text. We combine these results into a procedure that, given a dataset, automatically constructs a model along with a detailed report containing plots and generated text that illustrate the structure discovered in the data. The introductory chapters contain a tutorial showing how to express many types of structure through kernels, and how adding and multiplying different kernels combines their properties. Examples also show how symmetric kernels can produce priors over topological manifolds such as cylinders, toruses, and Möbius strips, as well as their higher-dimensional generalizations. This thesis also explores several extensions to Gaussian process models. First, building on existing work that relates Gaussian processes and neural nets, we analyze natural extensions of these models to deep kernels and deep Gaussian processes. Second, we examine additive Gaussian processes, showing their relation to the regularization method of dropout. 
Third, we combine Gaussian processes with the Dirichlet process to produce the warped mixture model: a Bayesian clustering model having nonparametric cluster shapes, and a corresponding latent space in which each cluster has an interpretable parametric form.
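The kernel-composition idea in the abstract, building structure by adding and multiplying base kernels, can be sketched as follows. The base kernels and hyperparameters below are standard textbook choices for illustration, not the thesis's search space verbatim.

```python
import numpy as np

def rbf(x, y, ell=1.0):
    """Squared-exponential kernel: smooth, local structure."""
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

def periodic(x, y, ell=1.0, p=1.0):
    """Exactly periodic structure with period p."""
    d = np.pi * np.abs(x[:, None] - y[None, :]) / p
    return np.exp(-2.0 * np.sin(d) ** 2 / ell ** 2)

def linear(x, y):
    """Linear trends."""
    return x[:, None] * y[None, :]

# Sums model additive structure; products combine properties,
# e.g. RBF * periodic gives "locally periodic" functions.
x = np.linspace(0.0, 10.0, 20)
K_additive = rbf(x, x) + linear(x, x)
K_locally_periodic = rbf(x, x, ell=5.0) * periodic(x, x)
```

Both sums and elementwise products of valid kernels are again valid kernels, which is what makes an open-ended compositional search over kernel expressions well defined.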
18

Hägg, Jonas. "Gaussian fluctuations in some determinantal processes /." Stockholm : Matematik, Kungliga Tekniska högskolan, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4343.

19

Chai, Kian Ming. "Multi-task learning with Gaussian processes." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/3847.

Abstract:
Multi-task learning refers to learning multiple tasks simultaneously, in order to avoid tabula rasa learning and to share information between similar tasks during learning. We consider a multi-task Gaussian process regression model that learns related functions by inducing correlations between tasks directly. Using this model as a reference for three other multi-task models, we provide a broad unifying view of multi-task learning. This is possible because, unlike the other models, the multi-task Gaussian process model encodes task relatedness explicitly. Each multi-task learning model generally assumes that learning multiple tasks together is beneficial. We analyze how and the extent to which multi-task learning helps improve the generalization of supervised learning. Our analysis is conducted for the average case on the multi-task Gaussian process model, and we concentrate mainly on the case of two tasks, called the primary task and the secondary task. The main parameters are the degree of relatedness ρ between the two tasks, and π_S, the fraction of the total training observations from the secondary task. Among other results, we show that asymmetric multi-task learning, where the secondary task is to help the learning of the primary task, can decrease a lower bound on the average generalization error by a factor of up to ρ²π_S. When there are no observations for the primary task, there is also an intrinsic limit to which observations for the secondary task can help the primary task. For symmetric multi-task learning, where the two tasks are to help each other to learn, we find the learning to be characterized by the term π_S(1 − π_S)(1 − ρ²). As far as we are aware, our analysis contributes to an understanding of multi-task learning that is orthogonal to the existing PAC-based results on multi-task learning.
For more than two tasks, we provide an understanding of the multi-task Gaussian process model through structures in the predictive means and variances given certain configurations of training observations. These results generalize existing ones in the geostatistics literature, and may have practical applications in that domain. We evaluate the multi-task Gaussian process model on the inverse dynamics problem for a robot manipulator. The inverse dynamics problem is to compute the torques needed at the joints to drive the manipulator along a given trajectory, and there are advantages to learning this function for adaptive control. A robot manipulator will often need to be controlled while holding different loads in its end effector, giving rise to a multi-context or multi-load learning problem, and we treat predicting the inverse dynamics for a context/load as a task. We view the learning of the inverse dynamics as a function approximation problem and place Gaussian process priors over the space of functions. We first show that this is effective for learning the inverse dynamics for a single context. Then, by placing independent Gaussian process priors over the latent functions of the inverse dynamics, we obtain a multi-task Gaussian process prior for handling multiple loads, where the inter-context similarity depends on the underlying inertial parameters of the manipulator. Experiments demonstrate that this multi-task formulation is effective in sharing information among the various loads, and generally improves performance over either learning only on single contexts or pooling the data over all contexts. In addition to the experimental results, one of the contributions of this study is showing that the multi-task Gaussian process model follows naturally from the physics of the inverse dynamics.
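The task-correlation structure described in this abstract is often realised as an intrinsic coregionalisation model, in which the covariance factorises into a task-similarity matrix and an input kernel. The sketch below uses that standard construction with an assumed relatedness ρ = 0.7; it is not necessarily the thesis's exact parameterisation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Shared inputs and an input-space kernel.
x = np.linspace(0.0, 1.0, 30)
Kx = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)

# Task-similarity matrix for a primary and a secondary task,
# playing the role of the relatedness parameter rho above.
rho = 0.7
B = np.array([[1.0, rho], [rho, 1.0]])

# Covariance over all (task, input) pairs: Kronecker structure.
K = np.kron(B, Kx)

# One joint draw gives correlated functions for the two tasks.
chol = np.linalg.cholesky(K + 1e-6 * np.eye(K.shape[0]))
f = chol @ rng.standard_normal(K.shape[0])
f_primary, f_secondary = f[:30], f[30:]
```

The off-diagonal block of K is ρ·Kx, so observations of the secondary task directly inform predictions for the primary task, which is the mechanism behind the ρ²π_S improvement discussed in the abstract.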
APA, Harvard, Vancouver, ISO, and other styles
20

Ou, Xiaoling. "Batch process modelling with Gaussian processes." Thesis, University of Newcastle Upon Tyne, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.440591.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Jidling, Carl. "Strain Field Modelling using Gaussian Processes." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-315254.

Full text
Abstract:
This report deals with the reconstruction of strain fields within deformed materials. The method relies upon data generated from Bragg edge measurements, in which information is gained from neutron beams sent through the sample. The reconstruction has been made by modelling the strain field as a Gaussian process, assigned a covariance structure customized by incorporation of the so-called equilibrium constraints. By making use of an approximation scheme well suited to the problem, the complexity of the computations has been significantly reduced. The results from numerical simulations indicate better performance compared to previous work in this area.
APA, Harvard, Vancouver, ISO, and other styles
22

Hultin, Hanna. "Evaluation of Massively Scalable Gaussian Processes." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209244.

Full text
Abstract:
Gaussian process methods are flexible non-parametric Bayesian methods used for regression and classification. They allow for explicit handling of uncertainty and are able to learn complex structures in the data. Their main limitation is their scaling characteristics: for n training points the complexity is O(n³) for training and O(n²) for prediction per test data point. This makes full Gaussian process methods prohibitive to use on training sets larger than a few thousand data points. There has been recent research on approximation methods to make Gaussian processes scalable without severely affecting the performance. Some of these new approximation techniques are still not fully investigated and in a practical situation it is hard to know which method to choose. This thesis examines and evaluates scalable GP methods, especially focusing on the framework Massively Scalable Gaussian Processes introduced by Wilson et al. in 2016, which reduces the training complexity to nearly O(n) and the prediction complexity to O(1). The framework involves inducing point methods, local covariance function interpolation, exploitation of structured matrices and projections to low-dimensional spaces. The properties of the different approximations are studied and the possibilities of making improvements are discussed.
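The O(n³)-to-near-O(n) theme is easiest to see in a basic inducing-point approximation, a precursor of the structured-interpolation framework evaluated in the thesis. Below is a minimal Subset-of-Regressors sketch in which an m × m solve replaces the full n × n solve, giving O(nm²) training cost; the kernel, inducing locations and noise level are illustrative assumptions:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def sor_predict(x_tr, y_tr, x_te, z, noise=1e-2):
    """Subset-of-Regressors (Nystrom-type) predictive mean with m inducing points z:
    the n x n solve of a full GP becomes an m x m solve, O(n m^2) overall."""
    Kzx = rbf(z, x_tr)
    A = noise * rbf(z, z) + Kzx @ Kzx.T          # m x m system instead of n x n
    return rbf(x_te, z) @ np.linalg.solve(A, Kzx @ y_tr)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 4.0, 200))
y = np.sin(2 * x) + 0.1 * rng.standard_normal(200)
z = np.linspace(0.0, 4.0, 15)                    # 15 inducing points summarize 200 observations
pred = sor_predict(x, y, np.array([1.0, 2.0]), z)
print(pred)                                      # close to sin(2.0) and sin(4.0)
```

The MSGP framework goes further by placing the inducing points on a grid and interpolating the covariance locally, so that structured (Kronecker/Toeplitz) algebra applies; this sketch only shows the inducing-point layer it builds on.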
APA, Harvard, Vancouver, ISO, and other styles
23

Plagemann, Christian. "Gaussian processes for flexible robot learning." [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:25-opus-61088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Jidling, Carl. "Tailoring Gaussian processes for tomographic reconstruction." Licentiate thesis, Uppsala universitet, Avdelningen för systemteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-394093.

Full text
Abstract:
A probabilistic model reasons about physical quantities as random variables that can be estimated from measured data. The Gaussian process is a respected member of this family, being a flexible non-parametric method that has proven strong capabilities in modelling a wide range of nonlinear functions. This thesis focuses on advanced Gaussian process techniques; the contribution consists of practical methodologies primarily intended for inverse tomographic applications. In our most theoretical formulation, we propose a constructive procedure for building a customised covariance function given any set of linear constraints. These are explicitly incorporated in the prior distribution and thereby guaranteed to be fulfilled by the prediction. One such construction is employed for strain field reconstruction, to which end we successfully introduce the Gaussian process framework. A particularly well-suited spectral based approximation method is used to obtain a significant reduction of the computational load. The formulation has seen several subsequent extensions, represented in this thesis by a generalisation that includes boundary information and uses variational inference to overcome the challenge posed by a nonlinear measurement model. We also consider X-ray computed tomography, a field of high importance primarily due to its central role in medical treatments. We use the Gaussian process to provide an alternative interpretation of traditional algorithms and demonstrate promising experimental results. Moreover, we turn our focus to deep kernel learning, a special construction in which the expressiveness of a standard covariance function is increased through a neural network input transformation. We develop a method that makes this approach computationally feasible for integral measurements, and the results indicate a high potential for computed tomography problems.
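A reduced-rank spectral approximation of the kind mentioned here can be sketched with the Laplacian-eigenfunction expansion of Solin and Särkkä, which approximates a stationary kernel by a finite basis weighted by its spectral density. The domain size, lengthscale and number of basis functions below are illustrative assumptions, not the thesis's settings:

```python
import numpy as np

def hilbert_gp_features(x, m, L):
    """Laplacian eigenfunctions on [-L, L]: phi_j(x) = sin(pi j (x+L)/(2L)) / sqrt(L)."""
    j = np.arange(1, m + 1)
    return np.sin(np.pi * j * (x[:, None] + L) / (2 * L)) / np.sqrt(L)

def rbf_spectral_density(w, ell=0.5):
    """Spectral density of the unit-variance squared-exponential kernel in 1-D."""
    return np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * w) ** 2)

def reduced_rank_kernel(x1, x2, m=40, L=5.0, ell=0.5):
    """Approximate k(x, x') by sum_j S(sqrt(lambda_j)) phi_j(x) phi_j(x')."""
    w_j = np.pi * np.arange(1, m + 1) / (2 * L)     # square roots of the eigenvalues
    S = rbf_spectral_density(w_j, ell)
    return (hilbert_gp_features(x1, m, L) * S) @ hilbert_gp_features(x2, m, L).T

# Away from the domain boundary, 40 basis functions already reproduce the RBF kernel closely
k_hat = reduced_rank_kernel(np.array([0.0]), np.array([0.3]))[0, 0]
print(k_hat)
```

The payoff is computational: with m basis functions the GP solve costs O(nm² + m³) instead of O(n³), and linear constraints can be baked into the basis, which is the route the thesis takes.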
APA, Harvard, Vancouver, ISO, and other styles
25

Hartmann, Marcelo. "Métodos de Monte Carlo Hamiltoniano na inferência Bayesiana não-paramétrica de valores extremos." Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/4601.

Full text
Abstract:
In this work we propose a Bayesian nonparametric approach for modeling extreme value data. We treat the location parameter μ of the generalized extreme value distribution as a random function following a Gaussian process model (Rasmussen & Williams 2006). This configuration leads to no closed-form expression for the high-dimensional posterior distribution. To tackle this problem we use the Riemannian manifold Hamiltonian Monte Carlo algorithm, which allows sampling from posterior distributions with complex forms and unusual correlation structures (Calderhead & Girolami 2011). Moreover, we propose an autoregressive time series model of order p, assuming the generalized extreme value distribution for the noise, and obtain its Fisher information matrix. Throughout this work we employ computational simulation studies to assess the performance of the algorithm in its variants, and show many examples with simulated and real data sets.
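The target that Hamiltonian Monte Carlo has to explore in this setting can be written down directly: a GEV likelihood whose location parameter varies over time, plus a GP prior on that location. A hedged sketch (illustrative shape ξ, scale σ, lengthscale and data; note scipy's sign convention c = −ξ for the GEV shape):

```python
import numpy as np
from scipy.stats import genextreme, multivariate_normal

def log_posterior(mu, y, t, xi=0.1, sigma=1.0, ell=1.0):
    """Unnormalized log posterior of a time-varying GEV location mu(t):
    GEV likelihood plus a zero-mean GP (RBF covariance) prior on mu."""
    # scipy's genextreme uses shape c = -xi relative to the usual GEV convention
    loglik = genextreme.logpdf(y, c=-xi, loc=mu, scale=sigma).sum()
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / ell) ** 2) + 1e-8 * np.eye(len(t))
    return loglik + multivariate_normal.logpdf(mu, mean=np.zeros(len(t)), cov=K)

t = np.linspace(0.0, 4.0, 5)
y = np.array([0.1, 0.5, -0.2, 1.0, 0.3])
lp = log_posterior(np.zeros(5), y, t)      # value an HMC sampler would evaluate per step
print(lp)
```

HMC (and its Riemannian-manifold variant) additionally needs the gradient of this function with respect to μ; the GP prior term is what couples the components of μ and produces the unusual correlation structure mentioned in the abstract.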
APA, Harvard, Vancouver, ISO, and other styles
26

Razaaly, Nassim. "Rare Event Estimation and Robust Optimization Methods with Application to ORC Turbine Cascade." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX027.

Full text
Abstract:
This thesis aims to formulate innovative Uncertainty Quantification (UQ) methods for both Robust Optimization (RO) and Reliability-Based Design Optimization (RBDO) problems. The targeted application is the optimization of supersonic turbines used in Organic Rankine Cycle (ORC) power systems. Typical energy sources for ORC power systems feature variable heat load and turbine inlet/outlet thermodynamic conditions. The use of organic compounds with a heavy molecular weight typically leads to turbine configurations featuring supersonic flows and shocks, which grow in relevance in the aforementioned off-design conditions; these features also depend strongly on the local blade shape, which can be influenced by the geometric tolerances of blade manufacturing. A consensus exists about the necessity to include these uncertainties in the design process, requiring fast UQ methods and a comprehensive tool for performing shape optimization efficiently. This work is decomposed in two main parts. The first addresses the problem of rare event estimation, proposing two original methods for failure probability (metaAL-OIS and eAK-MCS) and one for quantile computation (QeAK-MCS). The three methods rely on surrogate-based (Kriging) adaptive strategies, aiming at refining the so-called Limit-State Surface (LSS) directly, unlike methods derived from Subset Simulation (SS). Indeed, the latter consider intermediate thresholds, each associated with an intermediate LSS to be refined. This direct-refinement property is of crucial importance since it makes the developed methods adaptable to RBDO algorithms. Note that the proposed algorithms are not subject to restrictive assumptions on the LSS (unlike the well-known FORM/SORM), such as the number of failure modes, but need to be formulated in the standard space. The eAK-MCS and QeAK-MCS methods are derived from the AK-MCS method and inherit a parallel adaptive sampling scheme based on weighted K-Means.
MetaAL-OIS features a more elaborate sequential refinement strategy based on MCMC samples drawn from a quasi-optimal importance sampling density (ISD). It additionally proposes the construction of a Gaussian mixture ISD, permitting the accurate estimation of small failure probabilities when a large number of evaluations (several million) is tractable, as an alternative to SS. The three methods are shown to perform very well on 2D to 8D analytical examples popular in the structural reliability literature, some featuring several failure modes, and all subject to a very small failure probability/quantile level. Accurate estimates are obtained in the cases considered using a reasonable number of calls to the performance function. The second part of this work tackles original Robust Optimization (RO) methods applied to the shape design of a supersonic ORC turbine cascade. A comprehensive Uncertainty Quantification (UQ) analysis accounting for operational, fluid-parameter and geometric (aleatoric) uncertainties is illustrated, providing a general overview of the impact of multiple effects and constituting a preliminary study necessary for RO. Then, several mono-objective RO formulations under a probabilistic constraint are considered, including the minimization of the mean or of a high quantile of the objective function. A critical assessment of the (robust) optimal designs is finally carried out.
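For scale, the crude Monte Carlo estimator that surrogate-based methods such as AK-MCS aim to outperform looks like the sketch below; the series-system limit state and threshold are illustrative, not taken from the thesis. Note how many performance-function calls are spent to resolve a failure probability around 1e-4:

```python
import numpy as np

def failure_probability_mc(g, dim, n, rng):
    """Crude Monte Carlo estimate of p_f = P[g(U) <= 0], U standard normal
    (i.e. formulated in the standard space). Returns the estimate and its
    coefficient of variation."""
    fails = g(rng.standard_normal((n, dim))) <= 0.0
    p = fails.mean()
    cov = np.sqrt((1.0 - p) / (p * n)) if p > 0 else np.inf
    return p, cov

def g_series(u):
    """Series system: failure if either component exceeds the threshold b."""
    b = 3.5
    return np.minimum(b - u[:, 0], b - u[:, 1])

rng = np.random.default_rng(42)
p, cov = failure_probability_mc(g_series, dim=2, n=2_000_000, rng=rng)
print(p, cov)   # p near 2 * Phi(-3.5) ~ 4.7e-4; millions of g-calls for a few percent CoV
```

Surrogate-based methods replace most of these g-evaluations with calls to a cheap Kriging model and spend the real evaluations only where they refine the limit-state surface, which is exactly the adaptive strategy the three proposed methods implement.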
APA, Harvard, Vancouver, ISO, and other styles
27

Fels, Maximilian [Verfasser]. "Extremes of the discrete Gaussian free field in dimension two / Christian Joachim Maximilian Fels." Bonn : Universitäts- und Landesbibliothek Bonn, 2021. http://d-nb.info/1235524787/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Al, Hassan Ahmad. "Estimation des lois extremes multivariees." Paris 6, 1988. http://www.theses.fr/1988PA066014.

Full text
Abstract:
Let (x(1), y(1)), ..., (x(n), y(n)) be an i.i.d. sample of an extreme random vector (X, Y) following one of the models: logistic, Gumbel, mixed, or natural. By reducing the information through new procedures, we present original results on the problem of estimating the dependence parameters of (X, Y) and construct tests based on these estimators. Finally, we establish some results on the problem of nonparametric estimation of a dependence function.
APA, Harvard, Vancouver, ISO, and other styles
29

Veillette, Mark S. "Study of Gaussian processes, Lévy processes and infinitely divisible distributions." Thesis, Boston University, 2011. https://hdl.handle.net/2144/38109.

Full text
Abstract:
Thesis (Ph.D.)--Boston University
PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you.
In this thesis, we study distribution functions and distribution-related quantities for various stochastic processes and probability distributions, including Gaussian processes, inverse Lévy subordinators, Poisson stochastic integrals, non-negative infinitely divisible distributions and the Rosenblatt distribution. We obtain analytical results for each case, and in instances where no closed form exists for the distribution, we provide numerical solutions. We mainly use two methods to analyze such distributions. In some cases, we characterize distribution functions by viewing them as solutions to differential equations. These are used to obtain moments and distribution functions of the underlying random variables. In other cases, we obtain results using inversion of Laplace or Fourier transforms. These methods include the Post-Widder inversion formula for Laplace transforms, and Edgeworth approximations. In Chapter 1, we consider differential equations related to Gaussian processes. It is well known that the heat equation together with appropriate initial conditions characterizes the marginal distributions of Brownian motion. We generalize this connection to finite-dimensional distributions of arbitrary Gaussian processes. In Chapter 2, we study the inverses of Lévy subordinators. These processes are non-Markovian and their finite-dimensional distributions are not known in closed form. We derive a differential equation related to these processes and use it to find an expression for joint moments. We compute these joint moments numerically in Chapter 3 and include several examples. Chapter 4 considers Poisson stochastic integrals. We show that the distribution function of these random variables satisfies a Kolmogorov-Feller equation, and we describe the regularity of solutions and numerically solve this equation.
Chapter 5 presents a technique for computing the density function or distribution function of any non-negative infinitely divisible distribution based on the Post-Widder method. In Chapter 6, we consider a distribution given by an infinite sum of weighted gamma distributions. We derive the Lévy-Khintchine representation and show when the tail of this sum is asymptotically normal. We derive a Berry-Esseen bound and Edgeworth expansions for its distribution function. Finally, in Chapter 7 we look at the Rosenblatt distribution, which can be expressed as an infinite sum of weighted chi-squared distributions. We apply the expansions of Chapter 6 to compute its distribution function.
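The Post-Widder formula mentioned for Chapter 5, f(t) = lim_{n→∞} ((−1)^n / n!) (n/t)^{n+1} F^{(n)}(n/t), can be tried symbolically on a transform with a known inverse. The sketch below (using sympy, with F(s) = 1/(s+1) whose inverse is e^{−t}) also shows the slow convergence that makes acceleration schemes attractive in practice:

```python
import sympy as sp

def post_widder(F, s, t, n):
    """n-th Post-Widder approximant of the inverse Laplace transform of F at time t:
    f_n(t) = (-1)^n / n! * (n/t)^(n+1) * F^{(n)}(n/t), with f_n(t) -> f(t) as n grows."""
    dn = sp.diff(F, s, n)                                   # n-th derivative of the transform
    return float((-1) ** n / sp.factorial(n)
                 * (sp.S(n) / t) ** (n + 1) * dn.subs(s, sp.S(n) / t))

s = sp.symbols('s', positive=True)
F = 1 / (s + 1)                          # Laplace transform of exp(-t)
approx_20 = post_widder(F, s, sp.S(1), 20)
print(approx_20)                         # exactly (20/21)**21 ~ 0.359, vs exp(-1) ~ 0.368
```

For this transform the approximant simplifies to (n/(n+t))^{n+1}, so the error at n = 20 is still about 0.009; this O(1/n) convergence is the usual motivation for pairing Post-Widder with extrapolation.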
APA, Harvard, Vancouver, ISO, and other styles
30

Hitz, Adrien. "Modelling of extremes." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:ad32f298-b140-4aae-b50e-931259714085.

Full text
Abstract:
This work focuses on statistical methods to understand how frequently rare events occur and what the magnitude of extreme values such as large losses is. It lies in a field called extreme value analysis whose scope is to provide support for scientific decision making when extreme observations are of particular importance such as in environmental applications, insurance and finance. In the univariate case, I propose new techniques to model tails of discrete distributions and illustrate them in an application on word frequency and multiple birth data. Suitably rescaled, the limiting tails of some discrete distributions are shown to converge to a discrete generalized Pareto distribution and a generalized Zipf distribution, respectively. In the multivariate high-dimensional case, I suggest modeling tail dependence between random variables by a graph such that its nodes correspond to the variables and shocks propagate through the edges. Relying on the ideas of graphical models, I prove that if the variables satisfy a new notion called asymptotic conditional independence, then the density of the joint distribution can be simplified and expressed in terms of lower dimensional functions. This generalizes the Hammersley-Clifford theorem and enables us to infer tail distributions from observations in reduced dimension. As an illustration, extreme river flows are modeled by a tree graphical model whose structure appears to recover almost exactly the actual river network. A fundamental concept when studying limiting tail distributions is regular variation. I propose a new notion in the multivariate case called one-component regular variation, and generalize two important results from the univariate case, Karamata's theorem and the representation theorem. Eventually, I turn my attention to website visit data and fit a censored copula Gaussian graphical model allowing the visualization of users' behavior by a graph.
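The continuous analogue of the tail-modelling idea here is easy to demonstrate with scipy: exceedances of a Pareto(α) sample over a high threshold follow a generalized Pareto distribution with shape ξ = 1/α exactly. The discrete generalized Pareto limit studied in the thesis is the counting-data counterpart; the sample and threshold below are illustrative:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
x = rng.pareto(3.0, size=100_000) + 1.0   # classical Pareto(alpha=3) sample on [1, inf)
u = np.quantile(x, 0.95)                  # high threshold: 95th percentile
exceed = x[x > u] - u                     # peaks over threshold
xi, _, beta = genpareto.fit(exceed, floc=0.0)
print(xi, beta)   # shape estimate near 1/alpha = 1/3 for a Pareto tail
```

The fitted shape ξ then drives extrapolation beyond the observed data, e.g. return levels for rarer events than any in the sample, which is the decision-support use case the abstract describes.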
APA, Harvard, Vancouver, ISO, and other styles
31

Kunz, Andreas. "Extremes of multidimensional stationary diffusion processes and applications in finance." [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=965676706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Casson, Edward Anthony. "Stochastic methodology for the extremes and directionality of meteorological processes." Thesis, Lancaster University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Aldgate, Hannah Jane. "Credit application scoring with Gaussian spatial processes." Thesis, Imperial College London, 2006. http://hdl.handle.net/10044/1/1256.

Full text
Abstract:
Credit scoring has been described as the most successful application of statistical and operational research methods to financial problems in recent decades. In this thesis, methods analogous to those used in spatial modelling and prediction are applied to the problem of application scoring, a branch of credit scoring that involves deciding whether or not to offer credit and how much credit to offer. In particular, Gaussian spatial process (GSP) models, commonly employed in disease mapping, geostatistics and design, are explored in an approach that is novel in the credit scoring field. Credit scoring methods are well established and usually involve computations of scores. By contrast, the focus of this work is to use best linear unbiased predictors in order to predict the probabilities of repayment for credit applications. A spatial structure for the model is provided by reformulating the data. Both theoretical and industry standard methods are used in order to assess the predictive competence of GSP models. In addition, the GSP model approach is compared with standard methods for application scoring and conclusions are made regarding the relevance of such models in this area.
APA, Harvard, Vancouver, ISO, and other styles
34

Gibbs, M. N. "Bayesian Gaussian processes for regression and classification." Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.599379.

Full text
Abstract:
Bayesian inference offers us a powerful tool with which to tackle the problem of data modelling. However, the performance of Bayesian methods is crucially dependent on being able to find good models for our data. The principal focus of this thesis is the development of models based on Gaussian process priors. Such models, which can be thought of as the infinite extension of several existing finite models, have the flexibility to model complex phenomena while being mathematically simple. In this thesis, I present a review of the theory of Gaussian processes and their covariance functions and demonstrate how they fit into the Bayesian framework. The efficient implementation of a Gaussian process is discussed with particular reference to approximate methods for matrix inversion based on the work of Skilling (1993). Several regression problems are examined. Non-stationary covariance functions are developed for the regression of neuron spike data and the use of Gaussian processes to model the potential energy surfaces of weakly bound molecules is discussed. Classification methods based on Gaussian processes are implemented using variational methods. Existing bounds (Jaakkola and Jordan 1996) for the sigmoid function are used to tackle binary problems and multi-dimensional bounds on the softmax function are presented for the multiple class case. The performance of the variational classifier is compared with that of other methods using the CRABS and PIMA datasets (Ripley 1996) and the problem of predicting the cracking of welds based on their chemical composition is also investigated. The theoretical calculation of the density of states of crystal structures is discussed in detail. Three possible approaches to the problem are described based on free energy minimization, Gaussian processes and the theory of random matrices. Results from these approaches are compared with the state-of-the-art techniques (Pickard 1997).
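The exact GP regression equations whose efficient implementation this thesis discusses fit in a few lines; the Cholesky factorization is the cubic-cost step that approximate matrix-inversion methods target. A minimal sketch with an RBF covariance function and illustrative hyperparameters:

```python
import numpy as np

def gp_predict(x_tr, y_tr, x_te, ell=1.0, noise=0.1):
    """Exact GP regression: posterior predictive mean and latent variance.
    The Cholesky factorization below is the O(n^3) bottleneck."""
    def k(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)
    L = np.linalg.cholesky(k(x_tr, x_tr) + noise**2 * np.eye(len(x_tr)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    Ks = k(x_te, x_tr)
    v = np.linalg.solve(L, Ks.T)
    mean = Ks @ alpha
    var = 1.0 - np.sum(v**2, axis=0)      # prior variance k(x*,x*) = 1 minus the reduction
    return mean, var

x = np.linspace(0.0, 6.0, 25)
y = np.sin(x)
mean, var = gp_predict(x, y, np.array([1.5, 3.0]))
print(mean, var)   # means close to sin(1.5), sin(3.0); small variance inside the data
```

Classification, as treated in the thesis, replaces the Gaussian likelihood with a sigmoid or softmax, so the posterior is no longer available in closed form; that is what the variational bounds are for.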
APA, Harvard, Vancouver, ISO, and other styles
35

McHutchon, Andrew James. "Nonlinear modelling and control using Gaussian processes." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Frigola-Alcalde, Roger. "Bayesian time series learning with Gaussian processes." Thesis, University of Cambridge, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709520.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Boedihardjo, Horatio S. "Signatures of Gaussian processes and SLE curves." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:5f835640-d3f5-4b03-b600-10d897644ced.

Full text
Abstract:
This thesis contains three main results. The first result states that, outside a slim set associated with a Gaussian process with long time memory, paths can be canonically enhanced to geometric rough paths. This allows us to apply the powerful Universal Limit Theorem in rough path theory to study the quasi-sure properties of the solutions of stochastic differential equations driven by Gaussian processes. The key idea is to use a norm, invented by B. Hambly and T. Lyons, which dominates the p-variation distance, and the fact that the roughness of a Gaussian sample path is evenly distributed over time. The second result is the almost-sure uniqueness of the signatures of SLE(κ) curves for κ ≤ 4. We prove this by first expressing the Fourier transform of the winding angle of the SLE curve in terms of its signature. This formula also gives us a relation between the expected signature and the n-point functions studied in the SLE and statistical physics literature. It is important that the chordal SLE measure in D is supported on simple curves from -1 to 1 for κ between 0 and 4, and hence the image of the curve determines the curve up to reparametrisation. The third result is a formula for the expected signature of Gaussian processes generated by strictly regular kernels. The idea is to approximate the expected signature of this class of processes by the expected signature of their piecewise linear approximations. This reduces the problem to computing the moments of Gaussian random variables, which can be done using Wick's formula.
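The signature levels involved here are computable for piecewise-linear paths via Chen's identity. A sketch below (the circle discretization is an illustrative example, not the thesis's construction) checks that the Lévy area, the antisymmetric part of level two and a quantity closely tied to the winding angle used in the proof, recovers the area enclosed by a unit circle:

```python
import numpy as np

def signature_level12(path):
    """First two signature levels of a piecewise-linear path (T x d array),
    accumulated segment by segment with Chen's identity."""
    d = path.shape[1]
    s1 = np.zeros(d)
    s2 = np.zeros((d, d))
    for k in range(len(path) - 1):
        inc = path[k + 1] - path[k]
        # Chen: S2(X * linear) = S2(X) + S1(X) (x) inc + inc (x) inc / 2
        s2 += np.outer(s1, inc) + 0.5 * np.outer(inc, inc)
        s1 += inc
    return s1, s2

theta = np.linspace(0.0, 2 * np.pi, 200)
circle = np.column_stack([np.cos(theta), np.sin(theta)])  # counterclockwise unit circle
s1, s2 = signature_level12(circle)
levy_area = 0.5 * (s2[0, 1] - s2[1, 0])
print(s1, levy_area)   # closed loop: s1 ~ 0, yet levy_area ~ pi distinguishes it from a point
```

This also illustrates why uniqueness of the signature matters: level one vanishes for any closed loop, but the higher levels (here the Lévy area) still encode the geometry of the curve.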
APA, Harvard, Vancouver, ISO, and other styles
38

Eleftheriadis, Stefanos. "Gaussian processes for modeling of facial expressions." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/44106.

Full text
Abstract:
Automated analysis of facial expressions has been gaining significant attention over the past years. This stems from the fact that it constitutes the primal step toward developing some of the next-generation computer technologies that can make an impact in many domains, ranging from medical imaging and health assessment to marketing and education. No matter the target application, the need to deploy systems under demanding, real-world conditions that can generalize well across the population is urgent. Hence, careful consideration of numerous factors has to be taken prior to designing such a system. The work presented in this thesis focuses on tackling two important problems in automated analysis of facial expressions: (i) view-invariant facial expression analysis; (ii) modeling of the structural patterns in the face, in terms of well coordinated facial muscle movements. Driven by the necessity for efficient and accurate inference mechanisms we explore machine learning techniques based on the probabilistic framework of Gaussian processes (GPs). Our ultimate goal is to design powerful models that can efficiently handle imagery with spontaneously displayed facial expressions, and explain in detail the complex configurations behind the human face in real-world situations. To effectively decouple the head pose and expression in the presence of large out-of-plane head rotations we introduce a manifold learning approach based on multi-view learning strategies. Contrary to the majority of existing methods that typically treat the numerous poses as individual problems, in this model we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Hence, the pose normalization problem is solved by aligning the facial expressions from different poses in a common latent space. 
We demonstrate that the recovered manifold can efficiently generalize to various poses and expressions even from a small amount of training data, while also being largely robust to corrupted image features due to illumination variations. State-of-the-art performance is achieved in the task of facial expression classification of basic emotions. The methods that we propose for learning the structure in the configuration of the muscle movements represent some of the first attempts in the field of analysis and intensity estimation of facial expressions. In these models, we extend our multi-view approach to exploit relationships not only in the input features but also in the multi-output labels. The structure of the outputs is imposed into the recovered manifold either from heuristically defined hard constraints, or in an auto-encoded manner, where the structure is learned automatically from the input data. The resulting models are proven to be robust to data with imbalanced expression categories, due to our proposed Bayesian learning of the target manifold. We also propose a novel regression approach based on product of GP experts where we take into account people's individual expressiveness in order to adapt the learned models on each subject. We demonstrate the superior performance of our proposed models on the task of facial expression recognition and intensity estimation.
APA, Harvard, Vancouver, ISO, and other styles
39

Sun, Furong. "Some Advances in Local Approximate Gaussian Processes." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/97245.

Full text
Abstract:
Nowadays, the Gaussian process (GP) is recognized as an indispensable statistical tool in computer experiments. Due to its computational complexity and storage demands, its application to real-world problems, especially in "big data" settings, is quite limited. Among the many strategies for tailoring GPs to such settings, Gramacy and Apley (2015) proposed the local approximate GP (laGP), which builds approximate predictive equations from small local designs selected around each predictive location under a chosen criterion. In this dissertation, several methodological extensions of laGP are proposed. One contribution is multilevel global/local modeling, which deploys global hyperparameter estimates to perform local prediction. The second contribution extends the laGP notion of "locale" from a single predictive location to a set of locations along paths in the input space. These two contributions are applied to satellite drag emulation, illustrated in Chapter 3. Furthermore, the multilevel GP modeling strategy is also applied, combined with inverse-variance weighting, to synthesize field data and computer model outputs of solar irradiance across the continental United States, detailed in Chapter 4. Last but not least, in Chapter 5, laGP's performance is tested on emulating daytime land surface temperatures estimated via satellites, in settings with irregular grid locations.
Doctor of Philosophy
In many real-life settings, we want to understand a physical relationship or phenomenon. Due to limited resources and/or ethical reasons, it is impossible to perform physical experiments to collect data, and therefore we have to rely upon computer experiments, whose evaluation usually requires expensive simulation involving complex mathematical equations. To reduce computational effort, we look for a relatively cheap alternative, called an emulator, to serve as a surrogate model. The Gaussian process (GP) is such an emulator, and has been very popular due to its excellent out-of-sample predictive performance and appropriate uncertainty quantification. However, due to computational complexity, full GP modeling is not suitable for "big data" settings. Gramacy and Apley (2015) proposed the local approximate GP (laGP), the core idea of which is to use a subset of the data for inference and prediction at unobserved inputs. This dissertation provides several extensions of laGP, applied to real-life "big data" settings. The first application, detailed in Chapter 3, is emulating satellite drag from large simulation experiments. We capture global input information comprehensively using a small subset of the data, and then perform local prediction. This method, called "multilevel GP modeling", is also deployed to synthesize field measurements and computational outputs of solar irradiance across the continental United States, illustrated in Chapter 4, and to emulate daytime land surface temperatures estimated by satellites, discussed in Chapter 5.
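The core laGP idea described in this abstract, predicting from a small sub-design built around each test location, can be sketched in a few lines. The following is a simplified nearest-neighbour sketch, not Gramacy and Apley's actual greedy design criterion; all parameter values are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def local_gp_predict(X, y, x_star, n_local=25, noise=1e-2):
    """Predict at x_star from only its n_local nearest neighbours,
    the simplest possible 'local design' around the predictive location."""
    idx = np.argsort(np.linalg.norm(X - x_star, axis=1))[:n_local]
    Xl, yl = X[idx], y[idx]
    K = rbf_kernel(Xl, Xl) + noise * np.eye(n_local)
    k_star = rbf_kernel(Xl, x_star[None, :])
    mean = k_star.T @ np.linalg.solve(K, yl)
    var = (rbf_kernel(x_star[None, :], x_star[None, :])
           - k_star.T @ np.linalg.solve(K, k_star))
    return mean.item(), var.item()

# Toy usage: 200 noisy observations of sin(x); predict near x = 0.5
# using only the local sub-design, never the full 200-point matrix.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.01 * rng.standard_normal(200)
mu, var = local_gp_predict(X, y, np.array([0.5]))
```

The payoff is that each prediction costs O(n_local³) rather than O(n³), which is what makes the approach viable in the "big data" settings the dissertation targets.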
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Ya Li. "Interactions between gaussian processes and bayesian estimation." Doctoral thesis, Université Laval, 2014. http://hdl.handle.net/20.500.11794/25377.

Full text
Abstract:
Model learning and state estimation are crucial for interpreting the underlying phenomena in many real-world applications. However, it is often challenging to learn the system model and capture the latent states accurately and efficiently, because knowledge of the world is highly uncertain. Over the past years, Bayesian modeling and estimation approaches have been investigated extensively so that this uncertainty can be reduced elegantly in a flexible probabilistic manner. In practice, however, several drawbacks in both Bayesian modeling and estimation deteriorate the power of Bayesian interpretation. On one hand, estimation performance is often limited when the system model lacks flexibility and/or is partially unknown. On the other hand, modeling performance is often restricted when a Bayesian estimator is inefficient and/or inaccurate. Inspired by these facts, we propose Interactions Between Gaussian Processes and Bayesian Estimation, where we investigate novel connections between a Bayesian model (Gaussian processes) and Bayesian estimators (the Kalman filter and Monte Carlo methods) in different directions, to address a number of potential difficulties in modeling and estimation tasks. Concretely, we first turn our attention to Gaussian Processes for Bayesian Estimation, where a Gaussian process (GP) is used as an expressive nonparametric prior over system models to improve the accuracy and efficiency of Bayesian estimation. Then, we work on Bayesian Estimation for Gaussian Processes, where a number of Bayesian estimation approaches, especially the Kalman filter and particle filters, are used to speed up GP inference and to capture distinct input-dependent data properties.
Finally, we investigate Dynamical Interaction Between Gaussian Processes and Bayesian Estimation, where GP modeling and Bayesian estimation work in a dynamically interactive manner, so that the GP learner and the Bayesian estimator complement each other to improve the performance of both modeling and estimation. Through rigorous mathematical analysis and experimental demonstrations, we show the effectiveness of our approaches, which contribute to both GPs and Bayesian estimation.
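For readers unfamiliar with the estimator side of this pairing, the linear-Gaussian Kalman filter the abstract refers to can be written as a short predict/update recursion. This is a textbook sketch, not the thesis's hybrid GP algorithms; matrices and values below are illustrative.

```python
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """Plain linear-Gaussian Kalman filter; returns the filtered state means."""
    x, P = x0.copy(), P0.copy()
    I = np.eye(len(x0))
    means = []
    for z in zs:
        # predict step: propagate mean and covariance through the dynamics
        x = F @ x
        P = F @ P @ F.T + Q
        # update step: correct with the new observation z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (I - K @ H) @ P
        means.append(x.copy())
    return np.array(means)

# Toy usage: estimate a constant level (1.0) observed through noise.
rng = np.random.default_rng(3)
zs = 1.0 + 0.1 * rng.standard_normal((200, 1))
means = kalman_filter(zs,
                      F=np.eye(1), H=np.eye(1),
                      Q=1e-6 * np.eye(1), R=0.01 * np.eye(1),
                      x0=np.zeros(1), P0=np.eye(1))
```

The thesis's contribution lies in replacing the fixed linear model (F, H) with a GP prior, and conversely using such recursions to accelerate GP inference.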
APA, Harvard, Vancouver, ISO, and other styles
41

Mannersalo, Petteri. "Gaussian and multifractal processes in teletraffic theory /." Espoo : Technical Research Centre of Finland, 2003. http://www.vtt.fi/inf/pdf/publications/2003/P491.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Mattos, César Lincoln Cavalcante. "Recurrent gaussian processes and robust dynamical modeling." reponame:Repositório Institucional da UFC, 2017. http://www.repositorio.ufc.br/handle/riufc/25604.

Full text
Abstract:
MATTOS, C. L. C. Recurrent gaussian processes and robust dynamical modeling. 2017. 189 f. Tese (Doutorado em Engenharia de Teleinformática)–Centro de Tecnologia, Universidade Federal do Ceará, Fortaleza, 2017.
The study of dynamical systems is widespread across several areas of knowledge. Sequential data are generated constantly by different phenomena, most of which cannot be explained by equations derived from known physical laws and structures. In this context, this thesis tackles the task of nonlinear system identification, which builds models directly from sequential measurements. More specifically, we approach challenging scenarios, such as learning temporal relations from noisy data, data containing discrepant values (outliers), and large datasets. At the interface between statistics, computer science, data analysis and engineering lies the machine learning community, which brings powerful tools for finding patterns in data and making predictions. In that sense, we follow methods based on Gaussian processes (GPs), a principled, practical, probabilistic approach to learning in kernel machines. We aim to exploit recent advances in general GP modeling to bring new contributions to the dynamical modeling exercise. Thus, we propose the novel family of Recurrent Gaussian Process (RGP) models and extend the concept to handle outlier-robust requirements and scalable stochastic learning. The hierarchical latent (non-observed) structure of these models imposes intractabilities in the form of non-analytical expressions, which we handle by deriving new variational algorithms that perform approximate deterministic inference as an optimization problem. The presented solutions enable uncertainty propagation in both training and testing, with a focus on free simulation. We comprehensively evaluate the proposed methods on both artificial and real system identification benchmarks, as well as other related dynamical settings. The obtained results indicate that the proposed approaches are competitive with the state of the art in the aforementioned complicated setups, and that GP-based dynamical modeling is a promising area of research.
APA, Harvard, Vancouver, ISO, and other styles
43

Zhao, Yong. "Ensemble Kalman filter method for Gaussian and non-Gaussian priors /." Access abstract and link to full text, 2008. http://0-wwwlib.umi.com.library.utulsa.edu/dissertations/fullcit/3305718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Gong, Yun. "Empirical likelihood and extremes." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43581.

Full text
Abstract:
In 1988, Owen introduced empirical likelihood as a nonparametric method for constructing confidence intervals and regions. Since then, empirical likelihood has been studied extensively in the literature due to its generality and effectiveness. It is well known that empirical likelihood has several attractive advantages compared to its competitors such as the bootstrap: it determines the shape of confidence regions automatically using only the data; it straightforwardly incorporates side information expressed through constraints; and it is Bartlett correctable. The main part of this thesis extends the empirical likelihood method to several interesting and important statistical inference situations. This thesis has four components. The first component (Chapter II) proposes a smoothed jackknife empirical likelihood method to construct confidence intervals for the receiver operating characteristic (ROC) curve, in order to overcome the computational difficulty posed by nonlinear constraints in the maximization problem. The second component (Chapters III and IV) proposes smoothed empirical likelihood methods to obtain interval estimates for the conditional Value-at-Risk, with the volatility model being an ARCH/GARCH model and a nonparametric regression respectively, which have applications in financial risk management. The third component (Chapter V) derives the empirical likelihood for intermediate quantiles, which play an important role in the statistics of extremes. Finally, the fourth component (Chapters VI and VII) presents two additional results: in Chapter VI, we show that, when the third moment is infinite, we may prefer the Student's t-statistic to the sample mean standardized by the true standard deviation; in Chapter VII, we present a method for testing a subset of parameters for a given parametric model of stationary processes.
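As a concrete instance of the empirical likelihood machinery this abstract builds on, here is a minimal sketch for the simplest case, Owen's log-likelihood ratio for a sample mean. This is the textbook construction, not the thesis's smoothed or jackknife variants; the bisection solver and sample below are illustrative.

```python
import numpy as np

def el_log_ratio(x, mu):
    """Empirical likelihood ratio statistic -2 log R(mu) for a candidate mean.
    Weights are w_i = 1 / (n * (1 + lam * (x_i - mu))), where the Lagrange
    multiplier lam solves sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0."""
    z = x - mu
    n = len(x)
    # lam must keep all weights positive: lam in (-1/max(z), -1/min(z));
    # the estimating equation is strictly decreasing there, so bisect.
    lo = -1.0 / z.max() + 1e-9
    hi = -1.0 / z.min() - 1e-9
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        g = np.sum(z / (1.0 + lam * z))
        if g > 0:
            lo = lam
        else:
            hi = lam
    w = 1.0 / (n * (1.0 + lam * z))
    return -2.0 * np.sum(np.log(n * w))

# Toy usage: the statistic vanishes at the sample mean and grows away from it;
# asymptotically it is chi-squared with one degree of freedom.
rng = np.random.default_rng(4)
x = rng.standard_normal(100)
s0 = el_log_ratio(x, x.mean())
s1 = el_log_ratio(x, x.mean() + 0.2)
```

Confidence intervals then follow by collecting all mu with statistic below a chi-squared quantile, exactly the "shape determined by the data" property mentioned above.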
APA, Harvard, Vancouver, ISO, and other styles
45

Padgett, Wayne Thomas. "Detection of low order nonstationary gaussian random processes." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/13523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Fasen, Vicky Maria. "Extremes of Lévy driven moving average processes with applications in finance." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=973922796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ghassemi, Nooshin Haji. "Analytic Long Term Forecasting with Periodic Gaussian Processes." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5458.

Full text
Abstract:
In many application domains, such as weather forecasting, robotics and machine learning, we need to model, predict and analyze the evolution of periodic systems. For instance, time series that follow periodic patterns appear in climatology, where CO2 emissions and temperature changes follow periodic or quasi-periodic patterns. Another example arises in robotics, where the joint angle of a rotating robotic arm follows a periodic pattern. It is often very important to make long-term predictions of the evolution of such systems. For modeling and prediction purposes, Gaussian processes are powerful methods that can be adjusted to the properties of the problem at hand. Gaussian processes belong to the class of probabilistic kernel methods, where the kernel encodes the characteristics of the problem into the model. For systems with periodic evolution, taking the periodicity into account can simplify the problem considerably. Gaussian process models account for periodicity by using a periodic kernel. Long-term prediction must deal with uncertain inputs, which are expressed by a distribution rather than a deterministic point. Unlike at deterministic points, prediction at uncertain points is analytically intractable for Gaussian processes. There are approximation methods, such as moment matching, that handle this uncertainty in analytic closed form, but only particular kernels admit analytic moment matching, and the standard periodic kernel is not one of them. This work presents an analytic approximation method for long-term forecasting in periodic systems. We present a different parametrization of the standard periodic kernel, which allows us to perform approximate moment matching in analytic closed form. We evaluate our approximate method on different periodic systems. The results indicate that the proposed method is valuable for long-term forecasting of periodic processes.
APA, Harvard, Vancouver, ISO, and other styles
48

Smith, Jason Marko. "Discrete properties of continuous, non-Gaussian random processes." Thesis, University of Nottingham, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.438330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Almosallam, Ibrahim. "Heteroscedastic Gaussian processes for uncertain and incomplete data." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:6a3b600d-5759-456a-b785-5f89cf4ede6d.

Full text
Abstract:
In probabilistic inference, many implicit and explicit assumptions are made about the nature of the input noise and of the function being fit, in order to simplify the mathematics, improve the time complexity, or optimise for space. It is often assumed that the inputs are noiseless or that the noise is drawn from the same distribution for all inputs, that all the variables used during training will be present during prediction with the same degree of uncertainty, and that the confidence in the prediction is uniform across the input space. This thesis presents a more generalised sparse Gaussian process model that relaxes these assumptions, accommodating inputs with variable degrees of uncertainty or completeness, and producing input-dependent uncertainty estimates over the output. The capabilities of sparse Gaussian processes are further enhanced to allow for non-stationarity, which minimises the number of required basis functions; a prior mean function, for better extrapolation performance; and cost-sensitive learning, for non-uniform weighting of samples. The results are demonstrated on an astrophysical problem: estimating galactic redshifts from photometry. This problem, by its nature, can capitalise on the features of the proposed model, as the noise on the photometry can vary across galaxies or catalogues, not all photometry might be available during prediction or shared amongst different surveys, and the input-dependent uncertainty estimation gives astrophysicists the ability to trade off completeness for accuracy across a range of questions in astronomy and cosmology.
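The simplest way to see input-dependent noise in a GP is to replace the usual constant noise term sigma^2 * I with a per-point diagonal. The following is a dense illustrative sketch of that one idea only; the thesis's model is sparse, non-stationary, and far more general, and all values here are made up.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance for 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def hetero_gp_predict(X, y, noise_var, x_star):
    """GP regression with a per-point noise variance on the diagonal,
    the heteroscedastic analogue of the usual sigma^2 * I term."""
    K = rbf(X, X) + np.diag(noise_var)
    k_s = rbf(X, x_star)
    mu = k_s.T @ np.linalg.solve(K, y)
    cov = rbf(x_star, x_star) - k_s.T @ np.linalg.solve(K, k_s)
    return mu, np.diag(cov)

# Noise grows on the right half of the input; the posterior uncertainty
# over the latent function follows it, instead of staying uniform.
rng = np.random.default_rng(2)
X = np.linspace(-3.0, 3.0, 80)
noise_var = np.where(X < 0, 0.01, 0.5)
y = np.sin(X) + np.sqrt(noise_var) * rng.standard_normal(80)
mu, var = hetero_gp_predict(X, y, noise_var, np.array([-1.5, 1.5]))
```

In the redshift application, noise_var would come from per-galaxy photometric error bars, which is exactly the information a homoscedastic model discards.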
APA, Harvard, Vancouver, ISO, and other styles
50

Fry, James Thomas. "Hierarchical Gaussian Processes for Spatially Dependent Model Selection." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84161.

Full text
Abstract:
In this dissertation, we develop a model selection and estimation methodology for nonstationary spatial fields. Large, spatially correlated data often cover a vast geographical area. However, local spatial regions may have different mean and covariance structures. Our methodology accomplishes three goals: (1) cluster locations into small regions with distinct, stationary models, (2) perform Bayesian model selection within each cluster, and (3) correlate the model selection and estimation in nearby clusters. We utilize the Conditional Autoregressive (CAR) model and Ising distribution to provide intra-cluster correlation on the linear effects and model inclusion indicators, while modeling inter-cluster correlation with separate Gaussian processes. We apply our model selection methodology to a dataset involving the prediction of Brook trout presence in subwatersheds across Pennsylvania. We find that our methodology outperforms the stationary spatial model and that different regions in Pennsylvania are governed by separate Gaussian process regression models.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles