Dissertations / Theses on the topic 'Gaussian process regression model'

To see the other types of publications on this topic, follow the link: Gaussian process regression model.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Gaussian process regression model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Srinivasan, Balaji Vasan. "Gaussian process regression for model estimation." College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/8962.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2008.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
2

Sofro, A'yunin. "Convolved Gaussian process regression models for multivariate non-Gaussian data." Thesis, University of Newcastle upon Tyne, 2017. http://hdl.handle.net/10443/3723.

Full text
Abstract:
Multivariate regression analysis has developed rapidly in the last decade for dependent data. The most difficult part in multivariate cases is how to construct a cross-correlation between response variables while ensuring that the covariance matrix is positive definite, which is not an easy task. Several approaches have been developed to overcome the issue. However, most of them have limitations, such as being hard to extend to cases involving high-dimensional variables or being unable to capture individual characteristics; for some methods the meaning of the cross-correlation structure is also unclear. To address these issues, we propose to use convolved Gaussian process (CGP) priors (Boyle & Frean, 2005). In this dissertation, we propose a novel approach for multivariate regression using CGP priors. The approach provides a semiparametric model with multi-dimensional covariates and offers a natural framework for modelling common mean structures and covariance structures simultaneously for multivariate dependent data. Information about observations is provided by the common mean structure, while individual characteristics can be captured by the covariance structure; at the same time, the covariance function is able to accommodate large-dimensional covariates as well. We start by broadening the problem from the general CGP framework proposed by Andriluka et al. (2006). We investigate some of the stationary covariance functions and mixed forms for constructing multiple dependent Gaussian processes to solve a more complex issue. Then, we extend the idea to a multivariate non-linear regression model by using convolved Gaussian processes as priors. We then focus on applying the idea to multivariate non-Gaussian data, i.e. multivariate Poisson, and other multivariate non-Gaussian distributions from the exponential family. We first focus on multivariate Poisson data, which arise in many problems relating to public health issues. Finally, we provide a general framework for multivariate binomial data and other multivariate non-Gaussian data. The definition of the model, the inference, and the implementation, as well as its asymptotic properties, are discussed. Comprehensive numerical examples with both simulation studies and real data are presented.
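The covariance construction behind CGP priors can be illustrated in the simplest one-dimensional, two-output case: each output is the convolution of a shared white-noise latent process with its own Gaussian smoothing kernel, which gives a closed-form cross-covariance. The sketch below (plain NumPy, with length-scales and amplitudes chosen arbitrarily) only illustrates this construction; it is not the model developed in the thesis.

```python
import numpy as np

def cgp_cov(x1, x2, l1, l2, v1, v2):
    """Cross-covariance between outputs 1 and 2 under a convolved GP:
    output d is v_d times a Gaussian smoothing kernel of width l_d applied
    to common white noise; convolving two Gaussians gives the closed form."""
    s2 = l1**2 + l2**2
    d = x1[:, None] - x2[None, :]
    return v1 * v2 * np.exp(-0.5 * d**2 / s2) / np.sqrt(2 * np.pi * s2)

# illustrative inputs and (hypothetical) kernel parameters
x = np.linspace(0.0, 5.0, 30)
l = [0.4, 1.0]      # smoothing-kernel widths for outputs 1 and 2
v = [1.0, 0.7]      # amplitudes

# assemble the joint covariance of (f1(x), f2(x))
K = np.block([[cgp_cov(x, x, l[0], l[0], v[0], v[0]),
               cgp_cov(x, x, l[0], l[1], v[0], v[1])],
              [cgp_cov(x, x, l[1], l[0], v[1], v[0]),
               cgp_cov(x, x, l[1], l[1], v[1], v[1])]])

# the construction guarantees positive semi-definiteness
print(np.linalg.eigvalsh(K).min() > -1e-9)

# draw correlated samples of both outputs from the joint prior
f = np.random.multivariate_normal(np.zeros(2 * x.size), K + 1e-9 * np.eye(2 * x.size))
print(f.shape)
```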
APA, Harvard, Vancouver, ISO, and other styles
3

Yi, Gang. "Variable Selection with Penalized Gaussian Process Regression Models." Thesis, University of Newcastle upon Tyne, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nguyen, Huong. "Near-optimal designs for Gaussian Process regression models." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1533983585774383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Erich, Roger Alan. "Regression Modeling of Time to Event Data Using the Ornstein-Uhlenbeck Process." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1342796812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tietze, Nils [Verfasser], Ulrich [Akademischer Betreuer] Konigorski, and Oliver [Akademischer Betreuer] Nelles. "Model-based Calibration of Engine Control Units Using Gaussian Process Regression / Nils Tietze. Betreuer: Ulrich Konigorski ; Oliver Nelles." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2015. http://d-nb.info/1111909903/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Barrett, James Edward. "Gaussian process regression models for the analysis of survival data with competing risks, interval censoring and high dimensionality." Thesis, King's College London (University of London), 2015. http://kclpure.kcl.ac.uk/portal/en/theses/gaussian-process-regression-models-for-the-analysis-of-survival-data-with-competing-risks-interval-censoring-and-high-dimensionality(fe3440e1-9766-4fc3-9d23-fe4af89483b5).html.

Full text
Abstract:
We develop novel statistical methods for analysing biomedical survival data based on Gaussian process (GP) regression. GP regression provides a powerful non-parametric probabilistic method of relating inputs to outputs. We apply this to survival data which consist of time-to-event and covariate measurements. In the context of GP regression the covariates are regarded as 'inputs' and the event times are the 'outputs'. This allows for highly flexible inference of non-linear relationships between covariates and event times. Many existing methods for analysing survival data, such as the ubiquitous Cox proportional hazards model, focus primarily on the hazard rate which is typically assumed to take some parametric or semi-parametric form. Our proposed model belongs to the class of accelerated failure time models and as such our focus is on directly characterising the relationship between the covariates and event times without any explicit assumptions on what form the hazard rates take. This provides a more direct route to connecting the covariates to survival outcomes with minimal assumptions. An application of our model to experimental data illustrates its usefulness. We then apply multiple output GP regression, which can handle multiple potentially correlated outputs for each input, to competing risks survival data where multiple event types can occur. In this case the multiple outputs correspond to the time-to-event for each risk. By tuning one of the model parameters we can control the extent to which the multiple outputs are dependent, thus allowing the specification of correlated risks. However, the identifiability problem, which states that it is not possible to infer whether risks are truly independent or otherwise on the basis of observed data, still holds. In spite of this fundamental limitation, simulation studies suggest that in some cases assuming dependence can lead to more accurate predictions. The second part of this thesis is concerned with high dimensional survival data where there are a large number of covariates compared to relatively few individuals. This leads to the problem of overfitting, where spurious relationships are inferred from the data. One strategy to tackle this problem is dimensionality reduction. The Gaussian process latent variable model (GPLVM) is a powerful method of extracting a low dimensional representation of high dimensional data. We extend the GPLVM to incorporate survival outcomes by combining the model with a Weibull proportional hazards model (WPHM). By reducing the ratio of covariates to samples we hope to diminish the effects of overfitting. The combined GPLVM-WPHM model can also be used to combine several datasets by simultaneously expressing them in terms of the same low dimensional latent variables. We construct the Laplace approximation of the marginal likelihood and use this to determine the optimal number of latent variables, thereby allowing detection of intrinsic low dimensional structure. Results from both simulated and real data show a reduction in overfitting and an increase in predictive accuracy after dimensionality reduction.
APA, Harvard, Vancouver, ISO, and other styles
8

Xu, Li. "Statistical Methods for Variability Management in High-Performance Computing." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104184.

Full text
Abstract:
High-performance computing (HPC) variability management is an important topic in computer science. Research topics include experimental designs for efficient data collection, surrogate models for predicting the performance variability, and system configuration optimization. Due to the complex architecture of HPC systems, a comprehensive study of HPC variability needs large-scale datasets, and experimental design techniques are useful for improved data collection. Surrogate models are essential to understand the variability as a function of system parameters, which can be obtained by mathematical and statistical models. After predicting the variability, optimization tools are needed for future system designs. This dissertation focuses on HPC input/output (I/O) variability through three main chapters. After the general introduction in Chapter 1, Chapter 2 focuses on the prediction models for the scalar description of I/O variability. A comprehensive comparison study is conducted, and major surrogate models for computer experiments are investigated. In addition, a tool is developed for system configuration optimization based on the chosen surrogate model. Chapter 3 conducts a detailed study for the multimodal phenomena in I/O throughput distribution and proposes an uncertainty estimation method for the optimal number of runs for future experiments. Mixture models are used to identify the number of modes for throughput distributions at different configurations. This chapter also addresses the uncertainty in parameter estimation and derives a formula for sample size calculation. The developed method is then applied to HPC variability data. Chapter 4 focuses on the prediction of functional outcomes with both qualitative and quantitative factors. Instead of a scalar description of I/O variability, the distribution of I/O throughput provides a comprehensive description of I/O variability. We develop a modified Gaussian process for functional prediction and apply the developed method to the large-scale HPC I/O variability data. Chapter 5 contains some general conclusions and areas for future work.
Doctor of Philosophy
This dissertation focuses on three projects that are all related to statistical methods in performance variability management in high-performance computing (HPC). HPC systems are computer systems that create high performance by aggregating a large number of computing units. The performance of HPC is measured by the throughput of a benchmark called the IOZone Filesystem Benchmark. The performance variability is the variation among throughputs when the system configuration is fixed. Variability management involves studying the relationship between performance variability and the system configuration. In Chapter 2, we use several existing prediction models to predict the standard deviation of throughputs given different system configurations and compare the accuracy of predictions. We also conduct HPC system optimization using the chosen prediction model as the objective function. In Chapter 3, we use the mixture model to determine the number of modes in the distribution of throughput under different system configurations. In addition, we develop a model to determine the number of additional runs for future benchmark experiments. In Chapter 4, we develop a statistical model that can predict the throughput distributions given the system configurations. We also compare the prediction of summary statistics of the throughput distributions with existing prediction models.
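The mode-counting idea in Chapter 3 can be sketched with an off-the-shelf Gaussian mixture model and BIC-based selection of the number of components. The snippet below is a generic illustration on synthetic "throughput" data, not the dissertation's code or its HPC dataset.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic throughput sample with two modes, standing in for one configuration
throughput = np.concatenate([rng.normal(800, 30, 400),
                             rng.normal(1100, 40, 200)]).reshape(-1, 1)

# fit mixtures with 1..5 components and pick the number of modes by BIC
bics = []
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(throughput)
    bics.append(gm.bic(throughput))

best_k = int(np.argmin(bics)) + 1
print("selected number of modes:", best_k)
```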
APA, Harvard, Vancouver, ISO, and other styles
9

Edwards, Adam Michael. "Precision Aggregated Local Models." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/102125.

Full text
Abstract:
Large scale Gaussian process (GP) regression is infeasible for larger data sets due to the cubic scaling of flops and quadratic storage involved in working with covariance matrices. Remedies in recent literature focus on divide-and-conquer, e.g., partitioning into sub-problems and inducing functional (and thus computational) independence. Such approximations can be speedy, accurate, and sometimes even more flexible than an ordinary GP. However, a big downside is loss of continuity at partition boundaries. Modern methods like local approximate GPs (LAGPs) imply effectively infinite partitioning and are thus pathologically good and bad in this regard. Model averaging, an alternative to divide-and-conquer, can maintain absolute continuity but often over-smooths, diminishing accuracy. Here I propose putting LAGP-like methods into a local experts-like framework, blending partition-based speed with model-averaging continuity, as a flagship example of what I call precision aggregated local models (PALM). Using N_C LAGPs, each selecting n from N data pairs, I illustrate a scheme that is at most cubic in n, quadratic in N_C, and linear in N, drastically reducing computational and storage demands. Extensive empirical illustration shows how PALM is at least as accurate as LAGP, can be much faster, and furnishes continuous predictive surfaces. Finally, I propose a sequential updating scheme which greedily refines a PALM predictor up to a computational budget, and several variations on the basic PALM that may provide predictive improvements.
Doctor of Philosophy
Occasionally, when describing the relationship between two variables, it may be helpful to use a so-called "non-parametric" regression that is agnostic to the function that connects them. Gaussian Processes (GPs) are a popular method of non-parametric regression used for their relative flexibility and interpretability, but they have the unfortunate drawback of being computationally infeasible for large data sets. Past work on solving the scaling issues for GPs has focused on "divide and conquer" style schemes that spread the data out across multiple smaller GP models. While these models make GP methods much more accessible to large data sets, they do so at the expense of either local predictive accuracy or global surface continuity. Precision Aggregated Local Models (PALM) is a novel divide-and-conquer method for GP models that is scalable for large data while maintaining local accuracy and a smooth global model. I demonstrate that PALM can be built quickly and performs well predictively compared to other state-of-the-art methods. This document also provides a sequential algorithm for selecting the location of each local model, and variations on the basic PALM methodology.
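The local-model idea that PALM builds on can be sketched as follows: for each prediction site, select the n nearest training points and fit a small GP on just that subset. This is a bare-bones nearest-neighbour local GP in the spirit of LAGP, not the PALM aggregation itself; the kernel choice and sizes below are arbitrary.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(2000, 2))                 # large training set
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.standard_normal(2000)

def local_gp_predict(Xtest, X, y, n=50):
    """Fit an independent small GP on the n nearest neighbours of each test point."""
    nn = NearestNeighbors(n_neighbors=n).fit(X)
    preds = np.empty(len(Xtest))
    for i, x in enumerate(Xtest):
        idx = nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
        gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-2),
                                      normalize_y=True).fit(X[idx], y[idx])
        preds[i] = gp.predict(x.reshape(1, -1))[0]
    return preds

Xtest = rng.uniform(-3, 3, size=(5, 2))
print(local_gp_predict(Xtest, X, y))
```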
APA, Harvard, Vancouver, ISO, and other styles
10

Chu, Shuyu. "Change Detection and Analysis of Data with Heterogeneous Structures." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78613.

Full text
Abstract:
Heterogeneous data with different characteristics are ubiquitous in the modern digital world. For example, the observations collected from a process may change in mean or variance. In numerous applications, data are often of mixed types including both discrete and continuous variables. Heterogeneity also commonly arises in data when underlying models vary across different segments. In addition, the underlying pattern of data may change in different dimensions, such as in time and space. The diversity of heterogeneous data structures makes statistical modeling and analysis challenging. Detection of change-points in heterogeneous data has attracted great attention from a variety of application areas, such as quality control in manufacturing, protest event detection in social science, purchase likelihood prediction in business analytics, and organ state change in biomedical engineering. However, due to the extraordinary diversity of the heterogeneous data structures and complexity of the underlying dynamic patterns, the change-detection and analysis of such data is quite challenging. This dissertation aims to develop novel statistical modeling methodologies to analyze four types of heterogeneous data and to find change-points efficiently. The proposed approaches have been applied to solve real-world problems and can be potentially applied to a broad range of areas.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
11

De Lozzo, Matthias. "Modèles de substitution spatio-temporels et multifidélité : Application à l'ingénierie thermique." Thesis, Toulouse, INSA, 2013. http://www.theses.fr/2013ISAT0027/document.

Full text
Abstract:
This PhD thesis deals with the construction of surrogate models in transient and steady states in the context of thermal simulation, with few observations and many outputs. First, we design a robust construction of a recurrent multilayer perceptron so as to approximate a spatio-temporal dynamic. We use an average of neural networks resulting from a cross-validation procedure, whose associated data splitting makes it possible to adjust the parameters of each of these models on a test set without any information loss. Moreover, the construction of this recurrent perceptron can be distributed according to its outputs. This construction is applied to the modelling of the temporal evolution of the temperature at different points of an aeronautical equipment cabinet. Then, we propose a mixture of Gaussian process models in a multifidelity framework where we have a high-fidelity observation model complemented by several observation models of lower, non-comparable fidelities. Particular attention is paid to the specification of the trends and adjustment coefficients present in these models. The different kriging and co-kriging models are assembled according to a partition or a weighted aggregation based on a robustness measure associated with the most reliable design points. This approach is used to model the temperature at different points of the cabinet in steady state. Finally, we propose a penalized criterion for the problem of heteroscedastic regression. This tool is built in the framework of projection estimators and applied to the particular case of Haar wavelets. We complement these theoretical results with numerical results for a problem accounting for different noise specifications and possible dependencies in the observations.
APA, Harvard, Vancouver, ISO, and other styles
12

Le, Gratiet Loic. "Multi-fidelity Gaussian process regression for computer experiments." Phd thesis, Université Paris-Diderot - Paris VII, 2013. http://tel.archives-ouvertes.fr/tel-00866770.

Full text
Abstract:
This work is on Gaussian-process based approximation of a code which can be run at different levels of accuracy. The goal is to improve the predictions of a surrogate model of a complex computer code using fast approximations of it. A new formulation of a co-kriging based method has been proposed. In particular this formulation allows for fast implementation and for closed-form expressions for the predictive mean and variance for universal co-kriging in the multi-fidelity framework, which is a breakthrough as it really allows for the practical application of such a method in real cases. Furthermore, fast cross validation, sequential experimental design and sensitivity analysis methods have been extended to the multi-fidelity co-kriging framework. This thesis also deals with a conjecture about the dependence of the learning curve (i.e. the decay rate of the mean square error) on the smoothness of the underlying function. A proof in a fairly general situation (which includes the classical models of Gaussian-process based metamodels with stationary covariance functions) has been obtained, while the previous proofs hold only for degenerate kernels (i.e. when the process is in fact finite-dimensional). This result allows for addressing rigorously practical questions such as the optimal allocation of the budget between different levels of codes in the multi-fidelity framework.
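A much-simplified, two-level version of the co-kriging idea can be written with two ordinary GPs: one fitted to the cheap code, and one fitted to the discrepancy between the expensive code and a scaled prediction of the cheap one. The sketch below (scikit-learn, synthetic toy functions, plug-in scaling coefficient) only illustrates the AR(1)-style structure; the closed-form universal co-kriging developed in the thesis is considerably richer.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def cheap(x):      # low-fidelity code (many runs available)
    return np.sin(8 * x) * x
def expensive(x):  # high-fidelity code (few runs available)
    return 1.6 * cheap(x) + (x - 0.5)

X_lo = np.linspace(0, 1, 25).reshape(-1, 1)
X_hi = np.linspace(0, 1, 6).reshape(-1, 1)
y_lo, y_hi = cheap(X_lo).ravel(), expensive(X_hi).ravel()

kern = ConstantKernel(1.0) * RBF(0.2)
gp_lo = GaussianProcessRegressor(kernel=kern, alpha=1e-8).fit(X_lo, y_lo)

# estimate the scaling rho by least squares, then model the discrepancy
m_lo_at_hi = gp_lo.predict(X_hi)
rho = float(np.dot(m_lo_at_hi, y_hi) / np.dot(m_lo_at_hi, m_lo_at_hi))
gp_delta = GaussianProcessRegressor(kernel=kern, alpha=1e-8).fit(X_hi, y_hi - rho * m_lo_at_hi)

# multi-fidelity prediction: scaled cheap-code GP plus discrepancy GP
X_new = np.linspace(0, 1, 5).reshape(-1, 1)
pred = rho * gp_lo.predict(X_new) + gp_delta.predict(X_new)
print(np.c_[pred, expensive(X_new).ravel()])
```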
APA, Harvard, Vancouver, ISO, and other styles
13

Grande, Robert Conlin. "Computationally efficient Gaussian Process changepoint detection and regression." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90670.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 150-160).
Most existing GP regression algorithms assume a single generative model, leading to poor performance when data are nonstationary, i.e. generated from multiple switching processes. Existing methods for GP regression over non-stationary data include clustering and change-point detection algorithms. However, these methods require significant computation, do not come with provable guarantees on correctness and speed, and most algorithms only work in batch settings. This thesis presents an efficient online GP framework, GP-NBC, that leverages the generalized likelihood ratio test to detect changepoints and learn multiple Gaussian Process models from streaming data. Furthermore, GP-NBC can quickly recognize and reuse previously seen models. The algorithm is shown to be theoretically sample efficient in terms of limiting mistaken predictions. Our empirical results on two real-world datasets and one synthetic dataset show GP-NBC outperforms state of the art methods for nonstationary regression in terms of regression error and computational efficiency. The second part of the thesis introduces a Reinforcement Learning (RL) algorithm, UCRL-GP-CPD, for multi-task Reinforcement Learning when the reward function is nonstationary. First, a novel algorithm UCRL-GP is introduced for stationary reward functions. Then, UCRL-GP is combined with GP-NBC to create UCRL-GP-CPD, which is an algorithm for nonstationary reward functions. Unlike previous work in the literature, UCRL-GP-CPD does not make distributional assumptions about task generation, does not assume changepoint times are known, and does not assume that all tasks have been experienced a priori in a training phase. It is proven that UCRL-GP-CPD is sample efficient in the stationary case, will detect changepoints in the environment with high probability, and is theoretically guaranteed to prevent negative transfer. UCRL-GP-CPD is demonstrated empirically on a variety of simulated and real domains.
by Robert Conlin Grande.
S.M.
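The core test in a likelihood-ratio changepoint detector for GP regression can be sketched as follows: score recent observations under the current GP's predictive density and under a GP fitted only to those recent points, and flag a changepoint when the latter wins by a margin. This is a heavily simplified, batch-style illustration of the idea rather than the GP-NBC algorithm itself; the window size, threshold, and noise handling are all assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

NOISE_VAR = 0.01  # assumed observation noise variance, added explicitly below

def mean_log_pred(gp, X, y):
    """Average Gaussian log predictive density of (X, y) under a fitted GP."""
    mu, sd = gp.predict(X, return_std=True)
    return norm.logpdf(y, loc=mu, scale=np.sqrt(sd**2 + NOISE_VAR)).mean()

rng = np.random.default_rng(0)
x = np.linspace(0, 4, 200).reshape(-1, 1)
# generative model switches at x = 2
y = np.where(x.ravel() < 2, np.sin(3 * x.ravel()), 2.0 + np.cos(3 * x.ravel()))
y += 0.1 * rng.standard_normal(200)

window, threshold = 20, 1.0
gp_cur = GaussianProcessRegressor(kernel=RBF(0.5), alpha=NOISE_VAR).fit(x[:50], y[:50])

for t in range(50 + window, 200, window):
    Xw, yw = x[t - window:t], y[t - window:t]
    gp_new = GaussianProcessRegressor(kernel=RBF(0.5), alpha=NOISE_VAR).fit(Xw, yw)
    # generalized-likelihood-ratio style score on the recent window
    if mean_log_pred(gp_new, Xw, yw) - mean_log_pred(gp_cur, Xw, yw) > threshold:
        print(f"changepoint flagged near x = {Xw[0, 0]:.2f}")
        gp_cur = gp_new   # switch to (or, in GP-NBC, possibly reuse) another model
```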
APA, Harvard, Vancouver, ISO, and other styles
14

Aguilar, Fargas Joan. "Prediction interval modeling using Gaussian process quantile regression." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/100361.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 62-65).
In this thesis a methodology to construct prediction intervals for a generic black-box point forecast model is presented. The prediction intervals are learned from the forecasts of the black-box model and the actual realizations of the forecasted variable by using quantile regression on the observed prediction error distribution, the distribution of which is not assumed. An independent meta-model that runs in parallel to the original point forecast model is responsible for learning and generating the prediction intervals, thus requiring no modification to the original setup. This meta-model uses both the inputs and output of the black-box model and calculates a lower and an upper bound for each of its forecasts with the goal that a predefined percentage of future realizations are included in the interval formed by both bounds. Metrics for the performance of the meta-model are established, paying special attention to the conditional interval coverage with respect to both time and the inputs. A series of case studies is performed to determine the capabilities of this approach and to compare it to standard practices.
by Joan Aguilar Fargas.
S.M. in Engineering and Management
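The meta-modelling idea, learning lower and upper quantiles of the forecasted variable from the black-box model's inputs and outputs without assuming an error distribution, can be sketched with any quantile regressor. The example below uses gradient-boosted quantile regression from scikit-learn as a stand-in for the Gaussian process quantile model of the thesis, with a hypothetical black-box forecaster and a 90% target coverage.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(3000, 1))
y = np.sin(X.ravel()) * X.ravel() + rng.normal(0, 0.2 + 0.1 * X.ravel())  # heteroscedastic

def black_box_forecast(X):                                # hypothetical point forecaster
    return np.sin(X.ravel()) * X.ravel() * 0.95           # slightly biased on purpose

# meta-model features: the black-box inputs and its own forecast
F = np.c_[X, black_box_forecast(X)]
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(F, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(F, y)

# evaluate empirical coverage of the learned 90% interval on fresh data
X_new = rng.uniform(0, 10, size=(1000, 1))
y_new = np.sin(X_new.ravel()) * X_new.ravel() + rng.normal(0, 0.2 + 0.1 * X_new.ravel())
F_new = np.c_[X_new, black_box_forecast(X_new)]
covered = (lo.predict(F_new) <= y_new) & (y_new <= hi.predict(F_new))
print("empirical coverage:", covered.mean())
```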
APA, Harvard, Vancouver, ISO, and other styles
15

Marque-Pucheu, Sophie. "Gaussian process regression of two nested computer codes." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC155/document.

Full text
Abstract:
This thesis deals with Gaussian process regression (metamodelling) of two nested computer codes. The term "two nested codes" refers here to a system of two chained codes: the output of the first code is one of the inputs of the second code. Both codes are expensive to run. In order to perform a sensitivity analysis of the output of the nested code, we seek to build a surrogate model of this output from a small number of observations. Three types of observations of the system exist: those of the chained code, those of the first code only, and those of the second code only. The surrogate model has to be accurate in the most likely regions of the input domain of the nested code. In this work, the surrogate models are constructed using the universal kriging framework, with a Bayesian approach. First, the case where there is no information about the intermediary variable (the output of the first code) is addressed. An innovative parametrization of the mean function of the Gaussian process modelling the nested code is proposed, based on the coupling of two polynomials. Then, the case with intermediary observations is addressed. A stochastic predictor based on the coupling of the predictors associated with the two codes is proposed, together with methods aiming at computing quickly the mean and the variance of this predictor. Finally, the methods obtained for the case of codes with scalar outputs are extended to the case of codes with high-dimensional vectorial outputs. We propose an efficient dimension reduction method for the high-dimensional vectorial input of the second code in order to facilitate the Gaussian process regression of this code. All the proposed methods are applied to numerical examples.
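When intermediary observations are available, the simplest version of the coupled predictor chains two GPs: one emulating the first code (input to intermediary variable) and one emulating the second code (input and intermediary variable to output), with the first GP's mean plugged into the second. The sketch below shows only this plug-in mean propagation on invented toy codes; the stochastic predictor in the thesis also propagates the first emulator's uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def code1(x):          # first (hypothetical) computer code
    return np.sin(3 * x) + x
def code2(x, t):       # second code, taking the first code's output as an input
    return np.cos(t) * np.exp(-0.5 * x)

X = np.linspace(0, 2, 15).reshape(-1, 1)
T = code1(X)                            # intermediary observations
Y = code2(X, T).ravel()                 # nested-code observations

kern = ConstantKernel() * RBF(0.5)
gp1 = GaussianProcessRegressor(kernel=kern, alpha=1e-8).fit(X, T.ravel())
gp2 = GaussianProcessRegressor(kernel=kern, alpha=1e-8).fit(np.c_[X, T], Y)

# chained (plug-in) prediction of the nested code at new inputs
X_new = np.linspace(0, 2, 6).reshape(-1, 1)
t_hat = gp1.predict(X_new)
y_hat = gp2.predict(np.c_[X_new, t_hat])
print(np.c_[y_hat, code2(X_new, code1(X_new)).ravel()])
```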
APA, Harvard, Vancouver, ISO, and other styles
16

Kamrath, Matthew. "Extending standard outdoor noise propagation models to complex geometries." Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA1038/document.

Full text
Abstract:
Noise engineering methods (e.g. ISO 9613-2 or CNOSSOS-EU) efficiently approximate sound levels from roads, railways, and industrial sources in cities. However, engineering methods are limited to simple box-shaped geometries. This dissertation develops and validates a hybrid method that extends the engineering methods to more complicated geometries by introducing an extra attenuation term representing the influence of a real object compared to a simplified object. Calculating the extra attenuation term requires reference calculations to quantify the difference between the complex and simplified objects. Since performing a reference computation for each propagation path is too computationally expensive, the extra attenuation term is linearly interpolated from a data table containing the corrections for many source and receiver positions and frequencies. The 2.5D boundary element method produces the levels for the real complex geometry and a simplified geometry, and subtracting these levels yields the corrections in the table. This dissertation validates the hybrid method for a T-shaped barrier with hard ground, with soft ground, and with buildings. All three cases demonstrate that the hybrid method is more accurate than standard engineering methods for complex cases.
APA, Harvard, Vancouver, ISO, and other styles
17

Davies, Alexander James. "Effective implementation of Gaussian process regression for machine learning." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Urry, Matthew. "Learning curves for Gaussian process regression on random graphs." Thesis, King's College London (University of London), 2013. https://kclpure.kcl.ac.uk/portal/en/theses/learning-curves-for-gaussian-process-regression-on-random-graphs(c1f5f395-0426-436c-989c-d0ade913423e).html.

Full text
Abstract:
Gaussian processes are a non-parametric method that can be used to learn both regression and classification rules from examples for arbitrary input spaces using the 'kernel trick'. They are well understood for inputs from Euclidean spaces; however, much less research has focused on other spaces. In this thesis I aim to at least partially resolve this. In particular I focus on the case where inputs are defined on the vertices of a graph and the task is to learn a function defined on the vertices from noisy examples, i.e. a regression problem. A challenging problem in the area of non-parametric learning is to predict the generalisation error as a function of the number of examples, i.e. the learning curve. I show that, unlike in the Euclidean case where predictions are either quantitatively accurate for a few specific cases or only qualitatively accurate for a broader range of situations, I am able to derive accurate learning curves for Gaussian processes on graphs for a wide range of input spaces given by ensembles of random graphs. I focus on the random walk kernel, but my results generalise to any kernel that can be written as a truncated sum of powers of the normalised graph Laplacian. I begin with a discussion of the properties of the random walk kernel, which can be viewed as an approximation of the ubiquitous squared exponential kernel in continuous spaces. I show that, compared to the squared exponential kernel, the random walk kernel has some surprising properties, including a non-trivial limiting form for some types of graphs. After investigating the limiting form of the kernel I then study its use as a prior, and show that the prior variance it induces can vary substantially from vertex to vertex. I propose a solution to this in the form of a local normalisation, where the prior scale at each vertex is normalised locally as desired. To drive home the point about kernel normalisation I then examine the differences between the two kernels when they are used as a Gaussian process prior over functions defined on the vertices of a graph. I show using numerical simulations that the locally normalised kernel leads to a probabilistically more plausible Gaussian process prior. After investigating the properties of the random walk kernel I then discuss the learning curves of a Gaussian process with a random walk kernel for both kernel normalisations in a matched scenario (where student and teacher are both Gaussian processes with matching hyperparameters). I show that by using the cavity method I can derive accurate predictions along the whole length of the learning curve that dramatically improve upon previously derived approximations for continuous spaces suitably extended to the discrete graph case. The derivation of the learning curve for the locally normalised kernel required an additional approximation in the resulting cavity equations. I therefore subsequently investigate this approximation in more detail using the replica method. I show that the locally normalised kernel leads to a highly non-trivial replica calculation, which eventually shows that the approximation used in the cavity analysis amounts to ignoring some consistency requirements between incoming cavity distributions. To finish this thesis I examine the learning curves for varying degrees of model mismatch. I focus in particular on a teacher distribution that is given by a Gaussian process with a random walk kernel but different hyperparameters. I show that in this case, by applying the cavity method, I am able once more to calculate accurate predictions of the learning curve. The resulting equations resemble the matched case over an inflated number of variables.
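The kernel under study can be written down directly: with the normalised graph Laplacian L and parameters a >= 2 and p, the random walk kernel is proportional to (I - L/a)^p, and the local normalisation discussed in the abstract rescales it so that every vertex has unit prior variance. A small NumPy sketch on a ring graph, chosen purely for illustration:

```python
import numpy as np

# ring graph on n vertices: adjacency, degrees, normalised Laplacian
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
d = A.sum(axis=1)
L = np.eye(n) - (A / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]

# random walk kernel: (I - L/a)^p, positive semi-definite for a >= 2
a, p = 2.0, 10
K = np.linalg.matrix_power(np.eye(n) - L / a, p)

# global normalisation (one overall scale) vs local normalisation (unit prior variance per vertex)
K_global = K / K.diagonal().mean()
K_local = K / np.sqrt(np.outer(K.diagonal(), K.diagonal()))

print(K_global.diagonal())   # varies from vertex to vertex on irregular graphs
print(K_local.diagonal())    # exactly 1 everywhere
```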
APA, Harvard, Vancouver, ISO, and other styles
19

Shah, Siddharth S. "Robust Heart Rate Variability Analysis using Gaussian Process Regression." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1293737259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Chen, Zexun. "Gaussian process regression methods and extensions for stock market prediction." Thesis, University of Leicester, 2017. http://hdl.handle.net/2381/40502.

Full text
Abstract:
Gaussian process regression (GPR) is a kernel-based nonparametric method that has been proved to be effective and powerful in many areas, including time series prediction. In this thesis, we focus on GPR and its extensions and then apply them to financial time series prediction. We first review GPR, followed by a detailed discussion about model structure, mean functions, kernels and hyper-parameter estimation. After that, we study the sensitivity of the hyper-parameters and the performance of GPR to the prior distribution for the initial values, and find that the initial hyper-parameter estimates depend on the choice of the specific kernels, with the priors having little influence on the performance of GPR in terms of predictability. Furthermore, GPR with a Student-t process (GPRT) and Student-t process regression (TPR) are introduced. All the above models, as well as the autoregressive moving average (ARMA) model, are applied to predict equity indices. We find that GPR and TPR show relatively considerable capability for predicting equity indices, so both of them are extended to state-space GPR (SSGPR) and state-space TPR (SSTPR) models, respectively. The overall results are that SSTPR outperforms SSGPR for equity index prediction. Based on the detailed results, a brief market efficiency analysis confirms that the developed markets are unpredictable on the whole. Finally, we propose and test the multivariate GPR (MV-GPR) and multivariate TPR (MV-TPR) for multi-output prediction, where the model settings, derivations and computations are all directly performed in matrix form, rather than vectorising the matrices involved in the existing method of GPR for multi-output prediction. The effectiveness of the proposed methods is illustrated through a simulated example. The proposed methods are then applied to stock market modelling, in which the Buy&Sell strategies generated by our proposed methods are shown to be profitable in equity investment.
APA, Harvard, Vancouver, ISO, and other styles
21

Wan, Zhong Yi Ph D. Massachusetts Institute of Technology. "Reduced-space Gaussian process regression forecast for nonlinear dynamical systems." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104565.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 93-97).
In this thesis work, we formulate a reduced-order data-driven strategy for the efficient probabilistic forecast of complex high-dimensional dynamical systems for which data-streams are available. The first step of this method consists of the reconstruction of the vector field in a reduced-order subspace of interest using Gaussian Process Regression (GPR). GPR simultaneously allows for the reconstruction of the vector field, as well as the estimation of the local uncertainty. The latter is due to (i) the local interpolation error and (ii) the truncation of the high-dimensional phase space, and it is analytically quantified in terms of the GPR hyperparameters. The second step involves the formulation of stochastic models that explicitly take into account the reconstructed dynamics and their uncertainty. For regions of the attractor where the training data points are not sufficiently dense for GPR to be effective, an adaptive blended scheme is formulated that guarantees correct statistical steady state properties. We examine the effectiveness of the proposed method on complex systems including the Lorenz 63, Lorenz 96, and Kuramoto-Sivashinsky systems, as well as a prototype climate model. We also study the performance of the proposed approach as the intrinsic dimensionality of the system attractor increases in highly turbulent regimes.
by Zhong Yi Wan.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
22

Kortesalmi, Linus. "Gaussian Process Regression-based GPS Variance Estimation and Trajectory Forecasting." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153126.

Full text
Abstract:
Spatio-temporal data is a commonly used source of information. Using machine learning to analyse this kind of data can lead to many interesting and useful insights. In this thesis project, a novel public transportation spatio-temporal dataset is explored and analysed. The dataset contains 282 GB of positional events, spanning two weeks of time, from all public transportation vehicles in Östergötland county, Sweden. From the data exploration, three high-level problems are formulated: bus stop detection, GPS variance estimation, and arrival time prediction, also called trajectory forecasting. The bus stop detection problem is briefly discussed and solutions are proposed. Gaussian process regression is an effective method for solving regression problems. The GPS variance estimation problem is solved via the use of a mixture of Gaussian processes. A mixture of Gaussian processes is also used to predict the arrival time for public transportation buses. The arrival time prediction is from one bus stop to the next, not for the whole trajectory. The result from the arrival time prediction is a distribution of arrival times, which can easily be applied to determine the earliest and latest expected arrival to the next bus stop, alongside the most probable arrival time. The naïve arrival time prediction model implemented has a root mean square error of 5 to 19 seconds. In general, the absolute error of the prediction model decreases over time in each respective segment. The result from the GPS variance estimation problem is a model that can compare the variance for different environments along the route of a given trajectory.
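As a minimal illustration of the arrival-time regression task described above (a single GP, not the mixture of Gaussian processes used in the thesis), one can regress segment travel times on time of day and read off an earliest, latest, and most probable arrival from the predictive distribution. The feature and all numbers below are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
hour = rng.uniform(6, 20, 300).reshape(-1, 1)                      # time of day
travel = 90 + 25 * np.exp(-0.5 * ((hour.ravel() - 8) / 1.0) ** 2)  # rush-hour bump (seconds)
travel += rng.normal(0, 8, 300)

gp = GaussianProcessRegressor(kernel=RBF(2.0) + WhiteKernel(50.0),
                              normalize_y=True).fit(hour, travel)

mu, sd = gp.predict(np.array([[8.0], [14.0]]), return_std=True)
for h, m, s in zip([8.0, 14.0], mu, sd):
    print(f"{h:>4}: most probable {m:.0f}s, interval {m - 2*s:.0f}s to {m + 2*s:.0f}s")
```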
APA, Harvard, Vancouver, ISO, and other styles
23

Szlachta, Wojciech Jerzy. "First principles interatomic potential for tungsten based on Gaussian process regression." Thesis, University of Cambridge, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648779.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Wågberg, Johan, and Viklund Emanuel Walldén. "Continuous Occupancy Mapping Using Gaussian Processes." Thesis, Linköpings universitet, Reglerteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81464.

Full text
Abstract:
The topic of this thesis is occupancy mapping for mobile robots, with an emphasis on a novel method for continuous occupancy mapping using Gaussian processes. In the new method, spatial correlation is accounted for in a natural way, and an a priori discretization of the area to be mapped is not necessary as within most other common methods. The main contribution of this thesis is the construction of a Gaussian process library for C++, and the use of this library to implement the continuous occupancy mapping algorithm. The continuous occupancy mapping is evaluated using both simulated and real world experimental data. The main result is that the method, in its current form, is not fit for online operations due to its computational complexity. By using approximations and ad hoc solutions, the method can be run in real time on a mobile robot, though not without losing many of its benefits.
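The continuous occupancy map can be illustrated with a GP classifier over 2-D point observations labelled occupied or free: the posterior class probability then serves as an occupancy probability at any query location, with no prior discretisation of the workspace. A toy scikit-learn sketch with synthetic laser-style data (the thesis implements its own C++ GP library, not this code):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# hits on a wall at x = 5 (occupied) and points sampled along free rays
occupied = np.c_[np.full(60, 5.0) + 0.05 * rng.standard_normal(60), rng.uniform(0, 5, 60)]
free = np.c_[rng.uniform(0, 4.5, 200), rng.uniform(0, 5, 200)]
X = np.vstack([occupied, free])
y = np.r_[np.ones(len(occupied)), np.zeros(len(free))]

gpc = GaussianProcessClassifier(kernel=RBF(0.5)).fit(X, y)

# query occupancy probability anywhere in continuous space
queries = np.array([[5.0, 2.5], [2.0, 2.5], [4.7, 1.0]])
print(gpc.predict_proba(queries)[:, 1])   # probability of "occupied"
```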
APA, Harvard, Vancouver, ISO, and other styles
25

Hoolohan, Victoria Ruth. "The use of Gaussian process regression for wind forecasting in the UK." Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/21544/.

Full text
Abstract:
Wind energy has experienced remarkable growth in recent years, both globally and in the UK. As a low carbon source of electricity this progress has been, and continues to be, encouraged through legally binding targets and government policy. However, wind energy is non-dispatchable and difficult to predict in advance. In order to support continued development in the wind industry, increasingly accurate prediction techniques are sought to provide forecasts of wind speed and power output. This thesis develops and tests a hybrid numerical weather prediction (NWP) and Gaussian process regression (GPR) model for the prediction of wind speed and power output from 3 hours to 72 hours in advance and considers the impact of incorporating atmospheric stability in the prediction model. In addition to this, the validity of the model as a probabilistic technique for wind power output forecasting is tested and the economic value of a forecast in the UK electricity market is discussed. To begin with, the hybrid NWP and GPR model is developed and tested for prediction of 10 m wind speeds at 15 sites across the UK and hub height wind speeds at 1 site. Atmospheric stability is incorporated in the prediction model first by subdividing input data by Pasquill-Gifford-Turner (PGT) stability class, and then by using the predicted Obukhov length stability parameter as an input in the model. The model is developed further to provide wind power output predictions, both for a single turbine and for 22 wind farms distributed across the UK. This shows that the hybrid NWP and GPR model provides good predictions for wind power output in comparison to other methods. The hybrid NWP and GPR model for the prediction of near-surface wind speeds leads to a reduction in mean absolute percentage error (MAPE) of approximately 2% in comparison to the Met Office NWP model. Furthermore, the use of the Obukhov length stability parameter as an input reduces wind power prediction errors in comparison to the same model without this parameter for the single turbine and for offshore wind farms but not for onshore wind farms. The inclusion of the Obukhov length stability parameter in the hub height wind speed prediction model leads to a reduction in MAPE of between 2 and 5%, dependent on the forecast horizon, over the model where Obukhov length is omitted. For the prediction of wind power at offshore wind farms, the inclusion of the Obukhov length stability parameter in the hybrid NWP and GPR model leads to a reduction in normalised mean absolute error (NMAE) of between 0.5 and 2%. The performance of the hybrid NWP and GPR model is also evaluated from a probabilistic perspective, with a particular focus on the appropriate likelihood function for the GPR model. The results suggest that using a beta likelihood function in the hybrid model for wind power prediction leads to better probabilistic predictions than implementing the same model with a Gaussian likelihood function. The results suggest an improvement of approximately 1% in continuous ranked probability score (CRPS) when the beta likelihood function is used rather than the Gaussian likelihood function. After considering new techniques for the prediction of wind speed and power output, the final chapter in this thesis considers the economic benefit of implementing a forecast. The economic value of a wind power forecast is evaluated from the perspective of a wind generator participating in the UK electricity market.
The impact of forecast accuracy and the change from a dual imbalance price to a single imbalance price is investigated. The results show that a reduction in random error in a wind power forecast does not have a large impact on the average price per MWh generated. However, it has a more significant impact on the variation in price received on an hourly basis. When the systematic bias in a forecast was zero, a forecast with NMAE of 20% of capacity results in less than £0.05 deviation in mean price per MWh in comparison with a perfect forecast. However, the same forecast leads to an increase in standard deviation of up to £21/MWh. This indicates that whilst a reduction in random error in a forecast might not lead to an improvement in mean price per MWh, it can lead to a more stable income stream. In addition to this, Chapter 6 considers the use of the probabilistic and deterministic forecasts developed throughout this thesis to choose an appropriate value to bid in the UK electricity market. This shows that using a probabilistic forecast can limit a generator’s exposure to variable prices and decrease the standard deviation in hourly prices.
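The hybrid structure described above amounts to a GP regression whose inputs are NWP-forecast quantities (here a forecast wind speed and an Obukhov-length stability parameter) and whose target is the observed wind speed. The toy sketch below uses synthetic numbers purely to show the input/output arrangement; it is not the thesis's model or data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)
nwp_speed = rng.uniform(2, 18, 500)                       # NWP forecast wind speed (m/s)
inv_obukhov = rng.normal(0, 0.05, 500)                    # stability parameter 1/L (1/m)
# "observed" speed: biased NWP forecast plus a stability-dependent correction
observed = 0.9 * nwp_speed + 10 * inv_obukhov + rng.normal(0, 0.5, 500)

X = np.c_[nwp_speed, inv_obukhov]
gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF([5.0, 0.05]) + WhiteKernel(0.25),
    normalize_y=True).fit(X, observed)

mu, sd = gp.predict(np.array([[10.0, 0.02], [10.0, -0.02]]), return_std=True)
print(mu, sd)   # stability-aware corrected forecasts with uncertainty
```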
APA, Harvard, Vancouver, ISO, and other styles
26

Alvarez, Mauricio A. "Convolved Gaussian process priors for multivariate regression with applications to dynamical systems." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/convolved-gaussian-process-priors-for-multivariate-regression-with-applications-to-dynamical-systems(0fe42df3-6dce-48ec-a74d-a6ecaf249d74).html.

Full text
Abstract:
In this thesis we address the problem of modeling correlated outputs using Gaussian process priors. Applications of modeling correlated outputs include the joint prediction of pollutant metals in geostatistics and multitask learning in machine learning. Defining a Gaussian process prior for correlated outputs translates into specifying a suitable covariance function that captures dependencies between the different output variables. Classical models for obtaining such a covariance function include the linear model of coregionalization and process convolutions. We propose a general framework for developing multiple output covariance functions by performing convolutions between smoothing kernels particular to each output and covariance functions that are common to all outputs. Both the linear model of coregionalization and the process convolutions turn out to be special cases of this framework. Practical aspects of the proposed methodology are studied in this thesis. They involve the use of domain-specific knowledge for defining relevant smoothing kernels, efficient approximations for reducing computational complexity and a novel method for establishing a general class of nonstationary covariances with applications in robotics and motion capture data. Reprints of the publications that appear at the end of this document report case studies and experimental results in sensor networks, geostatistics and motion capture data that illustrate the performance of the different methods proposed.
APA, Harvard, Vancouver, ISO, and other styles
27

Seidu, Mohammed Nazib. "Predicting Bankruptcy Risk: A Gaussian Process Classification Model." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119120.

Full text
Abstract:
This thesis develops a Gaussian process model for bankruptcy risk classification and prediction in a Bayesian framework. Gaussian processes and linear logistic models are discriminative methods used for classification and prediction purposes. The Gaussian process model is much more flexible than the linear logistic model, with smoothness encoded in the kernel, giving it the potential to improve the modeling of the highly nonlinear relationships between accounting ratios and bankruptcy risk. We compare linear logistic regression with the Gaussian process classification model in the context of bankruptcy prediction. The posterior distributions of the GPs are non-Gaussian, and we investigate the effectiveness of the Laplace approximation and the expectation propagation approximation across several different kernels for the Gaussian process. The approximate methods are compared to the gold standard of Markov Chain Monte Carlo (MCMC) sampling from the posterior. The dataset is an unbalanced panel consisting of 21846 yearly observations for about 2000 corporate firms in Sweden recorded between 1991 and 2008. We used 5000 observations to train the models and the rest for evaluating the predictions. We find that the choice of covariance kernel affects the GP model's performance, and we find support for the squared exponential covariance function (SEXP) as an optimal kernel. The empirical evidence suggests that a multivariate Gaussian process classifier with a squared exponential kernel can effectively improve bankruptcy risk prediction, with high accuracy (90.19 percent) compared to the linear logistic model (83.25 percent).
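The model comparison in the abstract can be mimicked on synthetic data with scikit-learn, whose GaussianProcessClassifier happens to use the same Laplace approximation discussed above; the squared exponential (RBF) kernel plays the role of the SEXP kernel. The accounting-ratio features and all numbers here are invented for illustration and bear no relation to the thesis dataset.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
ratios = rng.standard_normal((n, 3))                      # fake accounting ratios
# nonlinear "true" bankruptcy risk, so the GP has something to gain over the linear model
logit = 2 * np.sin(ratios[:, 0]) + ratios[:, 1] ** 2 - 1.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

Xtr, Xte, ytr, yte = train_test_split(ratios, y, test_size=0.5, random_state=0)

gpc = GaussianProcessClassifier(kernel=RBF(1.0)).fit(Xtr, ytr)   # Laplace approximation
logistic = LogisticRegression(max_iter=1000).fit(Xtr, ytr)

print("GP classifier accuracy:     ", gpc.score(Xte, yte))
print("logistic regression accuracy:", logistic.score(Xte, yte))
```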
APA, Harvard, Vancouver, ISO, and other styles
28

Adamou, Maria. "Bayesian optimal designs for the Gaussian Process Model." Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/373881/.

Full text
Abstract:
This thesis is concerned with methodology for finding Bayesian optimal designs for the Gaussian process model when the aim is precise prediction at unobserved points. The fundamental problem addressed is that the design selection criterion obtained from the Bayesian decision theoretic approach is often, in practice, computationally infeasible to apply. We propose an approximation to the objective function in the criterion and develop this approximation for spatial and spatio-temporal studies, and for computer experiments. We provide empirical evidence and theoretical insights to support the approximation. For spatial studies, we use the approximation to find optimal designs for the general sensor placement problem, and also to find the best sensors to remove from an existing monitoring network. We assess the performance of the criterion using a prospective study and also from a retrospective study based on an air pollution dataset. We investigate the robustness of designs to misspecification of the mean function and correlation function in the model through a factorial sensitivity study that compares the performance of optimal designs for the sensor placement problem under different assumptions. In computer experiments, using a Gaussian process model as a surrogate for the output from a computer model, we find optimal designs for prediction using the proposed approximation. A comparison is made of optimal designs obtained from commonly used model-free methods such as the maximin criterion and Latin hypercube sampling via both the space-filling and prediction properties of the designs. For spatio-temporal studies, we extend our proposed approximation to include both space and time dependency and investigate the approximation for a particular choice of separable spatio-temporal correlation function. Two cases are considered: (i) the temporal design is fixed and an optimal spatial design is found; (ii) both optimal temporal and spatial designs are found. For all three of the application areas, we found that the choice of optimal design depends on the degree and the range of the correlation in the Gaussian process model.
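The practical difficulty the thesis addresses, the exact decision-theoretic criterion being too expensive, is commonly attacked by optimising a cheaper prediction-oriented objective. A common baseline, sketched below in plain NumPy, greedily adds design points that most reduce the average GP posterior predictive variance over a grid of prediction sites, for a fixed squared exponential correlation function; this is a generic illustration, not the approximation proposed in the thesis.

```python
import numpy as np

def sq_exp(X1, X2, ls=0.2):
    """Squared exponential correlation function."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def avg_pred_variance(design, grid, noise=1e-6):
    """Average GP posterior predictive variance over the prediction grid."""
    K = sq_exp(design, design) + noise * np.eye(len(design))
    Ks = sq_exp(design, grid)
    return np.mean(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0))

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 1, size=(200, 2))   # possible sensor locations
grid = rng.uniform(0, 1, size=(400, 2))         # where predictions are needed

design_idx = []
for _ in range(8):                               # choose an 8-point design greedily
    remaining = [i for i in range(len(candidates)) if i not in design_idx]
    scores = [avg_pred_variance(candidates[design_idx + [i]], grid) for i in remaining]
    design_idx.append(remaining[int(np.argmin(scores))])

print("selected design points:\n", candidates[design_idx])
```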
APA, Harvard, Vancouver, ISO, and other styles
29

Kapat, Prasenjit. "Role of Majorization in Learning the Kernel within a Gaussian Process Regression Framework." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1316521301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Wikland, Love. "Early-Stage Prediction of Lithium-Ion Battery Cycle Life Using Gaussian Process Regression." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273619.

Full text
Abstract:
Data-driven prediction of battery health has gained increased attention over the past couple of years, in both academia and industry. Accurate early-stage predictions of battery performance would create new opportunities regarding production and use. Using data from only the first 100 cycles, in a data set of 124 cells where lifetimes span between 150 and 2300 cycles, this work combines parametric linear models with non-parametric Gaussian process regression to achieve cycle lifetime predictions with an overall accuracy of 8.8% mean error. This work presents a relevant contribution to current research as this combination of methods is previously unseen when regressing battery lifetime on a high dimensional feature space. The study and the results presented further show that Gaussian process regression can serve as a valuable contributor in future data-driven implementations of battery health predictions.
Data-driven prediction of battery health has received increasing attention in recent years, in both academia and industry. Accurate early-stage predictions of battery performance could create new opportunities for production and use. Using data from only the first 100 cycles, in a data set of 124 cells with lifetimes ranging from 150 to 2300 cycles, this thesis combines parametric linear models with non-parametric Gaussian process regression to achieve lifetime predictions with an average error of 8.8%. The study constitutes a relevant contribution to current research, since the combination of methods used has not previously been exploited for regression of battery lifetime on a high-dimensional feature space. The study and the obtained results show that Gaussian process regression can contribute to future data-driven implementations of battery health prediction.
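As a rough illustration of the non-parametric part of the approach described above, the following sketch fits a Gaussian process regression model to synthetic early-cycle features and evaluates the mean relative error on held-out cells. The features, data and settings are hypothetical placeholders, not the cited 124-cell dataset or the thesis's combined parametric/non-parametric pipeline.

# Minimal sketch of the GP-regression step only, on synthetic stand-in features;
# the feature definitions and data here are hypothetical, not the thesis's dataset.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_cells = 124
X = rng.normal(size=(n_cells, 6))                  # early-cycle summary features
true_log_life = 3.0 + X @ np.array([0.3, -0.2, 0.1, 0.0, 0.25, -0.1])
y = true_log_life + rng.normal(scale=0.05, size=n_cells)   # log10(cycle life)

X_std = StandardScaler().fit_transform(X)
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(6)) + WhiteKernel(1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_std[:80], y[:80])

pred, std = gpr.predict(X_std[80:], return_std=True)
mean_pct_error = np.mean(np.abs(10**pred - 10**y[80:]) / 10**y[80:]) * 100
print(f"mean relative error on held-out cells: {mean_pct_error:.1f}%")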
APA, Harvard, Vancouver, ISO, and other styles
31

Fry, James Thomas. "Hierarchical Gaussian Processes for Spatially Dependent Model Selection." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84161.

Full text
Abstract:
In this dissertation, we develop a model selection and estimation methodology for nonstationary spatial fields. Large, spatially correlated data often cover a vast geographical area. However, local spatial regions may have different mean and covariance structures. Our methodology accomplishes three goals: (1) cluster locations into small regions with distinct, stationary models, (2) perform Bayesian model selection within each cluster, and (3) correlate the model selection and estimation in nearby clusters. We utilize the Conditional Autoregressive (CAR) model and Ising distribution to provide intra-cluster correlation on the linear effects and model inclusion indicators, while modeling inter-cluster correlation with separate Gaussian processes. We apply our model selection methodology to a dataset involving the prediction of Brook trout presence in subwatersheds across Pennsylvania. We find that our methodology outperforms the stationary spatial model and that different regions in Pennsylvania are governed by separate Gaussian process regression models.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Xiaoke. "Fault-tolerant predictive control : a Gaussian process model based approach." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Su, Weiji. "Flexible Joint Hierarchical Gaussian Process Model for Longitudinal and Recurrent Event Data." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1595850414934069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Rezvani, Arany Roushan. "Gaussian Process Model Predictive Control for Autonomous Driving in Safety-Critical Scenarios." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161430.

Full text
Abstract:
This thesis is concerned with model predictive control (MPC) within the field of autonomous driving. MPC requires a model of the system to be controlled. Since a vehicle is expected to handle a wide range of driving conditions, it is crucial that the model of the vehicle dynamics can account for this. Differences in road grip caused by snowy, icy or muddy roads change the driving dynamics, and relying on a single model based on ideal conditions could lead to dangerous behaviour. This work investigates the use of Gaussian processes for learning a model that can account for varying road friction coefficients. This model is incorporated as an extension to a nominal vehicle model. A double lane change scenario is considered, and the aim is to learn a GP model of the disturbance from previous driving experiences with road friction coefficients of 0.4 and 0.6, collected with a regular MPC controller. These data are then used to train the GP model. The resulting GPMPC controller is compared with the regular MPC controller for trajectory tracking. The results show that the obtained GP models in most cases correctly predict the model error one prediction step ahead. For multi-step predictions the results vary more, with some cases showing an improved prediction with the GP model compared to the nominal model. In all cases, the GPMPC controller gives better trajectory tracking than the MPC controller while using less control input.
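The core idea described above, learning a GP model of the discrepancy between a nominal model and the true dynamics and adding its prediction to the nominal one-step prediction, can be sketched as follows. The scalar dynamics, controller data and names here are hypothetical stand-ins, not the thesis's vehicle model or MPC formulation.

# Sketch of learning a one-step model-error (disturbance) GP, assuming a simple
# scalar nominal model; the dynamics and names are hypothetical placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def true_step(x, u):          # "real" system, unknown to the controller
    return 0.9 * x + 0.5 * u + 0.2 * np.sin(x)

def nominal_step(x, u):       # nominal model used by the controller
    return 0.9 * x + 0.5 * u

# Collect training data from closed-loop driving with the nominal controller.
rng = np.random.default_rng(2)
xs, us = rng.uniform(-2, 2, 200), rng.uniform(-1, 1, 200)
residuals = np.array([true_step(x, u) - nominal_step(x, u) for x, u in zip(xs, us)])

gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-4), normalize_y=True)
gp.fit(np.column_stack([xs, us]), residuals)

# Corrected one-step prediction: nominal model + GP mean of the disturbance.
x0, u0 = 1.2, -0.3
corr, std = gp.predict(np.array([[x0, u0]]), return_std=True)
print(nominal_step(x0, u0) + corr[0], "+/-", std[0])

Inside an MPC loop, such corrected predictions (and their uncertainties) would be propagated over the whole prediction horizon rather than for a single step.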
APA, Harvard, Vancouver, ISO, and other styles
35

Parker, Benjamin W. (Benjamin Wade). "An automatic, multi-fidelity framework for optimizing the performance of super-cavitating hydrofoils using Gaussian process regression and Bayesian optimization." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118719.

Full text
Abstract:
Thesis: Nav. E., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 99-100).
Computer automated design of complex physical systems is often limited by the computational resources required for the high precision solvers. Determining an optimum design necessitates high accuracy simulations due to the multi-dimensional design space and the interconnectedness of the constraint and objective quantities. This thesis presents an automated framework for iterating through a design loop that includes both physics-based computer simulations and surrogate model training using machine learning techniques. To alleviate the computational burden and efficiently explore the design space, a surrogate model is used for each quantity of interest that cannot be found deterministically. Further reduction of the computational cost is accomplished by utilizing both low- and high-fidelity data to build the response surfaces. These response surface models are trained using multi-fidelity Gaussian process regression. The models are iteratively improved using Bayesian optimization and additional high-fidelity simulations that are automatically initiated within the design loop. In addition, Bayesian optimization is used to automatically determine the optimum kernel for the Gaussian process regression model. The feasibility of this framework is demonstrated by designing a 2D super-cavitating hydrofoil and comparing the optimum shape found with a known benchmark design. This automated multi-fidelity Bayesian optimization framework can aid in taking the human out of the design loop, thus freeing manpower resources and removing potential human bias.
by Benjamin W. Parker.
Nav. E.
S.M.
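A single-fidelity sketch of the surrogate-plus-Bayesian-optimization loop described in this abstract is given below: a GP surrogate is refit after each new evaluation, and the next design point is chosen by expected improvement. The toy objective and all settings are hypothetical; the multi-fidelity co-kriging, automatic kernel selection and hydrofoil solver of the thesis are not reproduced.

# Single-fidelity sketch of the surrogate-plus-Bayesian-optimization loop; the
# objective here is a toy stand-in, not the hydrofoil solver or co-kriging model.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_objective(x):                 # placeholder for a CFD evaluation
    return np.sin(3 * x) + 0.5 * x**2

def expected_improvement(mu, sigma, best):
    z = (best - mu) / np.maximum(sigma, 1e-12)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(4, 1))         # initial design
y = expensive_objective(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(15):                          # BO iterations
    gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_objective(x_next[0]))

print("best x:", X[np.argmin(y)].item(), "best f:", y.min())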
APA, Harvard, Vancouver, ISO, and other styles
36

Tavares, Ivo Alberto Valente. "Uncertainty quantification with a Gaussian Process Prior : an example from macroeconomics." Doctoral thesis, Instituto Superior de Economia e Gestão, 2021. http://hdl.handle.net/10400.5/21444.

Full text
Abstract:
Doutoramento em Matemática Aplicada à Economia e Gestão
This thesis may be broadly divided into four parts. In the first part, we review the state of the art on misspecification in macroeconomics, and what the contribution of a relatively new area of research called Uncertainty Quantification has so far been to macroeconomics. These reviews are essential to contextualize the contribution of this thesis to research dedicated to correcting non-linear misspecification and to accounting for several other sources of uncertainty when modelling from an economic perspective. In the next three parts, we give an example, using the same simple DSGE model from macroeconomic theory, of how researchers may quantify uncertainty in a state-space model using a discrepancy term with a Gaussian process prior. In the second part of the thesis, we used a full Gaussian process (GP) prior on the discrepancy term. Our experiments showed that, despite the heavy computational constraints of the full GP method, we still obtained a very interesting forecasting performance with such a restricted sample size, when compared with similar uncorrected DSGE models or with DSGE models corrected using state-of-the-art time-series methods, such as imposing a VAR on the observation error of the state-space model. In the third part of our work, we improved the computational performance of the previous method using what has been referred to in the literature as the Hilbert Reduced Rank GP. This method has close links to functional analysis, the spectral theorem for normal operators, and partial differential equations. It did improve the computational processing time, albeit only slightly, and was accompanied by a similarly slight decrease in forecasting performance. The fourth part of our work examined how our method accounts for model uncertainty just before and during the great financial crisis of 2007-2009. Our technique allowed us to capture the crisis, albeit with reduced applicability, possibly due to computational constraints. This part was also used to deepen the understanding of our model uncertainty quantification technique with a GP, and identifiability issues were studied. One of our overall conclusions was that more research is needed before this uncertainty quantification technique can become part of the toolbox of central bankers and researchers for forecasting economic fluctuations, especially regarding the computational performance of either method.
APA, Harvard, Vancouver, ISO, and other styles
37

Rezaie, Reza. "Gaussian Conditionally Markov Sequences: Theory with Application." ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/td/2679.

Full text
Abstract:
Markov processes have been widely studied and used for modeling problems. A Markov process has two main components (i.e., an evolution law and an initial distribution). Markov processes are not suitable for modeling some problems, for example, the problem of predicting a trajectory with a known destination. Such a problem has three main components: an origin, an evolution law, and a destination. The conditionally Markov (CM) process is a powerful mathematical tool for generalizing the Markov process. One class of CM processes, called $CM_L$, fits the above components of trajectories with a destination. The CM process combines the Markov property and conditioning. The CM process has various classes that are more general and powerful than the Markov process, are useful for modeling various problems, and possess many Markov-like attractive properties. Reciprocal processes were introduced in connection to a problem in quantum mechanics and have been studied for years. But the existing viewpoint for studying reciprocal processes is not revealing and may lead to complicated results which are not necessarily easy to apply. We define and study various classes of Gaussian CM sequences, obtain their models and characterizations, study their relationships, demonstrate their applications, and provide general guidelines for applying Gaussian CM sequences. We develop various results about Gaussian CM sequences to provide a foundation and tools for general application of Gaussian CM sequences including trajectory modeling and prediction. We initiate the CM viewpoint to study reciprocal processes, demonstrate its significance, obtain simple and easy to apply results for Gaussian reciprocal sequences, and recommend studying reciprocal processes from the CM viewpoint. For example, we present a relationship between CM and reciprocal processes that provides a foundation for studying reciprocal processes from the CM viewpoint. Then, we obtain a model for nonsingular Gaussian reciprocal sequences with white dynamic noise, which is easy to apply. Also, this model is extended to the case of singular sequences and its application is demonstrated. A model for singular sequences has not been possible for years based on the existing viewpoint for studying reciprocal processes. This demonstrates the significance of studying reciprocal processes from the CM viewpoint.
APA, Harvard, Vancouver, ISO, and other styles
38

Heller, Collin M. "A computational model of engineering decision making." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50272.

Full text
Abstract:
The research objective of this thesis is to formulate and demonstrate a computational framework for modeling the design decisions of engineers. This framework is intended to be descriptive in nature as opposed to prescriptive or normative; the output of the model represents a plausible result of a designer's decision making process. The framework decomposes the decision into three elements: the problem statement, the designer's beliefs about the alternatives, and the designer's preferences. Multi-attribute utility theory is used to capture designer preferences for multiple objectives under uncertainty. Machine-learning techniques are used to store the designer's knowledge and to make Bayesian inferences regarding the attributes of alternatives. These models are integrated into the framework of a Markov decision process to simulate multiple sequential decisions. The overall framework enables the designer's decision problem to be transformed into an optimization problem statement; the simulated designer selects the alternative with the maximum expected utility. Although utility theory is typically viewed as a normative decision framework, the perspective in this research is that the approach can be used in a descriptive context for modeling rational and non-time critical decisions by engineering designers. This approach is intended to enable the formalisms of utility theory to be used to design human subjects experiments involving engineers in design organizations based on pairwise lotteries and other methods for preference elicitation. The results of these experiments would substantiate the selection of parameters in the model to enable it to be used to diagnose potential problems in engineering design projects. The purpose of the decision-making framework is to enable the development of a design process simulation of an organization involved in the development of a large-scale complex engineered system such as an aircraft or spacecraft. The decision model will allow researchers to determine the broader effects of individual engineering decisions on the aggregate dynamics of the design process and the resulting performance of the designed artifact itself. To illustrate the model's applicability in this context, the framework is demonstrated on three example problems: a one-dimensional decision problem, a multidimensional turbojet design problem, and a variable fidelity analysis problem. Individual utility functions are developed for designers in a requirements-driven design problem and then combined into a multi-attribute utility function. Gaussian process models are used to represent the designer's beliefs about the alternatives, and a custom covariance function is formulated to more accurately represent a designer's uncertainty in beliefs about the design attributes.
APA, Harvard, Vancouver, ISO, and other styles
39

Nagy, Béla. "Valid estimation and prediction inference in analysis of a computer model." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1561.

Full text
Abstract:
Computer models or simulators are becoming increasingly common in many fields in science and engineering, powered by the phenomenal growth in computer hardware over the past decades. Many of these simulators implement a particular mathematical model as a deterministic computer code, meaning that running the simulator again with the same input gives the same output. Often running the code involves some computationally expensive tasks, such as solving complex systems of partial differential equations numerically. When simulator runs become too long, it may limit their usefulness. In order to overcome time or budget constraints by making the most out of limited computational resources, a statistical methodology has been proposed, known as the "Design and Analysis of Computer Experiments". The main idea is to run the expensive simulator only at a relatively few, carefully chosen design points in the input space, and based on the outputs construct an emulator (statistical model) that can emulate (predict) the output at new, untried locations at a fraction of the cost. This approach is useful provided that we can measure how much the predictions of the cheap emulator deviate from the real response surface of the original computer model. One way to quantify emulator error is to construct pointwise prediction bands designed to envelope the response surface and make assertions that the true response (simulator output) is enclosed by these envelopes with a certain probability. Of course, to be able to make such probabilistic statements, one needs to introduce some kind of randomness. A common strategy that we use here is to model the computer code as a random function, also known as a Gaussian stochastic process. We concern ourselves with smooth response surfaces and use the Gaussian covariance function that is ideal in cases when the response function is infinitely differentiable. In this thesis, we propose Fast Bayesian Inference (FBI) that is both computationally efficient and can be implemented as a black box. Simulation results show that it can achieve remarkably accurate prediction uncertainty assessments in terms of matching coverage probabilities of the prediction bands and the associated reparameterizations can also help parameter uncertainty assessments.
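The basic emulation-with-prediction-bands idea described above can be sketched as follows: fit a GP with a Gaussian (squared-exponential) covariance to a few runs of a deterministic toy "simulator" and form pointwise approximate 95% bands. This is a generic illustration under assumed settings, not the Fast Bayesian Inference method proposed in the thesis.

# Sketch of a GP emulator for a deterministic code with pointwise prediction
# bands; the "simulator" is a toy function, and this is not the thesis's FBI method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):                       # stand-in for an expensive computer code
    return np.sin(2 * np.pi * x) + 0.3 * x

X_design = np.linspace(0, 1, 8).reshape(-1, 1)      # a few carefully chosen runs
y_design = simulator(X_design).ravel()

# Gaussian (squared-exponential) covariance suits smooth response surfaces.
emulator = GaussianProcessRegressor(ConstantKernel() * RBF(0.2), alpha=1e-10)
emulator.fit(X_design, y_design)

X_new = np.linspace(0, 1, 200).reshape(-1, 1)
mean, sd = emulator.predict(X_new, return_std=True)
lower, upper = mean - 1.96 * sd, mean + 1.96 * sd    # pointwise ~95% bands
coverage = np.mean((simulator(X_new).ravel() >= lower) &
                   (simulator(X_new).ravel() <= upper))
print(f"empirical pointwise coverage on the grid: {coverage:.2f}")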
APA, Harvard, Vancouver, ISO, and other styles
40

Fadikar, Arindam. "Stochastic Computer Model Calibration and Uncertainty Quantification." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91985.

Full text
Abstract:
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn based on observations from the corresponding computer simulation model. These computer models are calibrated based on limited ground truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional computer model in that repeated execution of a stochastic simulation results in different outcomes. This additional uncertainty in the simulation model must be handled accordingly in any calibration setup. A Gaussian process (GP) emulator replaces the actual computer simulation when it is expensive to run and the budget is limited. However, a traditional GP interpolator models the mean and/or variance of the simulation output as a function of the input. For a simulation where the marginal Gaussianity assumption is not appropriate, it does not suffice to emulate only the mean and/or variance. We present two different approaches to addressing the non-Gaussian behavior of an emulator: (1) incorporating quantile regression in the GP for multivariate output, and (2) approximating the output with a finite mixture of Gaussians. These emulators are also used to calibrate and make forward predictions in the context of an agent-based disease model of the 2014 Ebola epidemic outbreak in West Africa. The third approach employs a sequential scheme which periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation.
Doctor of Philosophy
Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific only to the scientific model. More often than not, optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as the inverse problem, i.e. inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed for the calibration and prediction of a stochastic disease simulation model which simulates the contagion of an infectious disease through human-to-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City, USA.
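As a loose illustration of the quantile-based emulation idea mentioned above, the sketch below runs replicates of a toy stochastic "simulator" at each design point, computes empirical quantiles, and fits one GP per quantile. It is a simplification under hypothetical settings, not the dissertation's quantile-regression GP or mixture-of-Gaussians emulators, nor the Ebola model.

# Simplified sketch of quantile emulation for a stochastic simulator: fit one GP
# per empirical quantile of replicated runs. The simulator here is a toy stand-in.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def stochastic_simulator(x, rng):       # e.g. one realisation of an epidemic curve summary
    return np.sin(2 * np.pi * x) + rng.gamma(shape=2.0, scale=0.2 * (1 + x))

rng = np.random.default_rng(4)
design = np.linspace(0, 1, 12)
reps = np.array([[stochastic_simulator(x, rng) for _ in range(50)] for x in design])

quantiles = [0.1, 0.5, 0.9]
emulators = {}
for q in quantiles:
    yq = np.quantile(reps, q, axis=1)                     # empirical quantile per design point
    gp = GaussianProcessRegressor(RBF(0.3) + WhiteKernel(1e-3), normalize_y=True)
    emulators[q] = gp.fit(design.reshape(-1, 1), yq)

x_new = np.array([[0.37]])
for q, gp in emulators.items():
    print(f"predicted {int(q * 100)}th percentile at x=0.37:", gp.predict(x_new)[0])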
APA, Harvard, Vancouver, ISO, and other styles
41

Tran, Giang Thanh. "Developing a multi-level Gaussian process emulator of an Atmospheric General Circulation Model for palaeoclimate modelling." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/412553/.

Full text
Abstract:
The study of past climates provides a unique opportunity to test our understanding of the Earth system and our confidence in climate models. The nature of this subject requires a fine balance between complexity and efficiency. While comprehensive models can capture the system's behaviour more realistically, fast but less accurate models are capable of integrating over the long timescales associated with palaeoclimatology. In this thesis, a statistical approach is proposed to address the limitations of our simple atmospheric module in simulating glacial climates, by incorporating a statistical surrogate of a general circulation model of the atmosphere into our Earth system modelling framework, GENIE. To utilise the available spectrum of models of different complexities, a multi-level Gaussian process (GP) emulation technique is proposed to establish the link between a computationally expensive atmospheric model, PLASIM (Planet Simulator), and a cheaper model, EMBM (energy-moisture balance model). The method is first demonstrated by emulating a scalar summary quantity. A dimensional reduction technique is then introduced, allowing the high-dimensional model outputs to be emulated as functions of high-dimensional boundary forcing inputs. Even though the two atmospheric models chosen are structurally unrelated, GP emulators of PLASIM atmospheric variables are successfully constructed using EMBM as a fast approximation. With the extra information gained from the cheap model, the emulators of PLASIM's 2-D surface output fields are built at a reduced computational cost. The emulated quantities are validated against simulated values, showing that the ensemble-wide behaviour of the spatial fields is well captured. Finally, the emulator of PLASIM's wind field is incorporated into GENIE, providing an interactive statistical wind field which responds to changes in the boundary condition described by the ocean module. While exhibiting certain limitations due to the structural bias in PLASIM's wind, the new hybrid model introduces additional variation to the over-diffusive spatial outputs of EMBM without incurring a substantial computational cost.
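The dimension-reduction step described above, emulating high-dimensional output fields through a low-dimensional representation, can be sketched with ordinary (single-level) GPs as follows: project synthetic output fields onto principal components and emulate each retained component as a function of the inputs. The multi-level, cheap-model-assisted structure linking EMBM and PLASIM is not reproduced; all data and settings are hypothetical.

# Sketch of the dimension-reduction step only: emulate a high-dimensional output
# field via PCA scores, one single-level GP per retained component. The multi-level
# (cheap-model-assisted) structure of the thesis is not reproduced here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)
n_runs, n_inputs, n_grid = 40, 3, 500
X = rng.uniform(size=(n_runs, n_inputs))                 # boundary-forcing inputs
basis = rng.normal(size=(4, n_grid))                     # latent spatial patterns
fields = (X @ rng.normal(size=(n_inputs, 4))) @ basis    # simulated output fields

pca = PCA(n_components=4).fit(fields)
scores = pca.transform(fields)                           # low-dimensional targets

gps = [GaussianProcessRegressor(RBF(np.ones(n_inputs)), normalize_y=True)
       .fit(X, scores[:, k]) for k in range(scores.shape[1])]

x_new = rng.uniform(size=(1, n_inputs))
pred_scores = np.array([gp.predict(x_new)[0] for gp in gps])
pred_field = pca.inverse_transform(pred_scores.reshape(1, -1))   # back to the grid
print(pred_field.shape)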
APA, Harvard, Vancouver, ISO, and other styles
42

Cheng, Si. "Hierarchical Nearest Neighbor Co-kriging Gaussian Process For Large And Multi-Fidelity Spatial Dataset." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613750570927821.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Xie, Guangrui. "Robust and Data-Efficient Metamodel-Based Approaches for Online Analysis of Time-Dependent Systems." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/98806.

Full text
Abstract:
Metamodeling is regarded as a powerful analysis tool for learning the input-output relationship of a system based on a limited amount of data, collected when experiments with real systems are costly or impractical. As a popular metamodeling method, Gaussian process regression (GPR) has been successfully applied to analyses of various engineering systems. However, GPR-based metamodeling for time-dependent systems (TDSs) is especially challenging for three reasons. First, TDSs require an appropriate account of temporal effects; however, standard GPR cannot address temporal effects easily and satisfactorily. Second, TDSs typically require analytics tools with sufficiently high computational efficiency to support online decision making, but standard GPR may not be adequate for real-time implementation. Lastly, reliable uncertainty quantification is key to successful operational planning of TDSs in the real world, yet research on how to construct adequate error bounds for GPR-based metamodeling is sparse. Inspired by the challenges encountered in GPR-based analyses of two representative stochastic TDSs, i.e., load forecasting in a power system and trajectory prediction for unmanned aerial vehicles (UAVs), this dissertation aims to develop novel modeling, sampling, and statistical analysis techniques for enhancing the computational and statistical efficiency of GPR-based metamodeling to meet the requirements of practical implementations. Furthermore, an in-depth investigation into building uniform error bounds for stochastic kriging is conducted, which sets up a foundation for developing robust GPR-based metamodeling techniques for analyses of TDSs under the impact of strong heteroscedasticity.
Ph.D.
Metamodeling has been regarded as a powerful analysis tool for learning the input-output relationship of an engineering system with a limited amount of experimental data available. As a popular metamodeling method, Gaussian process regression (GPR) has been widely applied to analyses of various engineering systems whose input-output relationships do not depend on time. However, GPR-based metamodeling for time-dependent systems (TDSs), whose input-output relationships depend on time, is especially challenging for three reasons. First, standard GPR cannot properly address temporal effects for TDSs. Second, standard GPR is typically not computationally efficient enough for real-time implementations in TDSs. Lastly, research on how to adequately quantify the uncertainty associated with the performance of GPR-based metamodeling is sparse. To fill this knowledge gap, this dissertation aims to develop novel modeling, sampling, and statistical analysis techniques for enhancing standard GPR to meet the requirements of practical implementations for TDSs. Effective solutions are provided to address the challenges encountered in GPR-based analyses of two representative stochastic TDSs, i.e., load forecasting in a power system and trajectory prediction for unmanned aerial vehicles (UAVs). Furthermore, an in-depth investigation into quantifying the uncertainty associated with the performance of stochastic kriging (a variant of standard GPR) is conducted, which sets up a foundation for developing robust GPR-based metamodeling techniques for analyses of more complex TDSs.
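A rough sketch of the stochastic-kriging flavour of GPR mentioned above (a GP fitted to replication-averaged outputs with a per-design-point noise level) is given below, using scikit-learn's per-point alpha as the nugget. The data, noise model and settings are toy placeholders, not the dissertation's methods or its uniform error bounds.

# Rough stochastic-kriging-style sketch: average replicated noisy outputs per
# design point and pass the variance of each mean as a per-point nugget via
# sklearn's `alpha`. Data and settings are toy placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)
design = np.linspace(0, 1, 15)
n_reps = 30
noise_sd = 0.1 + 0.5 * design                        # strong heteroscedasticity
reps = (np.sin(2 * np.pi * design)[:, None]
        + rng.normal(size=(15, n_reps)) * noise_sd[:, None])

y_bar = reps.mean(axis=1)
var_of_mean = reps.var(axis=1, ddof=1) / n_reps      # intrinsic uncertainty per point

sk = GaussianProcessRegressor(kernel=RBF(0.2), alpha=var_of_mean, normalize_y=True)
sk.fit(design.reshape(-1, 1), y_bar)

x_new = np.linspace(0, 1, 5).reshape(-1, 1)
mean, sd = sk.predict(x_new, return_std=True)
print(np.round(mean, 3), np.round(sd, 3))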
APA, Harvard, Vancouver, ISO, and other styles
44

Cardamone, Salvatore. "An interacting quantum atoms approach to constructing a conformationally dependent biomolecular force field by Gaussian process regression : potential energy surface sampling and validation." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/an-interacting-quantum-atoms-approach-to-constructing-a-conformationally-dependent-biomolecular-force-field-by-gaussian-process-regression-potential-energy-surface-sampling-and-validation(508ed450-9033-4bc9-8522-690d5a7909eb).html.

Full text
Abstract:
The energetics of chemical systems are quantum mechanical in origin and dependent upon the internal molecular conformational degrees of freedom. "Classical force field" strategies are inadequate approximations to these energetics owing to a plethora of simplifications, both conceptual and mathematical. These simplifications have been employed to make the in silico modelling of molecular systems computationally tractable, but are also subject to both qualitative and quantitative errors. In spite of these shortcomings, classical force fields have become entrenched as a cornerstone of computational chemistry. The Quantum Chemical Topological Force Field (QCTFF) has been a central research theme within our group for a number of years, and has been designed to ameliorate the shortcomings of classical force fields. Within its framework, one can undertake a full spatial decomposition of a chemical system into a set of finite atoms. Atomic properties are subsequently obtained by a rigorous quantum mechanical treatment of the resultant atomic domains through the theory of Interacting Quantum Atoms (IQA). Conformational dependence is accounted for in the QCTFF by use of Gaussian Process Regression, a machine learning technique. In so doing, one constructs an analytical function that provides a mapping from a molecular conformation to a set of atomic energetic quantities. One can subsequently conduct dynamics with these energetic quantities. The notion of "conformational sampling" is shown to be of key importance to the proper construction of the QCTFF. Conformational sampling is a key theme in this work, and a subject on which we expatiate. We suggest a novel conformational sampling scheme, and attempt a number of conformer subset selection strategies to construct optimal machine learning models. The QCTFF is then applied to carbohydrates for the first time, and shown to produce results well within the commonly invoked threshold of "chemical accuracy", O(β^{-1}), where β is the thermodynamic beta. Finally, we present a number of methodological developments to aid in both the accuracy and tractability of predicting ab initio vibrational spectroscopies.
APA, Harvard, Vancouver, ISO, and other styles
45

Sjödin, Hällstrand Andreas. "Bilinear Gaussian Radial Basis Function Networks for classification of repeated measurements." Thesis, Linköpings universitet, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170850.

Full text
Abstract:
The Growth Curve Model is a bilinear statistical model which can be used to analyse several groups of repeated measurements. Normally the Growth Curve Model is defined in such a way that the permitted sampling frequency of the repeated measurement is limited by the number of observed individuals in the data set. In this thesis, we examine the possibilities of utilizing highly frequently sampled measurements to increase classification accuracy for real world data. That is, we look at the case where the regular Growth Curve Model is not defined due to the relationship between the sampling frequency and the number of observed individuals. When working with this high frequency data, we develop a new method of basis selection for the regression analysis which yields what we call a Bilinear Gaussian Radial Basis Function Network (BGRBFN), which we then compare to more conventional polynomial and trigonometrical functional bases. Finally, we examine if Tikhonov regularization can be used to further increase the classification accuracy in the high frequency data case. Our findings suggest that the BGRBFN performs better than the conventional methods in both classification accuracy and functional approximability. The results also suggest that both high frequency data and furthermore Tikhonov regularization can be used to increase classification accuracy.
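To illustrate the basis-selection idea only, the sketch below builds a Gaussian radial basis function design matrix over a densely sampled time axis and fits it with Tikhonov (ridge) regularisation. The bilinear Growth Curve structure and the classification step of the thesis are not reproduced; the data, centres and penalty are hypothetical.

# Sketch of the basis idea only: a Gaussian radial basis function design matrix
# over time, fitted with Tikhonov (ridge) regularisation. The bilinear growth-curve
# structure of the thesis is not reproduced; data and centres are hypothetical.
import numpy as np

def gaussian_rbf_design(t, centres, width):
    """Design matrix with one Gaussian bump per centre, evaluated at times t."""
    return np.exp(-((t[:, None] - centres[None, :]) ** 2) / (2 * width**2))

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 200)                        # highly frequent sampling
y = np.sin(4 * np.pi * t) + rng.normal(scale=0.2, size=t.size)

centres = np.linspace(0, 1, 12)
Phi = gaussian_rbf_design(t, centres, width=0.08)

lam = 1e-2                                        # Tikhonov penalty
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
fitted = Phi @ coef
print("training RMSE:", np.sqrt(np.mean((fitted - y) ** 2)))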
APA, Harvard, Vancouver, ISO, and other styles
46

Tong, Xiao Thomas. "Statistical Learning of Some Complex Systems: From Dynamic Systems to Market Microstructure." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10917.

Full text
Abstract:
A complex system is one with many parts, whose behaviors are strongly dependent on each other. There are two interesting questions about complex systems. One is to understand how to recover the true structure of a complex system from noisy data. The other is to understand how the system interacts with its environment. In this thesis, we address these two questions by studying two distinct complex systems: dynamic systems and market microstructure. To address the first question, we focus on some nonlinear dynamic systems. We develop a novel Bayesian statistical method, Gaussian Emulator, to estimate the parameters of dynamic systems from noisy data, when the data are either fully or partially observed. Our method shows that estimation accuracy is substantially improved and computation is faster, compared to the numerical solvers. To address the second question, we focus on the market microstructure of hidden liquidity. We propose some statistical models to explain the hidden liquidity under different market conditions. Our statistical results suggest that hidden liquidity can be reliably predicted given the visible state of the market.
Statistics
APA, Harvard, Vancouver, ISO, and other styles
47

Hernandez, Moreno Andres Felipe. "A metamodeling approach for approximation of multivariate, stochastic and dynamic simulations." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43690.

Full text
Abstract:
This thesis describes the implementation of metamodeling approaches as a solution for approximating multivariate, stochastic and dynamic simulations. In the area of statistics, metamodeling (or "model of a model") refers to the scenario where an empirical model is built based on simulated data. In this thesis, this idea is exploited by using pre-recorded dynamic simulations as a source of simulated dynamic data. Based on these simulated dynamic data, an empirical model is trained to map the dynamic evolution of the system from the current discrete time step to the next discrete time step. Therefore, it is possible to approximate the dynamics of the complex dynamic simulation by iteratively applying the trained empirical model. The rationale for creating such an approximate dynamic representation is that the empirical models / metamodels are much more affordable to compute than the original dynamic simulation, while having an acceptable prediction error. The successful implementation of metamodeling approaches as approximations of complex dynamic simulations requires understanding of the propagation of error during the iterative process. Prediction errors made by the empirical model at earlier times of the iterative process propagate into future predictions of the model. The propagation of error means that the trained empirical model will deviate from the expensive dynamic simulation because of its own errors. Based on this idea, a Gaussian process model is chosen as the metamodeling approach for the approximation of expensive dynamic simulations in this thesis. This empirical model was selected not only for its flexibility and error estimation properties, but also because it can illustrate relevant issues to be considered if other metamodeling approaches were used for this purpose.
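The one-step-ahead metamodeling idea described above can be sketched as follows: train a GP on (current state, next state) pairs taken from a pre-recorded trajectory of a toy "simulator", then iterate the GP to roll the dynamics forward, which also illustrates how one-step errors propagate. The dynamics and settings are hypothetical stand-ins for the expensive simulation discussed in the abstract.

# Sketch of the one-step-ahead metamodel idea: learn x_{t+1} = g(x_t) with a GP
# from recorded trajectories, then iterate the GP to roll the dynamics forward.
# The "simulator" is a toy map; names and settings are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def simulator_step(x):                      # expensive dynamic simulation (toy)
    return 0.95 * x + 0.3 * np.tanh(x)

# Pre-recorded trajectory provides (x_t, x_{t+1}) training pairs.
traj = [np.array([2.0])]
for _ in range(60):
    traj.append(simulator_step(traj[-1]))
traj = np.array(traj).ravel()
X_train, y_train = traj[:-1].reshape(-1, 1), traj[1:]

gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-6), normalize_y=True)
gp.fit(X_train, y_train)

# Iterate the metamodel from a new start; one-step errors propagate forward.
x_meta, x_true = np.array([1.5]), np.array([1.5])
for step in range(20):
    x_meta = gp.predict(x_meta.reshape(1, -1))
    x_true = simulator_step(x_true)
print("metamodel:", x_meta.item(), "simulator:", x_true.item())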
APA, Harvard, Vancouver, ISO, and other styles
48

Zhou, Yifan. "Asset life prediction and maintenance decision-making using a non-linear non-Gaussian state space model." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/41696/1/Yifan_Zhou_Thesis.pdf.

Full text
Abstract:
Estimating and predicting the degradation processes of engineering assets is crucial for reducing cost and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a decent health state. A practical difficulty in condition-based maintenance (CBM) is that degradation indicators extracted from CM data can, in most situations, only partially reveal asset health states. Underestimating this uncertainty in the relationship between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models that describe asset degradation processes largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research also develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model and is under various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies are performed in MATLAB; case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry are also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. The results also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the accelerated life test of the gearbox better than linear and Gaussian state space models. Furthermore, both the simulation studies and the case studies show that the prediction algorithm based on the Gamma-based state space model can accurately identify the mean value and confidence interval of asset remaining useful lives. In addition, the simulation study shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that the proposed maintenance optimisation method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method, by optimising the next maintenance activity and the waiting time until the next maintenance activity simultaneously.
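As a generic illustration of filtering a monotone, Gamma-increment degradation state from a noisy condition-monitoring indicator, the sketch below runs a bootstrap particle filter on simulated data. All model choices and parameters are hypothetical; this is not the thesis's Gamma-based state space model, its Monte Carlo EM estimation, or the POSMDP maintenance optimisation.

# Generic bootstrap particle filter sketch for a monotone degradation state with
# Gamma increments and a noisy condition-monitoring indicator. Illustration only;
# parameters and model are hypothetical, not the thesis's model or estimators.
import numpy as np

rng = np.random.default_rng(8)
T, n_particles = 50, 2000
shape, scale, obs_sd = 0.8, 0.05, 0.1

# Simulate a "true" degradation path and noisy indicator observations.
true_x = np.cumsum(rng.gamma(shape, scale, size=T))
obs = true_x + rng.normal(scale=obs_sd, size=T)

particles = np.zeros(n_particles)
est = []
for t in range(T):
    particles = particles + rng.gamma(shape, scale, size=n_particles)  # propagate
    logw = -0.5 * ((obs[t] - particles) / obs_sd) ** 2                 # weight
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)               # resample
    particles = particles[idx]
    est.append(particles.mean())

print("final true state:", round(true_x[-1], 3), "filtered mean:", round(est[-1], 3))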
APA, Harvard, Vancouver, ISO, and other styles
49

Lu, Min. "A Study of the Calibration Regression Model with Censored Lifetime Medical Cost." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/math_theses/14.

Full text
Abstract:
Medical cost has received increasing interest recently in biostatistics and public health. Statistical analysis and inference of lifetime medical cost have been made challenging by the fact that survival times are censored for some study subjects and their subsequent costs are unknown. Huang (2002) proposed the calibration regression model, a semiparametric regression tool for studying medical cost associated with covariates. In this thesis, an inference procedure is investigated using the empirical likelihood ratio method. The unadjusted and adjusted empirical likelihood confidence regions are constructed for the regression parameters. We compare the proposed empirical likelihood methods with the normal approximation based method. Simulation results show that the proposed empirical likelihood ratio method outperforms the normal approximation based method in terms of coverage probability. In particular, the adjusted empirical likelihood performs best and overcomes the under-coverage problem.
APA, Harvard, Vancouver, ISO, and other styles
50

Han, Gang. "Modeling the output from computer experiments having quantitative and qualitative input variables and its applications." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1228326460.

Full text
APA, Harvard, Vancouver, ISO, and other styles
