Dissertations on the topic "Gaussian Regression Processes"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for research on the topic "Gaussian Regression Processes".
Next to every entry in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.
Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.
Beck, Daniel Emilio. "Gaussian processes for text regression." Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/17619/.
Gibbs, M. N. "Bayesian Gaussian processes for regression and classification." Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.599379.
Wågberg, Johan, and Viklund Emanuel Walldén. "Continuous Occupancy Mapping Using Gaussian Processes." Thesis, Linköpings universitet, Reglerteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81464.
Davies, Alexander James. "Effective implementation of Gaussian process regression for machine learning." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708909.
余瑞心 and Sui-sum Amy Yu. "Application of Markov regression models in non-Gaussian time series analysis." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31976840.
Rasmussen, Carl Edward. "Evaluation of Gaussian processes and other methods for non-linear regression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq28300.pdf.
Sun, Furong. "Some Advances in Local Approximate Gaussian Processes." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/97245.
Повний текст джерелаDoctor of Philosophy
In many real-life settings, we want to understand a physical relationship or phenomenon, but due to limited resources and/or ethical constraints it is impossible to perform physical experiments to collect data. We therefore have to rely on computer experiments, whose evaluation usually requires expensive simulation involving complex mathematical equations. To reduce the computational effort, we look for a relatively cheap alternative, called an emulator, to serve as a surrogate model. The Gaussian process (GP) is such an emulator, and has been very popular due to its excellent out-of-sample predictive performance and appropriate uncertainty quantification. However, due to its computational complexity, full GP modeling is not suitable for "big data" settings. Gramacy and Apley (2015) proposed the local approximate GP (laGP), the core idea of which is to use a subset of the data for inference and prediction at unobserved inputs. This dissertation provides several extensions of laGP, applied to several real-life "big data" settings. The first application, detailed in Chapter 3, is the emulation of satellite drag from large simulation experiments. We devise a way to capture global input information comprehensively using a small subset of the data, and then perform local prediction. This method, called "multilevel GP modeling", is also deployed to synthesize field measurements and computational outputs of solar irradiance across the continental United States, illustrated in Chapter 4, and to emulate daytime land surface temperatures estimated by satellites, discussed in Chapter 5.
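The nearest-neighbour idea behind laGP can be illustrated with a short NumPy sketch. This is a toy stand-in, not the laGP package: the squared-exponential kernel, lengthscale, jitter, and test function are all invented for the demo.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_predict(Xtr, ytr, Xte, ls=1.0, jitter=1e-6):
    """Standard GP posterior mean and variance under an RBF kernel."""
    K = rbf(Xtr, Xtr, ls) + jitter * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr, ls)
    mean = Ks @ np.linalg.solve(K, ytr)
    var = 1.0 + jitter - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.maximum(var, 0.0)

def lagp_predict(Xtr, ytr, x, k=40, ls=1.0):
    """Local approximation: condition only on the k nearest neighbours of x."""
    idx = np.argsort(((Xtr - x) ** 2).sum(-1))[:k]
    return gp_predict(Xtr[idx], ytr[idx], x[None, :], ls)

# toy "big data" demo: 2000 runs of a smooth response surface
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])
m, v = lagp_predict(X, y, np.array([0.5, -0.5]))
print(m[0], v[0])  # mean should be close to sin(0.5)*cos(-0.5)
```

Instead of one O(n^3) factorization of a 2000-point covariance matrix, each prediction only factorizes a 40-point one; laGP additionally selects the subset by more refined criteria than plain distance.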
Zertuche, Federico. "Utilisation de simulateurs multi-fidélité pour les études d'incertitudes dans les codes de calcul." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM069/document.
Computer simulations are a very important tool used by applied mathematicians and engineers to model the behavior of a system. They have become increasingly precise but also more complicated, so much so that they are very slow to produce an output and thus difficult to sample, and many aspects of these simulations are not well understood. For example, in many cases they depend on parameters whose value is uncertain. A metamodel is a reconstruction of the simulation. It requires much less time to produce an output that is close to what the simulation would give, and by using it some aspects of the original simulation can be studied. It is built from very few samples, and its purpose is to replace the simulation. This thesis is concerned with the construction of a metamodel in a particular context called multi-fidelity, in which the metamodel is constructed using data from the target simulation along with related approximate samples. These approximate samples can come from a degraded version of the simulation, an old version that has been studied extensively, or another simulation in which part of the description is simplified. By learning the difference between the samples, it is possible to incorporate the information of the approximate data, which may lead to an enhanced metamodel. In this manuscript two such approaches are studied: one based on Gaussian process modeling and another based on a coarse-to-fine wavelet decomposition. The first method shows how, by estimating the relationship between two data sets, it is possible to incorporate data that would otherwise be useless. The second method proposes an adaptive procedure for systematically adding data to enhance the metamodel. The aim of this work is to improve our understanding of how to incorporate approximate data to enhance a metamodel. Working with a multi-fidelity metamodel helps us to understand in detail the data that feed it. In the end, a global picture of the elements that compose it is formed: the relationships and the differences between all the data sets become clearer.
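The "learn the difference between the samples" idea can be sketched in a few lines, assuming a simple additive correction between a cheap and an expensive code. The toy functions and hyperparameters here are invented for the illustration; the thesis's Gaussian-process and wavelet treatments are more elaborate.

```python
import numpy as np

def gp_mean(Xtr, ytr, Xte, ls, jitter=1e-6):
    """Posterior mean of an RBF-kernel GP fitted to (Xtr, ytr)."""
    d2 = lambda A, B: (A[:, None] - B[None, :]) ** 2
    K = np.exp(-0.5 * d2(Xtr, Xtr) / ls**2) + jitter * np.eye(len(Xtr))
    return np.exp(-0.5 * d2(Xte, Xtr) / ls**2) @ np.linalg.solve(K, ytr)

cheap = lambda x: np.sin(8 * x)            # approximate, fast simulator
dear = lambda x: np.sin(8 * x) + 0.3 * x   # target simulator: cheap + smooth discrepancy

X_lo = np.linspace(0, 1, 40)               # many cheap runs
X_hi = np.linspace(0, 1, 6)                # few expensive runs
Xt = np.linspace(0, 1, 101)

f_lo = gp_mean(X_lo, cheap(X_lo), Xt, ls=0.15)                  # emulator of the cheap code
resid = dear(X_hi) - gp_mean(X_lo, cheap(X_lo), X_hi, ls=0.15)  # observed discrepancy
f_hi = f_lo + gp_mean(X_hi, resid, Xt, ls=0.5)                  # corrected emulator

err = np.max(np.abs(f_hi - dear(Xt)))
print(err)  # small: the discrepancy GP only has to learn the smooth 0.3*x term
```

Six expensive runs alone could not resolve sin(8x); borrowing its shape from the cheap code leaves only the slowly varying difference to be learned.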
Wikland, Love. "Early-Stage Prediction of Lithium-Ion Battery Cycle Life Using Gaussian Process Regression." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273619.
Data-driven prediction of battery health has received increasing attention in recent years, in both academia and industry. Accurate early-stage predictions of battery performance could create new opportunities for production and use. Using data from only the first 100 cycles, in a data set of 124 cells whose lifetimes range from 150 to 2300 cycles, this thesis combines parametric linear models with non-parametric Gaussian process regression to achieve lifetime predictions with an average error of 8.8%. The study is a relevant contribution to current research, since the combination of methods used has not previously been exploited for battery lifetime regression with a high-dimensional feature space. The study and the obtained results show that Gaussian process regression can contribute to future data-driven implementations of battery health prediction.
Persson, Lejon Ludvig, and Fredrik Berntsson. "Regression Analysis on NBA Players Background and Performance using Gaussian Processes : Can NBA-drafts be improved by taking socioeconomic background into consideration?" Thesis, KTH, Skolan för teknikvetenskap (SCI), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-153767.
Liu, Xuyuan. "Statistical validation and calibration of computer models." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39478.
Linton, Thomas. "Forecasting hourly electricity consumption for sets of households using machine learning algorithms." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186592.
To address inefficiency, waste, and the negative consequences of electricity production, companies and government entities want to see behavioural change among residential consumers. To create behavioural change, consumers need better feedback about their electricity consumption. The current feedback of a monthly or quarterly bill gives the consumer almost no useful information about how their behaviours relate to their consumption. Smart meters are now ubiquitous in the developed world and can provide a wealth of information about residential consumption, but this data is used mainly as a basis for billing and not as a tool to help consumers reduce their consumption. One component required to deliver innovative feedback mechanisms is the ability to forecast electricity consumption at the household scale. The work presented in this thesis is an evaluation of the accuracy of a selection of kernel-based machine learning methods for forecasting the aggregate consumption of different-sized sets of households. The work shows that k-Nearest Neighbour Regression and Gaussian Process Regression are the most accurate methods within the constraints of the problem. In addition to accuracy, the advantages, disadvantages, and performance of each machine learning method are evaluated.
Ashrafi, Parivash. "Predicting the absorption rate of chemicals through mammalian skin using machine learning algorithms." Thesis, University of Hertfordshire, 2016. http://hdl.handle.net/2299/17310.
Kortesalmi, Linus. "Gaussian Process Regression-based GPS Variance Estimation and Trajectory Forecasting." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153126.
Coufal, Martin. "Hyper-optimalizace neuronových sítí založená na Gaussovských procesech." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417223.
Le Gratiet, Loic. "Multi-fidelity Gaussian process regression for computer experiments." Phd thesis, Université Paris-Diderot - Paris VII, 2013. http://tel.archives-ouvertes.fr/tel-00866770.
Raulli, Vittoria. "Processi gaussiani nella regressione." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18256/.
Повний текст джерелаMolada, Tebar Adolfo. "Colorimetric and spectral analysis of rock art by means of the characterization of digital sensors." Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/160386.
Cultural heritage documentation and preservation is an arduous and delicate task in which color plays a fundamental role. The correct determination of color provides vital information on a descriptive, technical, and quantitative level. Classical color documentation methods in archaeology were usually restricted to strictly subjective procedures. However, this methodology has practical and technical limitations, affecting the results obtained in the determination of color. Nowadays, it is common to support classical methods with geomatics techniques, such as photogrammetry or laser scanning, together with digital image processing. Although digital images allow color to be captured quickly, easily, and in a non-invasive way, the RGB data provided by the camera does not itself have a rigorous colorimetric sense. Therefore, a rigorous transformation process is required to obtain reliable color data from digital images. This thesis proposes a novel technical solution, in which the integration of spectrophotometric and colorimetric analysis is intended as a complement to photogrammetric techniques, allowing an improvement in color identification and representation of pigments with maximum reliability in 3D surveys, models, and reconstructions. The proposed methodology is based on the colorimetric characterization of digital sensors, which is a novel application in cave paintings. The characterization aims to obtain the transformation equations between the device-dependent color data recorded by the camera and independent, physically based color spaces, such as those established by the Commission Internationale de l'Éclairage (CIE). The rigorous processing of color and spectral data requires software packages with specific colorimetric functionalities. Although different commercial software options exist, they do not integrate digital image processing and colorimetric computations together, and, more importantly, they do not allow the camera characterization to be carried out. Therefore, a key aspect of this thesis is our in-house pyColourimetry software, developed and tested following the recommendations published by the CIE. pyColourimetry is open-source code, free of commercial ties; it supports the treatment of colorimetric and spectral data and digital image processing, and gives the user full control of the characterization process and the management of the obtained data. This study also presents an analysis of the main factors affecting the characterization, such as the camera's built-in sensor, the camera parameters, the illuminant, the regression model, and the data set used for model training. For computing the transformation equations, the literature recommends polynomial regression models, so polynomial models are taken as a starting point in this thesis. Additionally, a regression model based on Gaussian processes is applied and compared with the results obtained by means of polynomials. We also report a new working scheme, called P-ASK and based on the K-means classification algorithm, which allows the automatic selection of color samples adapted to the chromatic range of the scene. The results achieved in this thesis show that the proposed framework for camera characterization is highly applicable to documentation and conservation tasks in cultural heritage generally, and particularly in rock art painting. It is a low-cost, non-invasive methodology that allows colorimetric recording of complete image scenes. Once characterized, a conventional digital camera can be used for rigorous color determination, simulating a colorimeter. This makes it possible to work in a physical, device-independent color space, comparable with data obtained from other cameras that have also been characterized.
Thanks to the Universitat Politècnica de València for the FPI scholarship
Molada Tebar, A. (2020). Colorimetric and spectral analysis of rock art by means of the characterization of digital sensors [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/160386
Nguyen, Huong. "Near-optimal designs for Gaussian Process regression models." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1533983585774383.
Urry, Matthew. "Learning curves for Gaussian process regression on random graphs." Thesis, King's College London (University of London), 2013. https://kclpure.kcl.ac.uk/portal/en/theses/learning-curves-for-gaussian-process-regression-on-random-graphs(c1f5f395-0426-436c-989c-d0ade913423e).html.
Shah, Siddharth S. "Robust Heart Rate Variability Analysis using Gaussian Process Regression." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1293737259.
Marque-Pucheu, Sophie. "Gaussian process regression of two nested computer codes." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC155/document.
Three types of observations of the system exist: those of the chained code, those of the first code only, and those of the second code only. The surrogate model has to be accurate on the most likely regions of the input domain of the nested code. In this work, the surrogate models are constructed using the Universal Kriging framework, with a Bayesian approach. First, the case when there is no information about the intermediary variable (the output of the first code) is addressed. An innovative parametrization of the mean function of the Gaussian process modeling the nested code is proposed, based on the coupling of two polynomials. Then, the case with intermediary observations is addressed. A stochastic predictor based on the coupling of the predictors associated with the two codes is proposed, together with methods for quickly computing its mean and variance. Finally, the methods obtained for codes with scalar outputs are extended to codes with high-dimensional vectorial outputs. We propose an efficient dimension reduction method for the high-dimensional vectorial input of the second code, in order to facilitate the Gaussian process regression of this code. All the proposed methods are applied to numerical examples.
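The chaining idea can be illustrated with a naive plug-in coupling of two independent GP emulators. This is a sketch only: it propagates just the posterior means, whereas the thesis builds a genuinely stochastic predictor with fast mean and variance formulas; the two codes and the lengthscales below are invented.

```python
import numpy as np

def gp(Xtr, ytr, ls, jitter=1e-6):
    """Return the posterior-mean function of an RBF-kernel GP."""
    d2 = lambda A, B: (A[:, None] - B[None, :]) ** 2
    K = np.exp(-0.5 * d2(Xtr, Xtr) / ls**2) + jitter * np.eye(len(Xtr))
    alpha = np.linalg.solve(K, ytr)
    return lambda Xte: np.exp(-0.5 * d2(Xte, Xtr) / ls**2) @ alpha

code1 = lambda x: np.sin(3 * x)       # first code: input -> intermediary variable
code2 = lambda z: z**2 + 0.5 * z      # second code: intermediary -> final output

X1 = np.linspace(0, 2, 25)            # observations of the first code only
Z2 = np.linspace(-1, 1, 25)           # observations of the second code only
g1 = gp(X1, code1(X1), ls=0.3)
g2 = gp(Z2, code2(Z2), ls=0.3)

x = np.linspace(0, 2, 50)
plug_in = g2(g1(x))                   # chain the two posterior means
err = np.max(np.abs(plug_in - code2(code1(x))))
print(err)
```

The plug-in mean ignores the uncertainty of the intermediary variable; accounting for it is precisely what makes the nested problem interesting.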
Alvarez, Mauricio A. "Convolved Gaussian process priors for multivariate regression with applications to dynamical systems." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/convolved-gaussian-process-priors-for-multivariate-regression-with-applications-to-dynamical-systems(0fe42df3-6dce-48ec-a74d-a6ecaf249d74).html.
Kapat, Prasenjit. "Role of Majorization in Learning the Kernel within a Gaussian Process Regression Framework." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1316521301.
Maatouk, Hassan. "Correspondance entre régression par processus Gaussien et splines d'interpolation sous contraintes linéaires de type inégalité. Théorie et applications." Thesis, Saint-Etienne, EMSE, 2015. http://www.theses.fr/2015EMSE0791/document.
This thesis is dedicated to interpolation problems when the numerical function is known to satisfy properties such as positivity, monotonicity, or convexity. Two methods of interpolation are studied. The first one is deterministic and is based on convex optimization in a Reproducing Kernel Hilbert Space (RKHS). The second one is a Bayesian approach based on Gaussian process regression (GPR), or kriging. By using a finite linear functional decomposition, we propose to approximate the original Gaussian process by a finite-dimensional Gaussian process such that conditional simulations satisfy all the inequality constraints. As a consequence, GPR is equivalent to simulating a Gaussian vector truncated to a convex set. The mode, or Maximum A Posteriori, is defined as a Bayesian estimator, and prediction intervals are quantified by simulation. Convergence of the method is proved, and the correspondence between the two methods is established. This can be seen as an extension of the correspondence established by [Kimeldorf and Wahba, 1971] between Bayesian estimation on stochastic processes and smoothing by splines. Finally, a real application in insurance and finance is given, estimating a term-structure curve and default probabilities.
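The link between conditional simulation and inequality constraints can be illustrated by brute force: draw unconstrained GP posterior paths on a grid and keep only those that are monotone. Rejection sampling here is a crude stand-in for the truncated-Gaussian simulation used in the thesis; the data, kernel, and tolerances are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
Xtr = np.array([0.0, 0.3, 0.7, 1.0])
ytr = np.array([0.0, 0.2, 0.8, 1.0])          # observations of a monotone function
Xte = np.linspace(0.0, 1.0, 21)

def k(A, B, ls=0.6):
    """Squared-exponential kernel."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)

K = k(Xtr, Xtr) + 1e-8 * np.eye(4)
Ks = k(Xte, Xtr)
mean = Ks @ np.linalg.solve(K, ytr)
cov = k(Xte, Xte) - Ks @ np.linalg.solve(K, Ks.T)

# draw unconstrained posterior paths, then keep only the (numerically) monotone ones
L = np.linalg.cholesky(cov + 1e-6 * np.eye(21))
paths = mean + (L @ rng.standard_normal((21, 4000))).T
mono = paths[np.all(np.diff(paths, axis=1) > -1e-3, axis=1)]
print(mono.shape[0], mono.mean(axis=0)[10])   # accepted paths, constrained mean at x = 0.5
```

The mean of the accepted paths is a Monte Carlo estimate of the constrained predictor; dedicated truncated-normal samplers do the same job without discarding draws.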
Barrett, James Edward. "Gaussian process regression models for the analysis of survival data with competing risks, interval censoring and high dimensionality." Thesis, King's College London (University of London), 2015. http://kclpure.kcl.ac.uk/portal/en/theses/gaussian-process-regression-models-for-the-analysis-of-survival-data-with-competing-risks-interval-censoring-and-high-dimensionality(fe3440e1-9766-4fc3-9d23-fe4af89483b5).html.
Liang, Ke. "Oculométrie Numérique Economique : modèle d'apparence et apprentissage par variétés." Thesis, Paris, EPHE, 2015. http://www.theses.fr/2015EPHE3020/document.
A gaze tracker offers a powerful tool for diverse fields of study, in particular eye movement analysis. In this thesis, we present a new appearance-based real-time gaze tracking system using only a remote webcam, without infra-red illumination. Our proposed gaze tracking model has four components: eye localization, eye feature extraction, eye manifold learning, and gaze estimation. Our research focuses on developing methods for each component of the system. First, we propose a hybrid method to localize the eye region in real time in the frames captured by the webcam: the eye is detected by an Active Shape Model and EyeMap in the first frame in which it occurs, and then tracked by a stochastic method, the particle filter. Second, we employ Center-Symmetric Local Binary Patterns on the detected eye region, divided into blocks, in order to extract the eye features. Third, we introduce a manifold learning technique, Laplacian Eigenmaps, to learn different eye movements from a set of collected eye images. This unsupervised learning helps to construct an automatic and correct calibration phase. Finally, for gaze estimation, we propose two models: a semi-supervised Gaussian process regression model to estimate the coordinates of the eye direction, and a prediction model based on spectral clustering to classify different eye movements. Our system, with a 5-point calibration, not only reduces the run-time cost but also estimates the gaze accurately. Our experimental results show that our gaze tracking model imposes fewer constraints on the hardware settings and can be applied efficiently in different real-time applications.
Ploé, Patrick. "Surrogate-based optimization of hydrofoil shapes using RANS simulations." Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0012/document.
This thesis presents a practical hydrodynamic optimization framework for hydrofoil shape design. Automated simulation-based optimization of hydrofoils is a challenging process: it may involve conflicting optimization objectives, and it imposes a trade-off between the cost of numerical simulations and the limited budgets available for ship design. The optimization framework is based on sequential sampling and surrogate modeling. Gaussian process regression (GPR) is used to build a predictive model from data issued from fluid simulations of selected hydrofoil geometries. The GPR model is then combined with other criteria into an acquisition function that is evaluated over the design space to define new query points, which are added to the data set in order to improve the model. A custom acquisition function is developed, based on GPR variance and cross-validation of the data. A hydrofoil geometric modeler is also developed to automatically create the hydrofoil shapes from the parameters determined by the optimizer. To complete the optimization loop, FINE/Marine, a RANS flow solver, is embedded into the framework to perform the fluid simulations. Optimization capabilities are tested on analytical test cases. The results show that the custom function is more robust than other existing acquisition functions when tested on difficult functions. The entire optimization framework is then tested on 2D hydrofoil sections and on 3D hydrofoil optimization cases with a free surface. In both cases the optimization process performs well, resulting in optimized hydrofoil shapes and confirming the results obtained from the analytical test cases. However, the optimum is shown to be sensitive to operating conditions.
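A minimal version of such a sequential sampling loop, with a lower-confidence-bound acquisition standing in for the custom variance/cross-validation criterion of the thesis (the objective function and all settings below are invented for the demo):

```python
import numpy as np

def gp_post(Xtr, ytr, Xte, ls=0.2, noise=1e-5):
    """RBF-kernel GP posterior mean and variance."""
    d2 = lambda A, B: (A[:, None] - B[None, :]) ** 2
    K = np.exp(-0.5 * d2(Xtr, Xtr) / ls**2) + noise * np.eye(len(Xtr))
    Ks = np.exp(-0.5 * d2(Xte, Xtr) / ls**2)
    m = Ks @ np.linalg.solve(K, ytr)
    v = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return m, np.maximum(v, 0.0)

f = lambda x: (x - 0.65) ** 2 + 0.1 * np.sin(10 * x)  # stand-in for an expensive simulation
grid = np.linspace(0, 1, 201)
X = np.array([0.1, 0.5, 0.9])                         # initial design
for _ in range(10):                                   # sequential sampling loop
    m, v = gp_post(X, f(X), grid)
    acq = m - 2.0 * np.sqrt(v)                        # lower confidence bound acquisition
    X = np.append(X, grid[np.argmin(acq)])            # run the most promising "simulation"
best = X[np.argmin(f(X))]
print(best)
```

Each iteration spends one simulation where the surrogate either predicts a low value or is very uncertain, which is exactly the exploitation/exploration trade-off an acquisition function encodes.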
Dubourg, Vincent. "Méta-modèles adaptatifs pour l'analyse de fiabilité et l'optimisation sous contrainte fiabiliste." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00697026.
Xie, Guangrui. "Robust and Data-Efficient Metamodel-Based Approaches for Online Analysis of Time-Dependent Systems." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/98806.
Ph.D.
Metamodeling has been regarded as a powerful analysis tool to learn the input-output relationship of an engineering system with a limited amount of experimental data available. As a popular metamodeling method, Gaussian process regression (GPR) has been widely applied to analyses of various engineering systems whose input-output relationships do not depend on time. However, GPR-based metamodeling for time-dependent systems (TDSs), whose input-output relationships depend on time, is especially challenging due to three reasons. First, standard GPR cannot properly address temporal effects for TDSs. Second, standard GPR is typically not computationally efficient enough for real-time implementations in TDSs. Lastly, research on how to adequately quantify the uncertainty associated with the performance of GPR-based metamodeling is sparse. To fill this knowledge gap, this dissertation aims to develop novel modeling, sampling, and statistical analysis techniques for enhancing standard GPR to meet the requirements of practical implementations for TDSs. Effective solutions are provided to address the challenges encountered in GPR-based analyses of two representative stochastic TDSs, i.e., load forecasting in a power system and trajectory prediction for unmanned aerial vehicles (UAVs). Furthermore, an in-depth investigation on quantifying the uncertainty associated with the performance of stochastic kriging (a variant of standard GPR) is conducted, which sets up a foundation for developing robust GPR-based metamodeling techniques for analyses of more complex TDSs.
Vives, Maria Carola Alfaro. "Modelo de Gauss-Markov de regressão : adequação de normalidade e inferencia na escala original, apos transformação." [s.n.], 1994. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306852.
Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Ciência da Computação
Abstract: not provided.
Master in Statistics
Erich, Roger Alan. "Regression Modeling of Time to Event Data Using the Ornstein-Uhlenbeck Process." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1342796812.
De Lozzo, Matthias. "Modèles de substitution spatio-temporels et multifidélité : Application à l'ingénierie thermique." Thesis, Toulouse, INSA, 2013. http://www.theses.fr/2013ISAT0027/document.
This PhD thesis deals with the construction of surrogate models in transient and steady states in the context of thermal simulation, with few observations and many outputs. First, we design a robust construction of a recurrent multilayer perceptron so as to approximate a spatio-temporal dynamic. We use an average of neural networks resulting from a cross-validation procedure, whose associated data splitting allows the parameters of these models to be adjusted on a test set without any loss of information. Moreover, the construction of this perceptron can be distributed across its outputs. This construction is applied to modelling the temporal evolution of the temperature at different points of an aeronautical equipment. Then, we propose a mixture of Gaussian process models in a multifidelity framework, where a high-fidelity observation model is completed by many observation models of lower, non-comparable fidelities. Particular attention is paid to the specification of the trends and adjustment coefficients present in these models. Different kriging and co-kriging models are combined according to a partition or a weighted aggregation based on a robustness measure associated with the most reliable design points. This approach is used to model the temperature at different points of the equipment in steady state. Finally, we propose a penalized criterion for the problem of heteroscedastic regression. This tool is built for projection estimators and applied with the Haar wavelet. We also give numerical results for different noise specifications and possible dependencies in the observations.
Tong, Xiao Thomas. "Statistical Learning of Some Complex Systems: From Dynamic Systems to Market Microstructure." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10917.
Full text source
Statistics
Cardamone, Salvatore. "An interacting quantum atoms approach to constructing a conformationally dependent biomolecular force field by Gaussian process regression : potential energy surface sampling and validation." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/an-interacting-quantum-atoms-approach-to-constructing-a-conformationally-dependent-biomolecular-force-field-by-gaussian-process-regression-potential-energy-surface-sampling-and-validation(508ed450-9033-4bc9-8522-690d5a7909eb).html.
Full text source
Froicu, Dragos Vasile. "Modeling and learning-based calibration of residual current devices." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Find the full text source
Turner, Jacob E. "Improving the Sensitivity of a Pulsar Timing Array: Correcting for Interstellar Scattering Delays." Oberlin College Honors Theses / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1495573098864359.
Full text source
Edwards, Adam Michael. "Precision Aggregated Local Models." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/102125.
Full text source
Doctor of Philosophy
Occasionally, when describing the relationship between two variables, it may be helpful to use a so-called "non-parametric" regression that is agnostic to the function that connects them. Gaussian Processes (GPs) are a popular method of non-parametric regression, valued for their relative flexibility and interpretability, but they have the unfortunate drawback of being computationally infeasible for large data sets. Past work on solving the scaling issues for GPs has focused on "divide and conquer" schemes that spread the data across multiple smaller GP models. While these models make GP methods much more accessible for large data sets, they do so at the expense of either local predictive accuracy or global surface continuity. Precision Aggregated Local Models (PALM) is a novel divide-and-conquer method for GP models that is scalable for large data while maintaining local accuracy and a smooth global model. I demonstrate that PALM can be built quickly and performs well predictively compared to other state-of-the-art methods. This document also provides a sequential algorithm for selecting the location of each local model, and variations on the basic PALM methodology.
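The scaling drawback this abstract refers to comes from the dense covariance solve in standard GP regression, which costs O(n³) in the number of training points. A minimal numpy sketch of that baseline (not the PALM implementation itself; the squared-exponential kernel and its hyperparameters here are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """Standard GP posterior mean; the O(n^3) solve is the scaling bottleneck."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)

# toy data: noisy sine
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 50)
y = np.sin(X) + 0.05 * rng.standard_normal(50)
X_new = np.array([np.pi / 2])
print(gp_predict(X, y, X_new))  # posterior mean near sin(pi/2) = 1
```

Divide-and-conquer schemes such as PALM replace the single large solve above with many small ones over local subsets of the data.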
Ptáček, Martin. "Spatial Function Estimation with Uncertain Sensor Locations." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-449288.
Full text source
Xu, Li. "Statistical Methods for Variability Management in High-Performance Computing." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104184.
Full text source
Doctor of Philosophy
This dissertation focuses on three projects that are all related to statistical methods for performance variability management in high-performance computing (HPC). HPC systems are computer systems that achieve high performance by aggregating a large number of computing units. The performance of an HPC system is measured by the throughput of a benchmark called the IOzone Filesystem Benchmark. The performance variability is the variation among throughputs when the system configuration is fixed. Variability management involves studying the relationship between performance variability and the system configuration. In Chapter 2, we use several existing prediction models to predict the standard deviation of throughputs given different system configurations and compare the accuracy of the predictions. We also conduct HPC system optimization using the chosen prediction model as the objective function. In Chapter 3, we use a mixture model to determine the number of modes in the distribution of throughput under different system configurations. In addition, we develop a model to determine the number of additional runs for future benchmark experiments. In Chapter 4, we develop a statistical model that can predict the throughput distributions given the system configurations. We also compare the prediction of summary statistics of the throughput distributions with existing prediction models.
Kamrath, Matthew. "Extending standard outdoor noise propagation models to complex geometries." Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA1038/document.
Full text source
Noise engineering methods (e.g. ISO 9613-2 or CNOSSOS-EU) efficiently approximate sound levels from roads, railways, and industrial sources in cities. However, engineering methods are limited to only simple box-shaped geometries. This dissertation develops and validates a hybrid method to extend the engineering methods to more complicated geometries by introducing an extra attenuation term that represents the influence of a real object compared to a simplified object. Calculating the extra attenuation term requires reference calculations to quantify the difference between the complex and simplified objects. Since performing a reference computation for each path is too computationally expensive, the extra attenuation term is linearly interpolated from a data table containing the corrections for many source and receiver positions and frequencies. The 2.5D boundary element method produces the levels for the real complex geometry and a simplified geometry, and subtracting these levels yields the corrections in the table. This dissertation validates this hybrid method for a T-barrier with hard ground, soft ground, and buildings. All three cases demonstrate that the hybrid method is more accurate than standard engineering methods for complex cases.
Guss, Herman, and Linus Rustas. "Applying Machine Learning Algorithms for Anomaly Detection in Electricity Data : Improving the Energy Efficiency of Residential Buildings." Thesis, Uppsala universitet, Byggteknik och byggd miljö, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-415507.
Full text source
Balmand, Samuel. "Quelques contributions à l'estimation de grandes matrices de précision." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1024/document.
Full text source
Under the Gaussian assumption, the relationship between conditional independence and sparsity justifies the construction of estimators of the inverse of the covariance matrix, also called the precision matrix, from regularized approaches. This thesis, originally motivated by the problem of image classification, aims at developing a method to estimate the precision matrix in high dimension, that is, when the sample size n is small compared to the dimension p of the model. Our approach relies on the connection between the precision matrix and the linear regression model. It consists of estimating the precision matrix in two steps. The off-diagonal elements are first estimated by solving p minimization problems of the ℓ1-penalized square-root least-squares type. The diagonal entries are then obtained from the result of the previous step, by residual analysis or likelihood maximization. These various estimators of the diagonal entries are compared in terms of estimation risk. Moreover, we propose a new estimator designed to account for possible contamination of the data by outliers, thanks to the addition of an ℓ2/ℓ1 mixed-norm regularization term. The nonasymptotic analysis of the consistency of our estimator points out the relevance of our method.
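The first step described in this abstract, recovering the off-diagonal support of the precision matrix through one penalized regression per variable, can be illustrated with a toy sketch. It uses plain Lasso in place of the square-root least-squares estimator the thesis actually analyses, and the penalty level and OR symmetrisation rule are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighbourhood_selection(X, alpha=0.2):
    """Estimate the off-diagonal support of the precision matrix by one
    l1-penalised regression of each variable on all the others
    (Meinshausen-Buhlmann style neighbourhood selection)."""
    n, p = X.shape
    support = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        support[j, others] = coef != 0
    return support | support.T  # symmetrise with an OR rule

# toy example: x1 depends on x0, while x2 is independent of both
rng = np.random.default_rng(1)
x0 = rng.standard_normal(300)
x1 = x0 + 0.3 * rng.standard_normal(300)
x2 = rng.standard_normal(300)
S = neighbourhood_selection(np.column_stack([x0, x1, x2]))
```

The returned boolean matrix flags the estimated conditional dependencies; the diagonal entries are handled separately, as in the two-step scheme above.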
Park, Jangho. "Efficient Global Optimization of Multidisciplinary System using Variable Fidelity Analysis and Dynamic Sampling Method." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91911.
Full text source
Doctor of Philosophy
In recent years, as the cost of aircraft design is growing rapidly and the aviation industry is interested in saving design time and cost, an accurate design result during the early design stages is particularly important to reduce the overall life cycle cost. The purpose of this work is to reduce the design cost at the early design stage while achieving design accuracy as high as that of the detailed design. A method of efficient global optimization (EGO) with variable-fidelity analysis and multidisciplinary design is proposed. Using variable-fidelity analysis for the function evaluation, high-fidelity function evaluations can be replaced by low-fidelity analyses of equivalent accuracy, which leads to considerable cost reduction. As the aircraft system has sub-disciplines coupled by multiple physics, including aerodynamics, structures, and thermodynamics, the accuracy of an individual discipline affects that of all the others, and thus the design accuracy during the early design stages. Four distinctive design methods are developed and implemented into the standard Efficient Global Optimization (EGO) framework: 1) variable-fidelity analysis based on error approximation and calibration of low-fidelity samples, 2) dynamic sampling criteria for both filtering and infilling samples, 3) a dynamic fidelity indicator (DFI) for selecting the analysis fidelity of infilled samples, and 4) a multi-response Kriging model with iterative Maximum Likelihood Estimation (iMLE). The methods are validated with analytic functions, and an improvement in cost efficiency through the overall design process is observed, while maintaining the design accuracy, by comparison with existing design methods. For practical applications, the methods are applied to the design optimization of an airfoil and a complete aircraft configuration, respectively. The design results are compared with those of existing methods, and it is found that the method yields design results of accuracy equivalent to or higher than a design based on high-fidelity analysis alone, at a cost reduced by orders of magnitude.
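At the core of the standard EGO framework this abstract builds on is the expected-improvement criterion used to choose infill samples from a Kriging (GP) posterior. A stdlib-only sketch of that closed-form acquisition for a Gaussian posterior (this is the textbook criterion, not the dissertation's variable-fidelity extension):

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, f_min):
    """Expected improvement for minimisation:
    EI = (f_min - mu) * Phi(z) + sigma * phi(z), with z = (f_min - mu) / sigma,
    where Phi and phi are the standard normal CDF and PDF."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)  # no predictive uncertainty left
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)
    return (f_min - mu) * Phi + sigma * phi
```

EGO evaluates this acquisition over candidate points and samples where it is largest, balancing a low predicted mean against high predictive uncertainty.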
Chu, Shuyu. "Change Detection and Analysis of Data with Heterogeneous Structures." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78613.
Full text source
Ph. D.
Hassani, Mujtaba. "CONSTRUCTION EQUIPMENT FUEL CONSUMPTION DURING IDLING : Characterization using multivariate data analysis at Volvo CE." Thesis, Mälardalens högskola, Akademin för ekonomi, samhälle och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-49007.
Full text source
Sebastian, Maria Treesa. "Modelling Bitcell Behaviour." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166218.
Full text source
Mosallam, Ahmed. "Remaining useful life estimation of critical components based on Bayesian Approaches." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2069/document.
Full text source
Constructing prognostics models relies upon understanding the degradation process of the monitored critical components to correctly estimate the remaining useful life (RUL). Traditionally, a degradation process is represented in the form of physical or expert models. Such models require extensive experimentation and verification that are not always feasible in practice. Another approach, which builds up knowledge about the system degradation over time from component sensor data, is known as data-driven. Data-driven models require that sufficient historical data have been collected. In this work, a two-phase data-driven method for RUL prediction is presented. In the offline phase, the proposed method builds on finding variables that contain information about the degradation behavior using an unsupervised variable selection method. Different health indicators (HIs) are constructed from the selected variables, which represent the degradation as a function of time, and saved in the offline database as reference models. In the online phase, the method estimates the degradation state using a discrete Bayesian filter. The method finally finds the offline health indicator most similar to the online one, using a k-nearest neighbors (k-NN) classifier and Gaussian process regression (GPR), and uses it as a RUL estimator. The method is verified using PRONOSTIA bearing data as well as battery and turbofan engine degradation data acquired from the NASA data repository. The results show the effectiveness of the method in predicting the RUL.
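The online matching step described in this abstract, finding the offline reference trajectory most similar to a partial online health indicator, can be reduced to a nearest-neighbour search. A toy 1-NN sketch of that idea (a simplified stand-in for the k-NN classifier plus GPR similarity step the thesis actually uses; the profiles are made up for illustration):

```python
import numpy as np

def most_similar_hi(online_hi, offline_his):
    """Return the index of the offline health indicator whose initial segment
    is closest (Euclidean distance) to the partial online trajectory."""
    t = len(online_hi)
    dists = [np.linalg.norm(online_hi - ref[:t]) for ref in offline_his]
    return int(np.argmin(dists))

# toy offline library: two degradation profiles, linear and quadratic decay
time = np.linspace(0.0, 1.0, 100)
library = [1.0 - time, 1.0 - time**2]

# partial online observation: the first half of a quadratic decay
observed = 1.0 - np.linspace(0.0, 0.5, 50) ** 2
print(most_similar_hi(observed, library))  # matches the quadratic reference, index 1
```

Once the best-matching reference is found, its remaining trajectory serves as the basis for the RUL estimate.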
Karmann, Clémence. "Inférence de réseaux pour modèles inflatés en zéro." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0146/document.
Full text source
Network inference has more and more applications, particularly in human health and the environment, for the study of microbiological and genomic data. Networks are indeed an appropriate tool to represent, or even study, relationships between entities. Many mathematical estimation techniques have been developed, particularly in the context of Gaussian graphical models, but also in the case of binary or mixed data. The processing of abundance data (of microorganisms such as bacteria, for example) is particular for two reasons: on the one hand, such data do not directly reflect reality, because a sequencing process takes place to duplicate species and this process brings variability; on the other hand, a species may be absent in some samples. We are then in the context of zero-inflated data. Many graph inference methods exist for Gaussian, binary, and mixed data, but zero-inflated models are rarely studied, although they reflect the structure of many data sets in a relevant way. The objective of this thesis is to infer networks for zero-inflated models. In this thesis, we restrict ourselves to conditional dependency graphs. The work presented in this thesis is divided into two main parts. The first one concerns graph inference methods based on the estimation of neighbourhoods by a procedure combining ordinal regression models and variable selection methods. The second one focuses on graph inference in a model where the variables are Gaussian and zero-inflated by double truncation (right and left).
Fang, Yizhou. "Inference for Continuous Stochastic Processes Using Gaussian Process Regression." Thesis, 2014. http://hdl.handle.net/10012/8159.
Full text source