Academic literature on the topic 'Constrained Gaussian processes'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Constrained Gaussian processes.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Constrained Gaussian processes":

1

Wang, Xiaojing, and James O. Berger. "Estimating Shape Constrained Functions Using Gaussian Processes." SIAM/ASA Journal on Uncertainty Quantification 4, no. 1 (January 2016): 1–25. http://dx.doi.org/10.1137/140955033.

2

Graf, Siegfried, and Harald Luschgy. "Entropy-constrained functional quantization of Gaussian processes." Proceedings of the American Mathematical Society 133, no. 11 (May 2, 2005): 3403–9. http://dx.doi.org/10.1090/s0002-9939-05-07888-3.

3

Niu, Mu, Pokman Cheung, Lizhen Lin, Zhenwen Dai, Neil Lawrence, and David Dunson. "Intrinsic Gaussian processes on complex constrained domains." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 81, no. 3 (April 19, 2019): 603–27. http://dx.doi.org/10.1111/rssb.12320.

4

Girbés-Juan, Vicent, Joaquín Moll, Antonio Sala, and Leopoldo Armesto. "Cautious Bayesian Optimization: A Line Tracker Case Study." Sensors 23, no. 16 (August 18, 2023): 7266. http://dx.doi.org/10.3390/s23167266.

Abstract:
In this paper, a procedure for experimental optimization under safety constraints, to be denoted as constraint-aware Bayesian optimization, is presented. The basic ingredients are a performance objective function and a constraint function, both of which are modeled as Gaussian processes. We incorporate a prior model (transfer learning) used for the mean of the Gaussian processes, a semi-parametric kernel, and acquisition function optimization under chance-constrained requirements. In this way, experimental fine-tuning of a performance objective under experiment-model mismatch can be safely carried out. The methodology is illustrated in a case study on a line-follower application in a CoppeliaSim environment.
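
Editorial note: the chance-constrained selection rule described in this abstract is commonly implemented by weighting an acquisition function with the probability of feasibility under the constraint GP. The following is a minimal sketch of that generic idea; the function names and the specific expected-improvement/feasibility-threshold combination are our illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sigma_f, best_f, mu_g, sigma_g, alpha=0.05):
    """Expected improvement (for minimisation) weighted by the probability
    that the GP-modelled constraint g(x) <= 0 holds; candidates whose
    feasibility probability drops below 1 - alpha are ruled out."""
    z = (best_f - mu_f) / sigma_f
    ei = sigma_f * (z * norm.cdf(z) + norm.pdf(z))   # standard EI formula
    p_feasible = norm.cdf(-mu_g / sigma_g)           # P(g(x) <= 0)
    return np.where(p_feasible >= 1 - alpha, ei * p_feasible, 0.0)
```

Here `mu_f`, `sigma_f` and `mu_g`, `sigma_g` are the posterior means and standard deviations of the objective and constraint GPs at candidate points.
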
5

Yang, Shihao, Samuel W. K. Wong, and S. C. Kou. "Inference of dynamic systems from noisy and sparse data via manifold-constrained Gaussian processes." Proceedings of the National Academy of Sciences 118, no. 15 (April 9, 2021): e2020397118. http://dx.doi.org/10.1073/pnas.2020397118.

Abstract:
Parameter estimation for nonlinear dynamic system models, represented by ordinary differential equations (ODEs), using noisy and sparse data, is a vital task in many fields. We propose a fast and accurate method, manifold-constrained Gaussian process inference (MAGI), for this task. MAGI uses a Gaussian process model over time series data, explicitly conditioned on the manifold constraint that derivatives of the Gaussian process must satisfy the ODE system. By doing so, we completely bypass the need for numerical integration and achieve substantial savings in computational time. MAGI is also suitable for inference with unobserved system components, which often occur in real experiments. MAGI is distinct from existing approaches as we provide a principled statistical construction under a Bayesian framework, which incorporates the ODE system through the manifold constraint. We demonstrate the accuracy and speed of MAGI using realistic examples based on physical experiments.
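
A note on the mechanics behind the manifold constraint: a GP and its time derivative are jointly Gaussian, so the derivative can be conditioned on observed values and compared with the ODE right-hand side. Below is a minimal numpy sketch of this one ingredient; the RBF kernel and all names are our illustrative choices, and MAGI itself builds a full Bayesian posterior around this idea.

```python
import numpy as np

def rbf(s, t, ell=1.0, var=1.0):
    d = s[:, None] - t[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

def d_rbf(s, t, ell=1.0, var=1.0):
    # derivative of the RBF kernel in its first argument:
    # cov(x'(s), x(t)) = d/ds k(s, t)
    d = s[:, None] - t[None, :]
    return -(d / ell**2) * rbf(s, t, ell, var)

def derivative_mean(t, x, ell=1.0, var=1.0, jitter=1e-8):
    """E[x'(t) | x(t) = x] under a zero-mean GP prior: the derivative
    process that the manifold constraint ties to the ODE right-hand side."""
    K = rbf(t, t, ell, var) + jitter * np.eye(len(t))
    return d_rbf(t, t, ell, var) @ np.linalg.solve(K, x)
```
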
6

Rattunde, Leonhard, Igor Laptev, Edgar D. Klenske, and Hans-Christian Möhring. "Safe optimization for feedrate scheduling of power-constrained milling processes by using Gaussian processes." Procedia CIRP 99 (2021): 127–32. http://dx.doi.org/10.1016/j.procir.2021.03.020.

7

Schweidtmann, Artur M., Dominik Bongartz, Daniel Grothe, Tim Kerkenhoff, Xiaopeng Lin, Jaromił Najman, and Alexander Mitsos. "Deterministic global optimization with Gaussian processes embedded." Mathematical Programming Computation 13, no. 3 (June 25, 2021): 553–81. http://dx.doi.org/10.1007/s12532-021-00204-y.

Abstract:
Gaussian processes (Kriging) are interpolating data-driven models that are frequently applied in various disciplines. Often, Gaussian processes are trained on datasets and are subsequently embedded as surrogate models in optimization problems. These optimization problems are nonconvex and global optimization is desired. However, previous literature observed computational burdens limiting deterministic global optimization to Gaussian processes trained on few data points. We propose a reduced-space formulation for deterministic global optimization with trained Gaussian processes embedded. For optimization, the branch-and-bound solver branches only on the free variables and McCormick relaxations are propagated through explicit Gaussian process models. The approach also leads to significantly smaller and computationally cheaper subproblems for lower and upper bounding. To further accelerate convergence, we derive envelopes of common covariance functions for GPs and tight relaxations of acquisition functions used in Bayesian optimization including expected improvement, probability of improvement, and lower confidence bound. In total, we reduce computational time by orders of magnitude compared to state-of-the-art methods, thus overcoming previous computational burdens. We demonstrate the performance and scaling of the proposed method and apply it to Bayesian optimization with global optimization of the acquisition function and chance-constrained programming. The Gaussian process models, acquisition functions, and training scripts are available open-source within the "MeLOn - Machine Learning Models for Optimization" toolbox (https://git.rwth-aachen.de/avt.svt/public/MeLOn).
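
For orientation, the "trained GP embedded as an explicit model" that the reduced-space formulation exploits is just a fixed algebraic function of the free variables once the weights are computed. Below is a minimal sketch of that object only; the branch-and-bound and McCormick machinery is far beyond a few lines, and `kernel` is an assumed callable returning a covariance matrix.

```python
import numpy as np

def gp_mean_factory(X, y, kernel, noise=1e-6):
    """Return the trained GP posterior mean as an explicit closed-form
    function of x; in a reduced-space formulation the solver branches
    only on x, while these frozen weights are propagated through the
    relaxations."""
    K = kernel(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)   # computed once at training time
    return lambda x: float(kernel(np.atleast_2d(x), X) @ alpha)
```
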
8

Li, Ming, Xiafei Tang, Qichun Zhang, and Yiqun Zou. "Non-Gaussian Pseudolinear Kalman Filtering-Based Target Motion Analysis with State Constraints." Applied Sciences 12, no. 19 (October 4, 2022): 9975. http://dx.doi.org/10.3390/app12199975.

Abstract:
For the bearing-only target motion analysis (TMA), the pseudolinear Kalman filter (PLKF) solves the complex nonlinear estimation of the motion model parameters but suffers serious bias problems. The pseudolinear Kalman filter under the minimum mean square error framework (PL-MMSE) has a more accurate tracking ability and higher stability compared to the PLKF. Since the bearing signals are corrupted by non-Gaussian noise in practice, we reconstruct the PL-MMSE under Gaussian mixture noise. If some prior information, such as state constraints, is available, the performance of the PL-MMSE can be further improved by incorporating state constraints in the filtering process. In this paper, the mean square and estimation projection methods are used to incorporate PL-MMSE with linear constraints, respectively. Then, the linear approximation and second-order approximation methods are applied to merge PL-MMSE with nonlinear constraints, respectively. Simulation results demonstrate that the constrained PL-MMSE algorithms result in lower mean square errors and bias norms, which demonstrates the superiority of the constrained algorithms.
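
For reference, the estimate-projection method mentioned in this abstract has a standard closed form for linear constraints; a small sketch in our own notation, where D x = d is the constraint and P the state covariance:

```python
import numpy as np

def project_estimate(x_hat, P, D, d):
    """Project an unconstrained estimate x_hat onto {x : D x = d};
    weighting the projection by the state covariance P gives the
    minimum-variance constrained estimate."""
    S = D @ P @ D.T
    gain = P @ D.T @ np.linalg.inv(S)
    return x_hat - gain @ (D @ x_hat - d)
```
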
9

Salmon, John. "Generation of Correlated and Constrained Gaussian Stochastic Processes for N-Body Simulations." Astrophysical Journal 460 (March 1996): 59. http://dx.doi.org/10.1086/176952.

10

Rocher, Antoine, Vanina Ruhlmann-Kleider, Etienne Burtin, and Arnaud de Mattia. "Halo occupation distribution of Emission Line Galaxies: fitting method with Gaussian processes." Journal of Cosmology and Astroparticle Physics 2023, no. 05 (May 1, 2023): 033. http://dx.doi.org/10.1088/1475-7516/2023/05/033.

Abstract:
The halo occupation distribution (HOD) framework is an empirical method to describe the connection between dark matter halos and galaxies, which is constrained by small scale clustering data. Efficient fitting procedures are required to scan the HOD parameter space. This paper describes such a method based on Gaussian processes to iteratively build a surrogate model of the posterior of the likelihood surface from a reasonable amount of likelihood computations, typically two orders of magnitude less than standard Markov chain Monte Carlo algorithms. Errors in the likelihood computation due to stochastic HOD modelling are also accounted for in the method we propose. We report results of reproducibility, accuracy and stability tests of the method derived from simulation, taking as a test case star-forming emission line galaxies, which constitute the main tracer of the Dark Energy Spectroscopic Instrument and have so far a poorly constrained galaxy-halo connection from observational data.
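
The iterative surrogate idea in this abstract, fitting a GP to a handful of likelihood evaluations and letting the model decide where to evaluate next, can be sketched generically. The loop below is our illustration only; the scikit-learn kernels, the upper-confidence rule, and unit-hypercube parameters are all our assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def iterate_surrogate(loglike, thetas, n_rounds=20, n_cand=2000, seed=0):
    """Fit a GP to (theta, log-likelihood) pairs -- the WhiteKernel absorbs
    the stochastic-HOD noise -- and spend new model evaluations where the
    surrogate is promising or uncertain. Parameters are assumed rescaled
    to the unit hypercube."""
    rng = np.random.default_rng(seed)
    evals = [loglike(t) for t in thetas]
    gp = None
    for _ in range(n_rounds):
        kernel = Matern(nu=2.5) + WhiteKernel()
        gp = GaussianProcessRegressor(kernel=kernel).fit(thetas, evals)
        cand = rng.uniform(size=(n_cand, thetas.shape[1]))
        mu, sd = gp.predict(cand, return_std=True)
        t_new = cand[np.argmax(mu + sd)]        # upper-confidence rule
        thetas = np.vstack([thetas, t_new])
        evals.append(loglike(t_new))
    return gp, thetas, np.asarray(evals)
```
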

Dissertations / Theses on the topic "Constrained Gaussian processes":

1

Tran, Tien-Tam. "Constrained and Low Rank Gaussian Process on some Manifolds." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2023. https://theses.hal.science/tel-04529284.

Abstract:
The thesis is divided into three main parts; we summarize the major contributions as follows.

Low-complexity Gaussian processes: Gaussian process regression usually scales as $O(n^3)$ in computation and $O(n^2)$ in memory, where $n$ represents the number of observations. This limitation becomes unfeasible for many problems when $n$ is large. In this thesis, we investigate the Karhunen-Loève expansion of Gaussian processes, which offers several advantages over low-rank compression techniques. By truncating the Karhunen-Loève expansion, we obtain an explicit low-rank approximation of the covariance (Gram) matrix, greatly simplifying statistical inference when the number of truncation terms is small relative to $n$. We then provide explicit solutions for low-complexity Gaussian processes, seeking Karhunen-Loève expansions by solving for eigenpairs of a differential operator for which the covariance function serves as the Green function. We offer explicit solutions for the Matérn differential operator and for differential operators whose eigenfunctions are classical polynomials. In the experimental section, we compare the proposed methods with alternative approaches, revealing their enhanced capability to capture intricate patterns.

Constrained Gaussian processes: This thesis introduces a novel approach using constrained Gaussian processes to approximate a density function from observations. Our approach models the square root of the unknown density function as a Gaussian process, approximated by a truncated Karhunen-Loève expansion. A notable advantage of this approach is that the coefficients are Gaussian and independent, with the constraints on the realized functions entirely dictated by the constraints on the random coefficients. After conditioning on both the available data and the constraints, the posterior distribution of the coefficients is a normal distribution constrained to the unit sphere. This distribution is analytically intractable, necessitating numerical approximation; to this end, the thesis employs spherical Hamiltonian Monte Carlo (HMC). The efficacy of the proposed framework is validated through a series of experiments, with performance comparisons against alternative methods.

Transfer learning on the manifold of finite probability measures: Finally, we introduce transfer learning models in the space of finite probability measures, denoted $\mathcal{P}_+(I)$. We endow $\mathcal{P}_+(I)$ with the Fisher-Rao metric, turning it into a Riemannian manifold that holds a significant place in information geometry and has numerous applications. We provide detailed formulas for geodesics, the exponential map, the log map, and parallel transport on $\mathcal{P}_+(I)$. Our exploration extends to statistical models situated within $\mathcal{P}_+(I)$, typically formulated in the tangent space of this manifold. With a comprehensive set of geometric tools, we introduce transfer learning models facilitating knowledge transfer between these tangent spaces. Detailed algorithms for transfer learning, encompassing Principal Component Analysis (PCA) and linear regression models, are presented. To substantiate these concepts, we conduct a series of experiments offering empirical evidence of their efficacy.
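
To make the low-rank mechanism in the first part concrete, here is a minimal sketch of a truncated Karhunen-Loève (eigen-)decomposition of a covariance matrix; with m terms kept, downstream solves scale with m rather than n. All names are ours.

```python
import numpy as np

def truncated_kl_basis(K, m):
    """Rank-m Karhunen-Loeve approximation of a covariance (Gram)
    matrix: K ~= Phi @ Phi.T with Phi of shape (n, m), built from the
    top-m eigenpairs."""
    w, V = np.linalg.eigh(K)                 # ascending eigenvalues
    top = np.argsort(w)[::-1][:m]
    return V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))
```
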
2

Lu, Zhengdong. "Constrained clustering and cognitive decline detection." Thesis, Oregon Health & Science University, 2008. http://content.ohsu.edu/u?/etd,650.

3

Cothren, Jackson D. "Reliability in constrained Gauss-Markov models: an analytical and differential approach with applications in photogrammetry." Ph.D. diss., Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1085689960.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xii, 119 p.; also includes graphics (some col.). Includes bibliographical references (p. 106-109). Available online via OhioLINK's ETD Center
4

López-Lopera, Andrés Felipe. "Gaussian Process Modelling under Inequality Constraints." Thesis, Lyon, 2019. https://tel.archives-ouvertes.fr/tel-02863891.

Abstract:
Conditioning Gaussian processes (GPs) by inequality constraints gives more realistic models. This thesis focuses on the finite-dimensional approximation of GP models proposed by Maatouk (2015), which satisfies the constraints everywhere in the input space. Several contributions are provided. First, we study the use of Markov chain Monte Carlo methods for truncated multivariate normal distributions. They result in efficient sampling for linear inequality constraints. Second, we explore the extension of the model, previously limited to three-dimensional spaces, to higher dimensions. The introduction of a noise effect allows us to go up to dimension five. We propose a sequential algorithm based on knot insertion, which concentrates the computational budget on the most active dimensions. We also explore the Delaunay triangulation as an alternative to tensorisation. Finally, we study the case of additive models in this context, theoretically and on problems involving hundreds of input variables. Third, we give theoretical results on inference under inequality constraints. The asymptotic consistency and normality of maximum likelihood estimators are established. The main methods throughout this manuscript are implemented in the R programming language. They are applied to risk assessment problems in nuclear safety and coastal flooding, accounting for positivity and monotonicity constraints. As a by-product, we also show that the proposed GP approach provides an original framework for modelling Poisson processes with stochastic intensities.
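
The computational core of this line of work is drawing from a Gaussian vector subject to linear inequality constraints. As a point of reference only, here is the naive rejection view of that target; it is workable when the constrained region carries non-negligible mass, and the thesis studies MCMC samplers that remain efficient when rejection fails.

```python
import numpy as np

def rejection_tmvn(mu, Sigma, A, lower, upper, n, seed=0, max_tries=100000):
    """Naive rejection sampler for xi ~ N(mu, Sigma) subject to
    lower <= A xi <= upper. Purely illustrative of the target
    distribution, not an efficient method."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    out = []
    for _ in range(max_tries):
        x = mu + L @ rng.standard_normal(len(mu))
        y = A @ x
        if np.all(y >= lower) and np.all(y <= upper):
            out.append(x)
            if len(out) == n:
                break
    return np.array(out)
```
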
5

Brahmantio, Bayu Beta. "Efficient Sampling of Gaussian Processes under Linear Inequality Constraints." Thesis, Linköpings universitet, Statistik och maskininlärning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176246.

Abstract:
In this thesis, newer Markov Chain Monte Carlo (MCMC) algorithms are implemented and compared in terms of their efficiency in the context of sampling from Gaussian processes under linear inequality constraints. Extending the Gaussian process framework that uses a Gibbs sampler, two MCMC algorithms, Exact Hamiltonian Monte Carlo (HMC) and Analytic Elliptical Slice Sampling (ESS), are used to sample values of the truncated multivariate Gaussian distributions that arise in Gaussian process regression models with linear inequality constraints. In terms of generating samples from Gaussian processes under linear inequality constraints, the proposed methods generally produce samples that are less correlated than samples from the Gibbs sampler. Time-wise, Analytic ESS proves to be the faster choice, while Exact HMC produces the least correlated samples.
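
For contrast with the newer samplers compared above, the Gibbs baseline is easy to state: each coordinate's full conditional is a univariate truncated normal. A minimal sketch for box constraints, in our notation; Exact HMC and Analytic ESS replace this inner loop.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_tmvn(mu, Sigma, lower, upper, n_samples, seed=0):
    """Gibbs sampler for a multivariate normal truncated to a box:
    every full conditional is a univariate truncated normal."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    P = np.linalg.inv(Sigma)                 # precision matrix
    d = len(mu)
    x = np.clip(mu.copy(), lower, upper)
    out = np.empty((n_samples, d))
    for s in range(n_samples):
        for i in range(d):
            var_i = 1.0 / P[i, i]
            rest = np.delete(np.arange(d), i)
            mu_i = mu[i] - var_i * P[i, rest] @ (x[rest] - mu[rest])
            a = (lower[i] - mu_i) / np.sqrt(var_i)
            b = (upper[i] - mu_i) / np.sqrt(var_i)
            x[i] = truncnorm.rvs(a, b, loc=mu_i, scale=np.sqrt(var_i),
                                 random_state=rng)
        out[s] = x
    return out
```
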
6

Maatouk, Hassan. "Correspondance entre régression par processus Gaussien et splines d'interpolation sous contraintes linéaires de type inégalité. Théorie et applications." Thesis, Saint-Etienne, EMSE, 2015. http://www.theses.fr/2015EMSE0791/document.

Abstract:
This thesis is dedicated to interpolation problems where the numerical function is known to satisfy properties such as positivity, monotonicity or convexity. Two methods of interpolation are studied. The first one is deterministic and is based on convex optimization in a Reproducing Kernel Hilbert Space (RKHS). The second one is a Bayesian approach based on Gaussian Process Regression (GPR) or Kriging. Using a finite linear functional decomposition, we propose to approximate the original Gaussian process by a finite-dimensional Gaussian process such that conditional simulations satisfy all the inequality constraints. As a consequence, GPR is equivalent to the simulation of a Gaussian vector truncated to a convex set. The mode, or Maximum A Posteriori, is defined as a Bayesian estimator, and prediction intervals are quantified by simulation. Convergence of the method is proved, and the correspondence between the two approaches is established. This can be seen as an extension of the correspondence established by [Kimeldorf and Wahba, 1971] between Bayesian estimation on stochastic processes and smoothing by splines. Finally, a real application in insurance and finance is given, estimating a term-structure curve and default probabilities.
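
The correspondence result can be summarized in one constrained quadratic program; a hedged restatement in our own notation, not the thesis's exact formulation:

```latex
% Finite-dimensional approximation Y_N(x) = \sum_{j=1}^{N} \xi_j \phi_j(x),
% prior \xi \sim \mathcal{N}(0, \Gamma), convex constraint set C.
% The MAP (mode) of the truncated posterior solves a spline-type problem:
\hat{\xi} \;=\; \operatorname*{arg\,min}_{\xi \in C} \; \xi^{\top} \Gamma^{-1} \xi
\quad \text{subject to} \quad Y_N(x_i) = y_i, \; i = 1, \dots, n.
```
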
7

Wang, Wayne. "Non-colliding Gaussian Process Regressions." Thesis, 2020. http://hdl.handle.net/1885/209109.

Abstract:
Imposing constraints on models has been a way to incorporate prior information to develop more realistic models and/or compensate for the lack of data. In this thesis, we propose a way to impose a “non-colliding constraint” on Gaussian process regression when modelling multiple (unknown) functions at the same time. The non-colliding constraint prevents the situation where the predictions from different regressions intersect with each other. This is a desirable property when the physical process that we are trying to model exhibits a multi-layered structure, such as in stratigraphy, or when the underlying functions should not intersect, for example the highest and lowest temperatures over a given time period. We show that the non-colliding problem can be reformulated as modelling a sequence of Gaussian process regressions with inequality constraints. We then use a piecewise linear approximation approach proposed by López-Lopera et al. (2018) to achieve this. Through an extensive simulation study, we show that our method is able to produce more realistic models that reflect the prior information of no collisions, as well as smaller errors with less variability than standard Gaussian process regression, especially when the training set is small.
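
Once each layer is written in a finite-dimensional basis, "non-colliding" reduces to linear inequalities between coefficient blocks at shared knots. A hypothetical helper illustrating that encoding; the layer-major stacking is our assumption, not the thesis's implementation.

```python
import numpy as np

def non_colliding_constraints(n_layers, n_knots):
    """Hypothetical helper: rows of A such that A @ xi >= 0 encodes
    f_{l+1}(t_k) >= f_l(t_k) at every knot t_k, for coefficients xi
    stacked layer by layer (layer-major ordering assumed)."""
    rows = []
    for l in range(n_layers - 1):
        for k in range(n_knots):
            r = np.zeros(n_layers * n_knots)
            r[(l + 1) * n_knots + k] = 1.0    # upper layer at knot k
            r[l * n_knots + k] = -1.0         # lower layer at knot k
            rows.append(r)
    return np.array(rows)
```
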
8

"Interval linear constraint solving in constraint logic programming." Chinese University of Hong Kong, 1994. http://library.cuhk.edu.hk/record=b5888548.

Abstract:
by Chong-kan Chiu.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1994.
Includes bibliographical references (leaves 97-103).
Table of contents:
1 Introduction
  1.1 Related Work
  1.2 Organization of the Dissertation
  1.3 Notations
2 Overview of ICLP(R)
  2.1 Basics of Interval Arithmetic
  2.2 Relational Interval Arithmetic
    2.2.1 Interval Reduction
    2.2.2 Arithmetic Primitives
    2.2.3 Interval Narrowing and Interval Splitting
  2.3 Syntax and Semantics
3 Limitations of Interval Narrowing
  3.1 Computation Inefficiency
  3.2 Inability to Detect Inconsistency
  3.3 The Newton Language
4 Design of CIAL
  4.1 The CIAL Architecture
  4.2 The Inference Engine
    4.2.1 Interval Variables
    4.2.2 Extended Unification Algorithm
  4.3 The Solver Interface and Constraint Decomposition
  4.4 The Linear and the Non-linear Solvers
5 The Linear Solver
  5.1 An Interval Gaussian Elimination Solver
    5.1.1 Naive Interval Gaussian Elimination
    5.1.2 Generalized Interval Gaussian Elimination
    5.1.3 Incrementality of Generalized Gaussian Elimination
    5.1.4 Solvers Interaction
  5.2 An Interval Gauss-Seidel Solver
    5.2.1 Interval Gauss-Seidel Method
    5.2.2 Preconditioning
    5.2.3 Incrementality of Preconditioned Gauss-Seidel Method
    5.2.4 Solver Interaction
  5.3 Comparisons
    5.3.1 Time Complexity
    5.3.2 Storage Complexity
    5.3.3 Others
6 Benchmarking
  6.1 Mortgage
  6.2 Simple Linear Simultaneous Equations
  6.3 Analysis of DC Circuit
  6.4 Inconsistent Simultaneous Equations
  6.5 Collision Problem
  6.6 Wilkinson Polynomial
  6.7 Summary and Discussion
  6.8 Large System of Simultaneous Equations
  6.9 Comparisons Between the Incremental and the Non-Incremental Preconditioning
7 Concluding Remarks
  7.1 Summary and Contributions
  7.2 Future Work
Bibliography
9

Koyejo, Oluwasanmi Oluseye. "Constrained relative entropy minimization with applications to multitask learning." 2013. http://hdl.handle.net/2152/20793.

Abstract:
This dissertation addresses probabilistic inference via relative entropy minimization subject to expectation constraints. A canonical representation of the solution is determined without the requirement for convexity of the constraint set, and is given by members of an exponential family. The use of conjugate priors for relative entropy minimization is proposed, and a class of conjugate prior distributions is introduced. An alternative representation of the solution is provided as members of the prior family when the prior distribution is conjugate. It is shown that the solutions can be found by direct optimization with respect to members of such parametric families. Constrained Bayesian inference is recovered as a special case with a specific choice of constraints induced by observed data. The framework is applied to the development of novel probabilistic models for multitask learning subject to constraints determined by domain expertise. First, a model is developed for multitask learning that jointly learns a low rank weight matrix and the prior covariance structure between different tasks. The multitask learning approach is extended to a class of nonparametric statistical models for transposable data, incorporating side information such as graphs that describe inter-row and inter-column similarity. The resulting model combines a matrix-variate Gaussian process prior with inference subject to nuclear norm expectation constraints. In addition, a novel nonparametric model is proposed for multitask bipartite ranking. The proposed model combines a hierarchical matrix-variate Gaussian process prior with inference subject to ordering constraints and nuclear norm constraints, and is applied to disease gene prioritization. In many of these applications, the solution is found to be unique. Experimental results show substantial performance improvements as compared to strong baseline models.
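
The canonical-form result described here has a compact statement; a hedged restatement in our notation:

```latex
% Minimising relative entropy KL(p \,\|\, p_0) subject to expectation
% constraints E_p[T(x)] = c yields an exponential-family member:
p^{*}(x) \;\propto\; p_0(x)\, \exp\!\big(\lambda^{\top} T(x)\big),
% with the multipliers \lambda chosen so that the constraints hold.
```
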
10

"Bayesian-Entropy Method for Probabilistic Diagnostics and Prognostics of Engineering Systems." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.62900.

Abstract:
Information exists in various forms, and better utilization of the available information can benefit system awareness and response predictions. The focus of this dissertation is on the fusion of different types of information using the Bayesian-Entropy method. The Maximum Entropy method in information theory introduces a unique way of handling information in the form of constraints. The Bayesian-Entropy (BE) principle is proposed to integrate Bayes' theorem and the Maximum Entropy method to encode extra information. The posterior distribution in the Bayesian-Entropy method has a Bayesian part to handle point observation data, and an Entropy part that encodes constraints, such as statistical moment information, range information and general functions between variables. The proposed method is then extended to its network format as the Bayesian Entropy Network (BEN), which serves as a generalized information fusion tool for diagnostics, prognostics, and surrogate modeling. The proposed BEN is demonstrated and validated with extensive engineering applications. The BEN method is first demonstrated for diagnostics of gas pipelines and metal/composite plates for damage diagnostics. Both empirical knowledge and physics models are integrated with direct observations to improve the accuracy of diagnostics and to reduce the training samples. Next, the BEN is demonstrated in prognostics and safety assessment in the air traffic management system. Various information types, such as human concepts, variable correlation functions, physical constraints, and tendency data, are fused in BEN to enhance the safety assessment and risk prediction in the National Airspace System (NAS). Following this, the BE principle is applied in surrogate modeling. Multiple algorithms based on different types of information encoding, such as Bayesian-Entropy Linear Regression (BELR), Bayesian-Entropy Semiparametric Gaussian Process (BESGP), and Bayesian-Entropy Gaussian Process (BEGP), are demonstrated with numerical toy problems and practical engineering analysis. The results show that the major benefits are the superior prediction/extrapolation performance and the significant reduction of training samples when additional physics/knowledge is used as constraints. The proposed BEN offers a systematic and rigorous way to incorporate various information sources. Several major conclusions are drawn based on the proposed study.
Doctoral dissertation, Mechanical Engineering, 2020.
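
Reading the abstract structurally, the BE posterior factors into a Bayesian part and an Entropy part; a hedged sketch of that structure in our own notation, not the dissertation's exact equations:

```latex
p(\theta \mid \mathcal{D}, \text{constraints}) \;\propto\;
\underbrace{p(\mathcal{D} \mid \theta)\, p(\theta)}_{\text{Bayesian part}}
\;\cdot\;
\underbrace{\exp\!\Big(\textstyle\sum_{k} \lambda_k\, g_k(\theta)\Big)}_{\text{Entropy part, encoding } E[g_k] = c_k}
```
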

Book chapters on the topic "Constrained Gaussian processes":

1

Arcucci, Rossella, Douglas McIlwraith, and Yi-Ke Guo. "Scalable Weak Constraint Gaussian Processes." In Lecture Notes in Computer Science, 111–25. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22747-0_9.

2

Kottakki, Krishna Kumar, Sharad Bhartiya, and Mani Bhushan. "Optimization Based Constrained Unscented Gaussian Sum Filter." In 12th International Symposium on Process Systems Engineering and 25th European Symposium on Computer Aided Process Engineering, 1715–20. Elsevier, 2015. http://dx.doi.org/10.1016/b978-0-444-63577-8.50131-5.

3

Sood, Anoop Kumar. "Finite Element-Based Optimization of Additive Manufacturing Process Using Statistical Modelling and League of Champion Algorithm." In Machine Learning Applications in Non-Conventional Machining Processes, 215–34. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3624-7.ch014.

Abstract:
The study develops a 2D (two-dimensional) finite element model with a Gaussian heat source to simulate the powder bed laser additive manufacturing of Ti6Al4V alloy. The modelling approach provides insight into the process by correlating laser power and scan speed with melt pool temperature distribution and size. To handle the FEA results in an optimization environment, a statistical approach of data normalization and regression modelling is adopted. The statistical treatment not only deduces the interdependence of the various objectives considered but also makes the representation of objectives and constraints computationally simple. Adoption of a new stochastic algorithm, namely the league of champion algorithm (LCA), together with a penalty function approach for non-linear constraint handling, reduces the effort required and the computational complexity involved in determining the optimum parameter setting.
4

Mei, Yongsheng, Tian Lan, Mahdi Imani, and Suresh Subramaniam. "A Bayesian Optimization Framework for Finding Local Optima in Expensive Multimodal Functions." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230455.

Abstract:
Bayesian optimization (BO) is a popular global optimization scheme for sample-efficient optimization in domains with expensive function evaluations. The existing BO techniques are capable of finding a single global optimum solution. However, finding a set of global and local optimum solutions is crucial in a wide range of real-world problems, as implementing some of the optimal solutions might not be feasible due to various practical restrictions (e.g., resource limitation, physical constraints, etc.). In such domains, if multiple solutions are known, the implementation can be quickly switched to another solution, and the best possible system performance can still be obtained. This paper develops a multimodal BO framework to effectively find a set of local/global solutions for expensive-to-evaluate multimodal objective functions. We consider the standard BO setting with Gaussian process regression representing the objective function. We analytically derive the joint distribution of the objective function and its first-order derivatives. This joint distribution is used in the body of the BO acquisition functions to search for local optima during the optimization process. We introduce variants of the well-known BO acquisition functions to the multimodal setting and demonstrate the performance of the proposed framework in locating a set of local optimum solutions using multiple optimization problems.
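
The joint distribution of a GP and its first-order derivatives, used inside the acquisition functions above, follows from differentiating the kernel; a hedged statement in our notation:

```latex
% For f \sim \mathcal{GP}(0, k), the function and its gradient are jointly
% Gaussian, with cross-covariances given by kernel derivatives:
\operatorname{cov}\!\big(f(x),\, \partial_{x'_j} f(x')\big) = \partial_{x'_j} k(x, x'),
\qquad
\operatorname{cov}\!\big(\partial_{x_i} f(x),\, \partial_{x'_j} f(x')\big)
= \partial_{x_i} \partial_{x'_j} k(x, x').
% Local optima are then sought where the gradient posterior concentrates near zero.
```
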
5

Pawlowsky-Glahn, Vera, and Ricardo A. Olea. "Cokriging." In Geostatistical Analysis of Compositional Data. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780195171662.003.0011.

Abstract:
The problem of estimation of a coregionalization of size q using cokriging will be discussed in this chapter. Cokriging, a multivariate extension of kriging, is the usual procedure applied to multivariate regionalized problems within the framework of geostatistics. Its foundation is a distribution-free, linear, unbiased estimator with minimum estimation variance, although the absence of constraints on the estimator implicitly assumes that the sample space of the variables under consideration is the full multidimensional real space. If a multivariate normal distribution can be assumed for the vector random function, then the simple kriging estimator is identical with the conditional expectation, given a sample of size N. See Journel (1977, pp. 576-577), Journel (1980, pp. 288-290), Cressie (1991, p. 110), and Diggle, Tawn, and Moyeed (1998, p. 300) for further details. This estimator is in general the best possible linear estimator, as it is unbiased and has minimum estimation variance, but it is not very robust in the face of strong departures from normality. Therefore, for the estimation of regionalized compositions other distributions must also be taken into consideration. Recall that compositions cannot follow a multivariate normal distribution by definition, their sample space being the simplex. Consequently, regionalized compositions in general cannot be modeled under explicit or implicit assumptions of multivariate Gaussian processes. Here only the multivariate lognormal and additive logistic normal distributions will be addressed. Besides the logarithmic and additive logratio transformations, others can be applied, such as the multivariate Box-Cox transformation, as stated by Andrews et al. (1971), Rayens and Srinivasan (1991), and Barcelo-Vidal (1996). Furthermore, distributions such as the multiplicative logistic normal distribution introduced by Aitchison (1986, p. 131) or the additive logistic skew-normal distribution defined by Azzalini and Dalla Valle (1996) can be investigated in a similar fashion. References to the literature for the fundamental principles of the theory discussed in this chapter were given in Chapter 2. Among those, special attention is drawn to the work of Myers (1982), where the matrix formulation of cokriging was first presented and the properties included in the first section of this chapter were stated.
6

Arsenio, Artur Miguel. "Intelligent Approaches for Adaptation and Distribution of Personalized Multimedia Content." In Intelligent Multimedia Technologies for Networking Applications, 197–224. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2833-5.ch008.

Abstract:
Telecommunication operators need to deliver to their clients not only profitable new services, but also good-quality, interactive content. Some of this content, such as advertisements, generates revenue directly, while other content generates revenue associated with a service, such as Video on Demand (VoD). One of the main concerns for current multimedia platforms is therefore the provisioning of content to end-users that generates revenue. Alternatives currently being explored include user-generated content as the content source (the prosumer model). However, a large source of revenue has been largely neglected: the capability of transforming and adapting content produced either by Content Providers (CPs) or by the end-user according to different categories, such as client location, personal settings, or business considerations, and of distributing such modified content. This chapter discusses and addresses this gap, proposing a content customization and distribution system for changing content consumption by adapting content according to target end-user profiles (such as end-user personal tastes or their local social or geographic community). The aim is to give CPs ways to allow users and/or Service Providers (SPs) to configure content according to different criteria, improving users' quality of experience and SPs' revenue generation, and possibly to charge users and SPs (e.g. advertisers) for such functionality. The authors propose to employ artificial intelligence techniques, such as mixtures of Gaussians, to learn the functional constraints faced by people, objects, or even scenes in a movie stream in order to support the content modification process. The solutions reported will allow SPs to provide the end-user with automatic ways to adapt and configure (on-line, live) content to their tastes, and even to manipulate the content of live (or off-line) video streams (in the way that photo editing did for images or that video editing, to a certain extent, did for off-line videos).

Conference papers on the topic "Constrained Gaussian processes":

1

Traganitis, Panagiotis A., and Georgios B. Giannakis. "Constrained Clustering using Gaussian Processes." In 2020 28th European Signal Processing Conference (EUSIPCO). IEEE, 2021. http://dx.doi.org/10.23919/eusipco47968.2020.9287331.

2

Swiler, Laura, Mamikon Gulian, Ari Frankel, Cosmin Safta, and John Jakeman. "Constrained Gaussian Processes: A Survey." In Proposed for presentation at the SIAM Computational Science and Engineering Conference 2021, held March 1-5, 2021 (virtual). US DOE, 2021. http://dx.doi.org/10.2172/1847480.

3

Jetton, Cole, Chengda Li, and Christopher Hoyle. "Constrained Bayesian Optimization Methods Using Regression and Classification Gaussian Processes As Constraints." In ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/detc2023-109993.

Abstract:
In Bayesian optimization, which is used for optimizing computationally expensive black-box problems, constraints need to be considered to arrive at solutions that are both optimal and feasible. Techniques to deal with black-box constraints have been studied extensively, often exploiting the constraint model to sample points that have a high probability of being feasible. These constraints are usually modeled using regression models if the output is a known value on a continuous real scale. However, if the constraint or set of constraints is not modeled explicitly, and it is only known whether a design vector is feasible or infeasible, then we treat the constraint as categorical. Such constraints should be modeled using classification models rather than regression methods, which have also been studied. Because of the variety of approaches to handling constraints, there is a need to compare methods for handling classification constraints as well as continuous constraints modeled with individual regression models. This paper explores and compares four main methods, with two additional ones specifically for classification constraints; these methods handle black-box constraints in Bayesian optimization by modeling the constraints with both regression and classification Gaussian processes. We found that the problem type and number of constraints influence the effectiveness of different approaches, with statistical differences in convergence. Regression models can be advantageous in terms of model fit time; however, this is also a function of the total number of constraints. Generally, regression constraints outperformed classification surrogates in terms of minimizing computational time, but the latter were still effective. Overall, this study provides valuable insights into the performance and implementation of different constrained Bayesian optimization techniques. This can help inform engineers about which method is most suitable for their problem and what issues they may encounter during implementation, and it gives a general understanding of the differences between using regression and classification Gaussian processes as constraints.
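
For readers implementing the classification-constraint variant, the key plumbing is weighting an acquisition by the classifier's feasibility probability. A minimal sketch; the scikit-learn classifier and the label convention 1 = feasible are our assumptions, not the paper's code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def feasibility_weighted(acq_values, clf, X_cand):
    """Weight acquisition values by the probability of feasibility from a
    classification GP trained on feasible/infeasible labels (1 = feasible).
    Column order follows clf.classes_, hence the explicit lookup."""
    col = list(clf.classes_).index(1)
    p_feasible = clf.predict_proba(X_cand)[:, col]
    return np.asarray(acq_values) * p_feasible
```
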
4

Gulian, Mamikon. "Gaussian Process Regression Constrained by Boundary Value Problems." In Proposed for presentation at the IMSI Workshop: Expressing and Exploiting Structure in Modeling, Theory, and Computation with Gaussian Processes. US DOE, 2022. http://dx.doi.org/10.2172/2004489.

5

Zinnen, Andreas, and Thomas Engel. "Deadline constrained scheduling in hybrid clouds with Gaussian processes." In 2011 International Conference on High Performance Computing & Simulation (HPCS). IEEE, 2011. http://dx.doi.org/10.1109/hpcsim.2011.5999837.

6

Kang, Hyuk, and F. C. Park. "Configuration space learning for constrained manipulation tasks using Gaussian processes." In 2014 IEEE-RAS 14th International Conference on Humanoid Robots (Humanoids 2014). IEEE, 2014. http://dx.doi.org/10.1109/humanoids.2014.7041500.

7

Diwale, Sanket Sanjay, Ioannis Lymperopoulos, and Colin N. Jones. "Optimization of an Airborne Wind Energy system using constrained Gaussian Processes." In 2014 IEEE Conference on Control Applications (CCA). IEEE, 2014. http://dx.doi.org/10.1109/cca.2014.6981519.

8

Tran, Anh, Kathryn Maupin, and Theron Rodgers. "Integrated Computational Materials Engineering With Monotonic Gaussian Processes." In ASME 2022 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/detc2022-89213.

Abstract:
Physics-constrained machine learning is emerging as an important topic in the field of machine learning for physics. One of the most significant advantages of incorporating physics constraints into machine learning methods is that the resulting model requires significantly less data to train. By incorporating physical rules into the machine learning formulation itself, the predictions are expected to be physically plausible. The Gaussian process (GP) is perhaps one of the most common machine learning methods for small datasets. In this paper, we investigate the possibility of constraining a GP formulation with monotonicity on two different material datasets, one experimental and one computational. The monotonic GP is compared against the regular GP, where a significant reduction in the posterior variance is observed. The monotonic GP is strictly monotonic in the interpolation regime, but in the extrapolation regime the monotonic effect starts fading away as one goes beyond the training dataset. Imposing monotonicity on the GP comes at a small accuracy cost compared to the regular GP. The monotonic GP is perhaps most useful in applications where data is scarce and noisy, or when the dimensionality is high and monotonicity is supported by strong physical reasoning.
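
To convey what "imposing monotonicity" means operationally, the crudest constrained-GP view is to keep only monotone posterior sample paths. The paper uses a proper constrained formulation; the rejection sketch below, with a hypothetical `sample_posterior` callable, only illustrates the idea.

```python
import numpy as np

def monotone_posterior_draws(sample_posterior, t_grid, n_keep, max_tries=10000):
    """Draw GP posterior sample paths on a grid and keep only the
    non-decreasing ones -- a rejection view of the monotonicity
    constraint, workable only when monotone paths are not too rare."""
    kept = []
    for _ in range(max_tries):
        f = sample_posterior(t_grid)      # hypothetical sampler callable
        if np.all(np.diff(f) >= 0):
            kept.append(f)
            if len(kept) == n_keep:
                break
    return np.array(kept)
```
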
9

Choi, Sungjoon, Kyungjae Lee, and Songhwai Oh. "Robust learning from demonstration using leveraged Gaussian processes and sparse-constrained optimization." In 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016. http://dx.doi.org/10.1109/icra.2016.7487168.

10

Mitrovic, Mile, Aleksandr Lukashevich, Petr Vorobev, Vladimir Terzija, Yury Maximov, and Deepjyoti Deka. "Fast Data-Driven Chance Constrained AC-OPF Using Hybrid Sparse Gaussian Processes." In 2023 IEEE Belgrade PowerTech. IEEE, 2023. http://dx.doi.org/10.1109/powertech55446.2023.10202724.


Reports on the topic "Constrained Gaussian processes":

1

Swiler, Laura, Mamikon Gulian, Ari Frankel, John Jakeman, and Cosmin Safta. LDRD Project Summary: Incorporating physical constraints into Gaussian process surrogate models. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1668928.

