Journal articles on the topic 'Constrained Gaussian processes'

Consult the top 50 journal articles for your research on the topic 'Constrained Gaussian processes'.

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

1

Wang, Xiaojing, and James O. Berger. "Estimating Shape Constrained Functions Using Gaussian Processes." SIAM/ASA Journal on Uncertainty Quantification 4, no. 1 (January 2016): 1–25. http://dx.doi.org/10.1137/140955033.

2

Graf, Siegfried, and Harald Luschgy. "Entropy-constrained functional quantization of Gaussian processes." Proceedings of the American Mathematical Society 133, no. 11 (May 2, 2005): 3403–9. http://dx.doi.org/10.1090/s0002-9939-05-07888-3.

3

Niu, Mu, Pokman Cheung, Lizhen Lin, Zhenwen Dai, Neil Lawrence, and David Dunson. "Intrinsic Gaussian processes on complex constrained domains." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 81, no. 3 (April 19, 2019): 603–27. http://dx.doi.org/10.1111/rssb.12320.

4

Girbés-Juan, Vicent, Joaquín Moll, Antonio Sala, and Leopoldo Armesto. "Cautious Bayesian Optimization: A Line Tracker Case Study." Sensors 23, no. 16 (August 18, 2023): 7266. http://dx.doi.org/10.3390/s23167266.

Abstract:
In this paper, a procedure for experimental optimization under safety constraints, to be denoted as constraint-aware Bayesian Optimization, is presented. The basic ingredients are a performance objective function and a constraint function; both of them will be modeled as Gaussian processes. We incorporate a prior model (transfer learning) used for the mean of the Gaussian processes, a semi-parametric kernel, and acquisition function optimization under chance-constrained requirements. In this way, experimental fine-tuning of a performance objective under experiment-model mismatch can be safely carried out. The methodology is illustrated in a case study on a line-follower application in a CoppeliaSim environment.
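
The recipe described in this abstract (one Gaussian process for the performance objective, one for the safety constraint, and an acquisition function weighted by the probability of satisfying the constraint) can be sketched in a few lines. The following is a minimal illustration with hypothetical toy functions and scikit-learn, not the authors' implementation:

```python
# Minimal sketch of constraint-aware Bayesian optimization: one GP for the objective,
# one GP for the safety constraint, acquisition = expected improvement x P(feasible).
# Toy functions and settings are hypothetical; this is not the authors' code.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def toy_objective(x):            # hypothetical performance metric (lower is better)
    return np.sin(3 * x) + 0.5 * x

def toy_constraint(x):           # hypothetical safety measurement, feasible if <= 0
    return np.cos(2 * x) - 0.3

rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(6, 1))                     # initial experiments
y_obj, y_con = toy_objective(X).ravel(), toy_constraint(X).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
gp_obj = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y_obj)
gp_con = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y_con)

Xc = np.linspace(0, 3, 400).reshape(-1, 1)             # candidate experiments
mu_o, sd_o = gp_obj.predict(Xc, return_std=True)
mu_c, sd_c = gp_con.predict(Xc, return_std=True)

feasible = y_con <= 0
best = y_obj[feasible].min() if feasible.any() else y_obj.min()
z = (best - mu_o) / np.maximum(sd_o, 1e-9)
ei = (best - mu_o) * norm.cdf(z) + sd_o * norm.pdf(z)  # expected improvement
p_feas = norm.cdf(-mu_c / np.maximum(sd_c, 1e-9))      # chance constraint P(c(x) <= 0)
x_next = Xc[np.argmax(ei * p_feas)]                    # constrained acquisition maximizer
print("next experiment suggested at x =", x_next)
```
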
5

Yang, Shihao, Samuel W. K. Wong, and S. C. Kou. "Inference of dynamic systems from noisy and sparse data via manifold-constrained Gaussian processes." Proceedings of the National Academy of Sciences 118, no. 15 (April 9, 2021): e2020397118. http://dx.doi.org/10.1073/pnas.2020397118.

Abstract:
Parameter estimation for nonlinear dynamic system models, represented by ordinary differential equations (ODEs), using noisy and sparse data, is a vital task in many fields. We propose a fast and accurate method, manifold-constrained Gaussian process inference (MAGI), for this task. MAGI uses a Gaussian process model over time series data, explicitly conditioned on the manifold constraint that derivatives of the Gaussian process must satisfy the ODE system. By doing so, we completely bypass the need for numerical integration and achieve substantial savings in computational time. MAGI is also suitable for inference with unobserved system components, which often occur in real experiments. MAGI is distinct from existing approaches as we provide a principled statistical construction under a Bayesian framework, which incorporates the ODE system through the manifold constraint. We demonstrate the accuracy and speed of MAGI using realistic examples based on physical experiments.
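
The manifold constraint at the core of MAGI can be illustrated with the closed-form derivative of a Gaussian process under an RBF kernel: the GP-implied derivative at the observation times is compared with the ODE right-hand side. The sketch below is a bare-bones illustration of that idea for a hypothetical logistic ODE with fixed GP hyperparameters; the actual method embeds this constraint in a full Bayesian posterior rather than minimizing a discrepancy:

```python
# Bare-bones illustration of a manifold (ODE) constraint on a GP: compare the GP's
# conditional derivative at the observation times with the ODE right-hand side.
# The logistic ODE, data, and RBF hyperparameters are assumed for illustration only.
import numpy as np

def rbf(t1, t2, sig=1.0, ell=1.0):
    d = t1[:, None] - t2[None, :]
    return sig**2 * np.exp(-0.5 * d**2 / ell**2)

def drbf(t1, t2, sig=1.0, ell=1.0):
    # cov(f'(t1), f(t2)) = d/dt1 k(t1, t2) for the RBF kernel
    d = t1[:, None] - t2[None, :]
    return -sig**2 * (d / ell**2) * np.exp(-0.5 * d**2 / ell**2)

def ode_rhs(x, theta):
    return theta * x * (1.0 - x)                   # logistic growth dx/dt

t = np.linspace(0.0, 5.0, 25)                      # observation times
theta_true = 1.2
x = 1.0 / (1.0 + 9.0 * np.exp(-theta_true * t))    # noiseless logistic trajectory

K = rbf(t, t) + 1e-8 * np.eye(t.size)              # jitter for numerical stability
gp_deriv = drbf(t, t) @ np.linalg.solve(K, x)      # conditional mean of the GP derivative

def manifold_discrepancy(theta):
    # how far the GP-implied derivatives are from satisfying the ODE for this theta
    return np.sum((gp_deriv - ode_rhs(x, theta)) ** 2)

thetas = np.linspace(0.5, 2.0, 151)
theta_hat = thetas[np.argmin([manifold_discrepancy(th) for th in thetas])]
print("theta minimizing the manifold discrepancy:", theta_hat)
```
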
6

Rattunde, Leonhard, Igor Laptev, Edgar D. Klenske, and Hans-Christian Möhring. "Safe optimization for feedrate scheduling of power-constrained milling processes by using Gaussian processes." Procedia CIRP 99 (2021): 127–32. http://dx.doi.org/10.1016/j.procir.2021.03.020.

7

Schweidtmann, Artur M., Dominik Bongartz, Daniel Grothe, Tim Kerkenhoff, Xiaopeng Lin, Jaromił Najman, and Alexander Mitsos. "Deterministic global optimization with Gaussian processes embedded." Mathematical Programming Computation 13, no. 3 (June 25, 2021): 553–81. http://dx.doi.org/10.1007/s12532-021-00204-y.

Abstract:
Gaussian processes (Kriging) are interpolating data-driven models that are frequently applied in various disciplines. Often, Gaussian processes are trained on datasets and are subsequently embedded as surrogate models in optimization problems. These optimization problems are nonconvex and global optimization is desired. However, previous literature observed computational burdens limiting deterministic global optimization to Gaussian processes trained on few data points. We propose a reduced-space formulation for deterministic global optimization with trained Gaussian processes embedded. For optimization, the branch-and-bound solver branches only on the free variables and McCormick relaxations are propagated through explicit Gaussian process models. The approach also leads to significantly smaller and computationally cheaper subproblems for lower and upper bounding. To further accelerate convergence, we derive envelopes of common covariance functions for GPs and tight relaxations of acquisition functions used in Bayesian optimization including expected improvement, probability of improvement, and lower confidence bound. In total, we reduce computational time by orders of magnitude compared to state-of-the-art methods, thus overcoming previous computational burdens. We demonstrate the performance and scaling of the proposed method and apply it to Bayesian optimization with global optimization of the acquisition function and chance-constrained programming. The Gaussian process models, acquisition functions, and training scripts are available open-source within the “MeLOn—Machine Learning Models for Optimization” toolbox (https://git.rwth-aachen.de/avt.svt/public/MeLOn).
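
The reduced-space idea can be pictured by writing a trained GP's prediction explicitly from its kernel matrices, so that the optimizer only ever sees closed-form expressions in the free input variables. The sketch below evaluates the explicit posterior mean, variance, and a lower-confidence-bound acquisition for a one-dimensional GP with assumed hyperparameters; it is a conceptual illustration, not the MeLOn toolbox or a branch-and-bound solver:

```python
# Conceptual sketch: a trained GP written out explicitly, so an optimizer (or a
# relaxation of these expressions) only sees closed forms in the free variable x.
# Training points and hyperparameters are assumed; this is not the MeLOn toolbox.
import numpy as np

def k_rbf(a, b, sig=1.0, ell=0.4):
    d = a[:, None] - b[None, :]
    return sig**2 * np.exp(-0.5 * d**2 / ell**2)

X_train = np.array([0.1, 0.4, 0.55, 0.8, 0.95])
y_train = np.sin(6 * X_train)

K = k_rbf(X_train, X_train) + 1e-6 * np.eye(X_train.size)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))   # K^{-1} y, fixed after training

def gp_mean_var(x):
    """Explicit posterior mean and variance at a scalar input x."""
    kx = k_rbf(np.atleast_1d(x), X_train).ravel()
    v = np.linalg.solve(L, kx)
    return kx @ alpha, max(1.0 - v @ v, 0.0)       # prior variance sig^2 = 1 here

def lcb(x, kappa=2.0):                             # lower confidence bound acquisition
    m, var = gp_mean_var(x)
    return m - kappa * np.sqrt(var)

# crude grid search over the free variable (a B&B solver would branch here instead)
grid = np.linspace(0.0, 1.0, 2001)
x_star = grid[np.argmin([lcb(x) for x in grid])]
print("minimizer of the LCB acquisition:", x_star)
```
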
8

Li, Ming, Xiafei Tang, Qichun Zhang, and Yiqun Zou. "Non-Gaussian Pseudolinear Kalman Filtering-Based Target Motion Analysis with State Constraints." Applied Sciences 12, no. 19 (October 4, 2022): 9975. http://dx.doi.org/10.3390/app12199975.

Abstract:
For the bearing-only target motion analysis (TMA), the pseudolinear Kalman filter (PLKF) solves the complex nonlinear estimation of the motion model parameters but suffers from serious bias problems. The pseudolinear Kalman filter under the minimum mean square error framework (PL-MMSE) has a more accurate tracking ability and higher stability compared to the PLKF. Since the bearing signals are corrupted by non-Gaussian noise in practice, we reconstruct the PL-MMSE under Gaussian mixture noise. If some prior information, such as state constraints, is available, the performance of the PL-MMSE can be further improved by incorporating state constraints in the filtering process. In this paper, the mean square and estimation projection methods are used to incorporate linear constraints into the PL-MMSE, and the linear approximation and second-order approximation methods are applied to incorporate nonlinear constraints. Simulation results show that the constrained PL-MMSE algorithms result in lower mean square errors and bias norms, which demonstrates the superiority of the constrained algorithms.
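
For linear state constraints, the estimate-projection step mentioned in this abstract has a standard closed form: the unconstrained estimate is projected onto the constraint set D x = d with a covariance-weighted gain. A generic NumPy sketch (not the paper's PL-MMSE filter, and with a hypothetical constant-velocity example) follows:

```python
# Generic estimate-projection step for a linear state constraint D x = d,
# using the covariance-weighted projection (a sketch, not the paper's PL-MMSE filter).
import numpy as np

def project_estimate(x_hat, P, D, d):
    """Project the estimate x_hat (covariance P) onto the set {x : D x = d}."""
    S = D @ P @ D.T
    gain = P @ D.T @ np.linalg.inv(S)
    x_c = x_hat - gain @ (D @ x_hat - d)           # constrained estimate
    P_c = P - gain @ D @ P                         # covariance after projection
    return x_c, P_c

# Example: a 4-state constant-velocity target known to move along the line vy = 2*vx.
x_hat = np.array([10.0, 5.0, 3.0, 5.5])            # [px, py, vx, vy] from the filter
P = np.diag([4.0, 4.0, 1.0, 1.0])
D = np.array([[0.0, 0.0, 2.0, -1.0]])              # encodes 2*vx - vy = 0
d = np.array([0.0])

x_c, P_c = project_estimate(x_hat, P, D, d)
print("constrained estimate:", x_c)                # velocity components now satisfy D x = d
```
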
9

Salmon, John. "Generation of Correlated and Constrained Gaussian Stochastic Processes for N-Body Simulations." Astrophysical Journal 460 (March 1996): 59. http://dx.doi.org/10.1086/176952.

10

Rocher, Antoine, Vanina Ruhlmann-Kleider, Etienne Burtin, and Arnaud de Mattia. "Halo occupation distribution of Emission Line Galaxies: fitting method with Gaussian processes." Journal of Cosmology and Astroparticle Physics 2023, no. 05 (May 1, 2023): 033. http://dx.doi.org/10.1088/1475-7516/2023/05/033.

Abstract:
The halo occupation distribution (HOD) framework is an empirical method to describe the connection between dark matter halos and galaxies, which is constrained by small scale clustering data. Efficient fitting procedures are required to scan the HOD parameter space. This paper describes such a method based on Gaussian Processes to iteratively build a surrogate model of the posterior of the likelihood surface from a reasonable amount of likelihood computations, typically two orders of magnitude less than standard Markov chain Monte Carlo algorithms. Errors in the likelihood computation due to stochastic HOD modelling are also accounted for in the method we propose. We report results of reproducibility, accuracy and stability tests of the method derived from simulation, taking as a test case star-forming emission line galaxies, which constitute the main tracer of the Dark Energy Spectroscopic Instrument and have so far a poorly constrained galaxy-halo connection from observational data.
11

Lindberg, Christina Willecke, Daniela Huppenkothen, R. Lynne Jones, Bryce T. Bolin, Mario Jurić, V. Zach Golkhou, Eric C. Bellm, et al. "Characterizing Sparse Asteroid Light Curves with Gaussian Processes." Astronomical Journal 163, no. 1 (December 21, 2021): 29. http://dx.doi.org/10.3847/1538-3881/ac3079.

Abstract:
In the era of wide-field surveys like the Zwicky Transient Facility and the Rubin Observatory’s Legacy Survey of Space and Time, sparse photometric measurements constitute an increasing percentage of asteroid observations, particularly for asteroids newly discovered in these large surveys. Follow-up observations to supplement these sparse data may be prohibitively expensive in many cases, so to overcome these sampling limitations, we introduce a flexible model based on Gaussian processes to enable Bayesian parameter inference of asteroid time-series data. This model is designed to be flexible and extensible, and can model multiple asteroid properties such as the rotation period, light-curve amplitude, changing pulse profile, and magnitude changes due to the phase-angle evolution at the same time. Here, we focus on the inference of rotation periods. Based on both simulated light curves and real observations from the Zwicky Transient Facility, we show that the new model reliably infers rotational periods from sparsely sampled light curves and generally provides well-constrained posterior probability densities for the model parameters. We propose this framework as an intermediate method between fast but very limited-period detection algorithms and much more comprehensive but computationally expensive shape-modeling based on ray-tracing codes.
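
The period-inference setting can be mimicked with an off-the-shelf periodic GP kernel whose periodicity is fitted by maximizing the marginal likelihood. The sketch below uses simulated data and scikit-learn's ExpSineSquared kernel; it is only a schematic stand-in for the paper's more flexible asteroid light-curve model:

```python
# Schematic period inference with a periodic GP kernel on sparse, irregular data.
# The simulated light curve and kernel settings are assumptions for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(1)
true_period = 0.31                                   # days, hypothetical rotation period
t = np.sort(rng.uniform(0, 5, 60))                   # sparse, irregular observation times
mag = 0.2 * np.sin(2 * np.pi * t / true_period) + 0.02 * rng.normal(size=t.size)

kernel = (ConstantKernel(0.1) *
          ExpSineSquared(length_scale=1.0, periodicity=0.3,
                         periodicity_bounds=(0.05, 2.0)) +
          WhiteKernel(noise_level=1e-3))
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
gp.fit(t.reshape(-1, 1), mag)                        # fits by maximizing marginal likelihood

params = gp.kernel_.get_params()
print("inferred periodicity (days):", params["k1__k2__periodicity"])
```
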
12

Wang, Shengbo, and Ke Li. "Constrained Bayesian Optimization under Partial Observations: Balanced Improvements and Provable Convergence." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15607–15. http://dx.doi.org/10.1609/aaai.v38i14.29488.

Abstract:
Partially observable constrained optimization problems (POCOPs) impede data-driven optimization techniques since an infeasible solution of POCOPs can provide little information about the objective as well as the constraints. We endeavor to design an efficient and provable method for expensive POCOPs under the framework of constrained Bayesian optimization. Our method consists of two key components. Firstly, we present an improved design of the acquisition functions that introduce balanced exploration during optimization. We rigorously study the convergence properties of this design to demonstrate its effectiveness. Secondly, we propose Gaussian processes embedding different likelihoods as the surrogate model for partially observable constraints. This model leads to a more accurate representation of the feasible regions compared to traditional classification-based models. Our proposed method is empirically studied on both synthetic and real-world problems. The results demonstrate the competitiveness of our method for solving POCOPs.
13

Schrouff, Jessica, Caroline Kussé, Louis Wehenkel, Pierre Maquet, and Christophe Phillips. "Decoding Semi-Constrained Brain Activity from fMRI Using Support Vector Machines and Gaussian Processes." PLoS ONE 7, no. 4 (April 26, 2012): e35860. http://dx.doi.org/10.1371/journal.pone.0035860.

14

Li, Wei, Philippe Ciais, Shushi Peng, Chao Yue, Yilong Wang, Martin Thurner, Sassan S. Saatchi, et al. "Land-use and land-cover change carbon emissions between 1901 and 2012 constrained by biomass observations." Biogeosciences 14, no. 22 (November 14, 2017): 5053–67. http://dx.doi.org/10.5194/bg-14-5053-2017.

Abstract:
The use of dynamic global vegetation models (DGVMs) to estimate CO2 emissions from land-use and land-cover change (LULCC) offers a new window to account for spatial and temporal details of emissions and for ecosystem processes affected by LULCC. One drawback of LULCC emissions from DGVMs, however, is lack of observation constraint. Here, we propose a new method of using satellite- and inventory-based biomass observations to constrain historical cumulative LULCC emissions (ELUCc) from an ensemble of nine DGVMs based on emerging relationships between simulated vegetation biomass and ELUCc. This method is applicable on the global and regional scale. The original DGVM estimates of ELUCc range from 94 to 273 PgC during 1901–2012. After constraining by current biomass observations, we derive a best estimate of 155 ± 50 PgC (1σ Gaussian error). The constrained LULCC emissions are higher than prior DGVM values in tropical regions but significantly lower in North America. Our emergent constraint approach independently verifies the median model estimate by biomass observations, giving support to the use of this estimate in carbon budget assessments. The uncertainty in the constrained ELUCc is still relatively large because of the uncertainty in the biomass observations, and thus reduced uncertainty in addition to increased accuracy in biomass observations in the future will help improve the constraint. This constraint method can also be applied to evaluate the impact of land-based mitigation activities.
15

Duecker, Daniel Andre, Andreas Rene Geist, Edwin Kreuzer, and Eugen Solowjow. "Learning Environmental Field Exploration with Computationally Constrained Underwater Robots: Gaussian Processes Meet Stochastic Optimal Control." Sensors 19, no. 9 (May 6, 2019): 2094. http://dx.doi.org/10.3390/s19092094.

Abstract:
Autonomous exploration of environmental fields is one of the most promising tasks to be performed by fleets of mobile underwater robots. The goal is to maximize the information gain during the exploration process by integrating an information-metric into the path-planning and control step. Therefore, the system maintains an internal belief representation of the environmental field which incorporates previously collected measurements from the real field. In contrast to surface robots, mobile underwater systems are forced to run all computations on-board due to the limited communication bandwidth in underwater domains. Thus, reducing the computational cost of field exploration algorithms constitutes a key challenge for in-field implementations on micro underwater robot teams. In this work, we present a computationally efficient exploration algorithm which utilizes field belief models based on Gaussian Processes, such as Gaussian Markov random fields or Kalman regression, to enable field estimation with constant computational cost over time. We extend the belief models by the use of weighted shape functions to directly incorporate spatially continuous field observations. The developed belief models function as information-theoretic value functions to enable path planning through stochastic optimal control with path integrals. We demonstrate the efficiency of our exploration algorithm in a series of simulations including the case of a stationary spatio-temporal field.
16

Minkova, Leda D. "A stochastic model for the financial market with discontinuous prices." Journal of Applied Mathematics and Stochastic Analysis 9, no. 3 (January 1, 1996): 271–80. http://dx.doi.org/10.1155/s1048953396000263.

Abstract:
This paper models some situations occurring in the financial market. The asset prices evolve according to a stochastic integral equation driven by a Gaussian martingale. A portfolio process is constrained in such a way that the wealth process covers some obligation. A solution to a linear stochastic integral equation is obtained in a class of cadlag stochastic processes.
17

Li, Lei, Zhen Gao, Yu-Tian Wang, Ming-Wen Zhang, Jian-Cheng Ni, Chun-Hou Zheng, and Yansen Su. "SCMFMDA: Predicting microRNA-disease associations based on similarity constrained matrix factorization." PLOS Computational Biology 17, no. 7 (July 12, 2021): e1009165. http://dx.doi.org/10.1371/journal.pcbi.1009165.

Abstract:
miRNAs belong to small non-coding RNAs that are related to a number of complicated biological processes. Considerable studies have suggested that miRNAs are closely associated with many human diseases. In this study, we proposed a computational model based on Similarity Constrained Matrix Factorization for miRNA-Disease Association Prediction (SCMFMDA). In order to effectively combine different disease and miRNA similarity data, we applied the similarity network fusion algorithm to obtain integrated disease similarity (composed of disease functional similarity, disease semantic similarity and disease Gaussian interaction profile kernel similarity) and integrated miRNA similarity (composed of miRNA functional similarity, miRNA sequence similarity and miRNA Gaussian interaction profile kernel similarity). In addition, the L2 regularization terms and similarity constraint terms were added to the traditional Nonnegative Matrix Factorization algorithm to predict disease-related miRNAs. SCMFMDA achieved AUCs of 0.9675 and 0.9447 based on global Leave-one-out cross validation and five-fold cross validation, respectively. Furthermore, the case studies on two common human diseases were also implemented to demonstrate the prediction accuracy of SCMFMDA. The top 50 predicted miRNAs confirmed by experimental reports indicated that SCMFMDA was effective for prediction of the relationship between miRNAs and diseases.
18

Dekker, H., and A. Maassen van den Brink. "Transition State Theory in Extended Phase Space." Modern Physics Letters B 7, no. 19 (August 20, 1993): 1263–68. http://dx.doi.org/10.1142/s0217984993001284.

Abstract:
Turnover theory (of the escape rate Γ) à la Grabert will be based solely on Kramers' Fokker–Planck equation for activated rate processes. No recourse to a microscopic model or Langevin dynamics will be made. Apart from the unstable mode energy E, the analysis requires new theoretical concepts such as a constrained Gaussian transformation (CGT) and dynamically extended phase space (EPS).
19

Guo, Wei, Tianhong Pan, Zhengming Li, and Shan Chen. "Batch process modeling by using temporal feature and Gaussian mixture model." Transactions of the Institute of Measurement and Control 42, no. 6 (December 1, 2019): 1204–14. http://dx.doi.org/10.1177/0142331219887827.

Abstract:
Multi-model/multi-phase modeling algorithms have been widely used to monitor product quality in complicated batch processes. Most multi-model/multi-phase modeling methods hinge on the structure of a linearly separable space or a combination of different sub-spaces. However, it is impossible to accurately separate the overlapping region samples into different operating sub-spaces using unsupervised learning techniques. A Gaussian mixture model (GMM) using temporal features is proposed in this work. First, the number of sub-models is estimated by using the maximum interval process trend analysis algorithm. Then, the GMM parameters constrained with the temporal value are identified by using the expectation maximization (EM) algorithm, which minimizes confusion in overlapping regions of different Gaussian processes. A numerical example and a penicillin fermentation process demonstrate the effectiveness of the proposed algorithm.
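
The role of the temporal feature can be illustrated by appending a normalized batch-time index to the process variables before fitting a standard EM-based Gaussian mixture, so that phases overlapping in the measured variables are still separated in time. The sketch below uses scikit-learn's GaussianMixture on synthetic data and is not the authors' temporally constrained EM:

```python
# Minimal sketch: append a temporal feature to the process variables before fitting a
# Gaussian mixture with EM, so overlapping phases are separated in time.
# Synthetic data; scikit-learn's standard EM, not the authors' constrained variant.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
n = 300
time_idx = np.arange(n) / n                        # normalized batch time
phase = (time_idx > 0.5).astype(int)               # two operating phases, overlapping in X
X = rng.normal(loc=phase[:, None] * 0.5, scale=1.0, size=(n, 2))

features = np.column_stack([X, time_idx])          # process variables + temporal feature
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(features)
print("phase assignment accuracy proxy:",
      max((labels == phase).mean(), (labels != phase).mean()))
```
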
20

Livieris, Ioannis E., Emmanuel Pintelas, Theodore Kotsilieris, Stavros Stavroyiannis, and Panagiotis Pintelas. "Weight-Constrained Neural Networks in Forecasting Tourist Volumes: A Case Study." Electronics 8, no. 9 (September 8, 2019): 1005. http://dx.doi.org/10.3390/electronics8091005.

Abstract:
Tourism forecasting is a significant tool in the tourism industry, providing for careful planning and management of tourism resources. Although accurate tourist volume prediction is a very challenging task, reliable and precise predictions offer the opportunity of gaining major profits. Thus, the development and implementation of more sophisticated and advanced machine learning algorithms can be beneficial for the tourism forecasting industry. In this work, we explore the prediction performance of Weight Constrained Neural Networks (WCNNs) for forecasting tourist arrivals in Greece. WCNNs constitute a new machine learning prediction model that is characterized by the application of box-constraints on the weights of the network. Our experimental results indicate that WCNNs outperform classical neural networks and the state-of-the-art regression models: support vector regression, k-nearest neighbor regression, radial basis function neural network, M5 decision tree and Gaussian processes.
21

Regayre, Leighton A., Lucia Deaconu, Daniel P. Grosvenor, David M. H. Sexton, Christopher Symonds, Tom Langton, Duncan Watson-Paris, et al. "Identifying climate model structural inconsistencies allows for tight constraint of aerosol radiative forcing." Atmospheric Chemistry and Physics 23, no. 15 (August 8, 2023): 8749–68. http://dx.doi.org/10.5194/acp-23-8749-2023.

Abstract:
Aerosol radiative forcing uncertainty affects estimates of climate sensitivity and limits model skill in terms of making climate projections. Efforts to improve the representations of physical processes in climate models, including extensive comparisons with observations, have not significantly constrained the range of possible aerosol forcing values. A far stronger constraint, in particular for the lower (most-negative) bound, can be achieved using global mean energy balance arguments based on observed changes in historical temperature. Here, we show that structural deficiencies in a climate model, revealed as inconsistencies among observationally constrained cloud properties in the model, limit the effectiveness of observational constraint of the uncertain physical processes. We sample the uncertainty in 37 model parameters related to aerosols, clouds, and radiation in a perturbed parameter ensemble of the UK Earth System Model and evaluate 1 million model variants (different parameter settings from Gaussian process emulators) against satellite-derived observations over several cloudy regions. Our analysis of a very large set of model variants exposes model internal inconsistencies that would not be apparent in a small set of model simulations, of an order that may be evaluated during model-tuning efforts. Incorporating observations associated with these inconsistencies weakens any forcing constraint because they require a wider range of parameter values to accommodate conflicting information. We show that, by neglecting variables associated with these inconsistencies, it is possible to reduce the parametric uncertainty in global mean aerosol forcing by more than 50 %, constraining it to a range (around −1.3 to −0.1 W m⁻²) in close agreement with energy balance constraints. Our estimated aerosol forcing range is the maximum feasible constraint using our structurally imperfect model and the chosen observations. Structural model developments targeted at the identified inconsistencies would enable a larger set of observations to be used for constraint, which would then very likely narrow the uncertainty further and possibly alter the central estimate. Such an approach provides a rigorous pathway to improved model realism and reduced uncertainty that has so far not been achieved through the normal model development approach.
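
The emulate-then-constrain workflow can be pictured in a few lines: fit a GP emulator to a modest ensemble of model runs, predict the output for a very large sample of parameter variants, and retain only the variants whose emulated output is consistent with an observation given both observational and emulator uncertainty. The toy sketch below (hypothetical model, observation, and implausibility threshold) mirrors that structure but not the UKESM analysis itself:

```python
# Toy version of "train an emulator, evaluate a million variants, rule some out":
# the model, observation, and implausibility threshold below are all hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def toy_model(theta):                                  # stand-in for an expensive simulator
    return 2.0 * theta[:, 0] - 1.5 * theta[:, 1] ** 2

rng = np.random.default_rng(3)
theta_train = rng.uniform(0, 1, size=(60, 2))          # perturbed-parameter ensemble
emulator = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=[0.3, 0.3]),
    normalize_y=True).fit(theta_train, toy_model(theta_train))

variants = rng.uniform(0, 1, size=(1_000_000, 2))      # one million parameter variants
mu = np.empty(len(variants))
sd = np.empty(len(variants))
for i in range(0, len(variants), 50_000):              # predict in chunks to limit memory
    mu[i:i + 50_000], sd[i:i + 50_000] = emulator.predict(
        variants[i:i + 50_000], return_std=True)

obs, obs_err = 0.4, 0.1                                # hypothetical observation and error
implausibility = np.abs(mu - obs) / np.sqrt(sd**2 + obs_err**2)
retained = variants[implausibility < 3.0]              # variants not ruled out
print("fraction of variants retained:", len(retained) / len(variants))
```
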
22

Candelieri, Antonio, Andrea Ponti, Elisabetta Fersini, Enza Messina, and Francesco Archetti. "Safe Optimal Control of Dynamic Systems: Learning from Experts and Safely Exploring New Policies." Mathematics 11, no. 20 (October 19, 2023): 4347. http://dx.doi.org/10.3390/math11204347.

Abstract:
Many real-life systems are usually controlled through policies replicating experts’ knowledge, typically favouring “safety” at the expense of optimality. Indeed, these control policies are usually aimed at avoiding a system’s disruptions or deviations from a target behaviour, leading to suboptimal performances. This paper proposes a statistical learning approach to exploit the historical safe experience—collected through the application of a safe control policy based on experts’ knowledge— to “safely explore” new and more efficient policies. The basic idea is that performances can be improved by facing a reasonable and quantifiable risk in terms of safety. The proposed approach relies on Gaussian Process regression to obtain a probabilistic model of both a system’s dynamics and performances, depending on the historical safe experience. The new policy consists of solving a constrained optimization problem, with two Gaussian Processes modelling, respectively, the safety constraints and the performance metric (i.e., objective function). As a probabilistic model, Gaussian Process regression provides an estimate of the target variable and the associated uncertainty; this property is crucial for dealing with uncertainty while new policies are safely explored. Another important benefit is that the proposed approach does not require any implementation of an expensive digital twin of the original system. Results on two real-life systems are presented, empirically proving the ability of the approach to improve performances with respect to the initial safe policy without significantly affecting safety.
23

Li, S.-S., W. Zang, A. Udalski, Y. Shvartzvald, D. Huber, C.-U. Lee, T. Sumi, et al. "OGLE-2017-BLG-1186: first application of asteroseismology and Gaussian processes to microlensing." Monthly Notices of the Royal Astronomical Society 488, no. 3 (July 10, 2019): 3308–23. http://dx.doi.org/10.1093/mnras/stz1873.

Abstract:
We present the analysis of the event OGLE-2017-BLG-1186 from the 2017 Spitzer microlensing campaign. This is a remarkable microlensing event because its source is photometrically bright and variable, which makes it possible to perform an asteroseismic analysis using ground-based data. We find that the source star is an oscillating red giant with average time-scale of ∼9 d. The asteroseismic analysis also provides us source properties including the source angular size (∼27 μas) and distance (∼11.5 kpc), which are essential for inferring the properties of the lens. When fitting the light curve, we test the feasibility of Gaussian processes (GPs) in handling the correlated noise caused by the variable source. We find that the parameters from the GP model are generally more loosely constrained than those from the traditional χ² minimization method. We note that this event is the first microlensing system for which asteroseismology and GPs have been used to account for the variable source. With both finite-source effect and microlens parallax measured, we find that the lens is likely a ∼0.045 M⊙ brown dwarf at distance ∼9.0 kpc, or a ∼0.073 M⊙ ultracool dwarf at distance ∼9.8 kpc. Combining the estimated lens properties with a Bayesian analysis using a Galactic model, we find a ∼35 per cent probability for the lens to be a bulge object and ∼65 per cent to be a background disc object.
24

Wenk, Philippe, Gabriele Abbati, Michael A. Osborne, Bernhard Schölkopf, Andreas Krause, and Stefan Bauer. "ODIN: ODE-Informed Regression for Parameter and State Inference in Time-Continuous Dynamical Systems." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6364–71. http://dx.doi.org/10.1609/aaai.v34i04.6106.

Abstract:
Parameter inference in ordinary differential equations is an important problem in many applied sciences and in engineering, especially in a data-scarce setting. In this work, we introduce a novel generative modeling approach based on constrained Gaussian processes and leverage it to build a computationally and data efficient algorithm for state and parameter inference. In an extensive set of experiments, our approach outperforms the current state of the art for parameter inference both in terms of accuracy and computational cost. It also shows promising results for the much more challenging problem of model selection.
25

Ni, Jiancheng, Lei Li, Yutian Wang, Cunmei Ji, and Chunhou Zheng. "MDSCMF: Matrix Decomposition and Similarity-Constrained Matrix Factorization for miRNA–Disease Association Prediction." Genes 13, no. 6 (June 6, 2022): 1021. http://dx.doi.org/10.3390/genes13061021.

Abstract:
MicroRNAs (miRNAs) are small non-coding RNAs that are related to a number of complicated biological processes, and numerous studies have demonstrated that miRNAs are closely associated with many human diseases. In this study, we present a matrix decomposition and similarity-constrained matrix factorization (MDSCMF) to predict potential miRNA–disease associations. First of all, we utilized a matrix decomposition (MD) algorithm to get rid of outliers from the miRNA–disease association matrix. Then, miRNA similarity was determined by utilizing similarity kernel fusion (SKF) to integrate miRNA function similarity and Gaussian interaction profile (GIP) kernel similarity, and disease similarity was determined by utilizing SKF to integrate disease semantic similarity and GIP kernel similarity. Furthermore, we added L2 regularization terms and similarity constraint terms to non-negative matrix factorization to form a similarity-constrained matrix factorization (SCMF) algorithm, which was applied to make prediction. MDSCMF achieved AUC values of 0.9488, 0.9540, and 0.8672 based on fivefold cross-validation (5-CV), global leave-one-out cross-validation (global LOOCV), and local leave-one-out cross-validation (local LOOCV), respectively. Case studies on three common human diseases were also implemented to demonstrate the prediction ability of MDSCMF. All experimental results confirmed that MDSCMF was effective in predicting underlying associations between miRNAs and diseases.
27

Herzog, Florian, Gabriel Dondi, and Hans P. Geering. "Stochastic Model Predictive Control and Portfolio Optimization." International Journal of Theoretical and Applied Finance 10, no. 2 (March 2007): 203–33. http://dx.doi.org/10.1142/s0219024907004196.

Abstract:
This paper proposes a solution method for the discrete-time long-term dynamic portfolio optimization problem with state and asset allocation constraints. We use the ideas of Model Predictive Control (MPC) to solve the constrained stochastic control problem. MPC is a solution technique which was developed to solve constrained optimal control problems for deterministic control applications. MPC solves the optimal control problem with a receding horizon where a series of consecutive open-loop optimal control problems is solved. The aim of this paper is to develop an MPC approach to the problem of long-term portfolio optimization when the expected returns of the risky assets are modeled using a factor model based on stochastic Gaussian processes. We prove that MPC is a suboptimal control strategy for stochastic systems which uses the new information advantageously and thus is better than the pure optimal open-loop control. For the open-loop optimal control optimization, we derive the conditional portfolio distribution and the corresponding conditional portfolio mean and variance. The mean and the variance depend on future decision about the asset allocation. For the dynamic portfolio optimization problem, we consider constraints on the asset allocation as well as probabilistic constraints on the attainable values of the portfolio wealth. We discuss two different objectives, a classical mean–variance objective and the objective to maximize the probability of exceeding a predetermined value of the portfolio. The dynamic portfolio optimization problem is stated, and the solution via MPC is explained in detail. The results are then illustrated in a case study.
28

Li, Kaibin, Zhiping Peng, Delong Cui, and Qirui Li. "SLA-DQTS: SLA Constrained Adaptive Online Task Scheduling Based on DDQN in Cloud Computing." Applied Sciences 11, no. 20 (October 9, 2021): 9360. http://dx.doi.org/10.3390/app11209360.

Abstract:
Task scheduling is key to performance optimization and resource management in cloud computing systems. Because of its complexity, it has been defined as an NP problem. We introduce an online scheme to solve the problem of task scheduling under a dynamic load in the cloud environment. After analyzing the process, we propose a service level agreement constrained adaptive online task scheduling algorithm based on double deep Q-learning (SLA-DQTS) to reduce the makespan, cost, and average overdue time under the constraints of virtual machine (VM) resources and deadlines. In the algorithm, we prevent the change of the model input dimension with the number of VMs by taking the Gaussian distribution of related parameters as a part of the state space. Through the design of the reward function, the model can be optimized for different goals and task loads. We evaluate the performance of the algorithm by comparing it with three heuristic algorithms (Min-Min, random, and round robin) under different loads. The results show that the algorithm in this paper can achieve similar or better results than the comparison algorithms at a lower cost.
29

Mielczarek, Jakub, and Michał Kamionka. "Smoothed Quantum Fluctuations and CMB Observations." International Journal of Modern Physics D 21, no. 10 (October 2012): 1250080. http://dx.doi.org/10.1142/s0218271812500800.

Abstract:
In this paper, we investigate the power spectrum of a smoothed scalar field. The smoothing leads to regularization of the UV divergences and can be related to the internal structure of the considered field or the space itself. We apply Gaussian smoothing to the quantum fluctuations generated during the phase of cosmic inflation. We study whether this effect can be probed observationally and conclude that the modifications of the power spectrum due to the smoothing on the Planck scale are negligible and far beyond the observational abilities. Subsequently, we investigate whether smoothing in any other form can be probed observationally. We introduce a phenomenological smoothing factor e^(−k²σ²) to the inflationary spectrum and investigate its effects on the spectrum of CMB anisotropies and polarization. We show that smoothing can lead to suppression of high multipoles in the spectrum of the CMB. Based on seven years of observations of the WMAP satellite, we indicate that the present scale of high-multipole suppression is constrained by σ < 3.19 Mpc (95% CL). This corresponds to the constraint σ < 100 μm at the end of inflation. Although this value is far above the Planck scale, other smoothing processes, such as decoherence or diffusion of primordial perturbations, can possibly be studied with this constraint.
30

Ramalingam, Gomathi, Selvakumaran Selvaraj, Visumathi James, Senthil Kumar Saravanaperumal, and Buvaneswari Mohanram. "Segmentation of Medical Images with Adaptable Multifunctional Discretization Bayesian Neural Networks and Gaussian Operation." International journal of electrical and computer engineering systems 14, no. 4 (April 26, 2023): 381–92. http://dx.doi.org/10.32985/ijeces.14.4.2.

Abstract:
Bayesian statistics is incorporated into a neural network to create a Bayesian neural network (BNN) that adds posterior inference aimed at preventing overfitting. BNNs are frequently used in medical image segmentation because they provide a stochastic viewpoint of segmentation approaches by producing a posterior probability with conventional limitations and allowing the depiction of uncertainty over following distributions. However, the actual efficacy of BNNs is constrained by the difficulty in selecting expressive discretization and accepting suitable following disseminations in a higher-order domain. A functional discretization BNN using Gaussian processes (GPs) that analyzes medical image segmentation is proposed in this paper. Here, a discretization inference has been assumed in the functional domain by considering the former and dynamic consequent distributions to be GPs. An upsampling operator that utilizes a content-based feature extraction has been proposed. This is an adaptive method for extracting features after feature mapping is used in conjunction with the functional evidence lower bound and weights. This results in a loss-aware segmentation network that achieves an F1-score of 91.54%, accuracy of 90.24%, specificity of 88.54%, and precision of 80.24%.
31

Shojaie, Ali, and Emily B. Fox. "Granger Causality: A Review and Recent Advances." Annual Review of Statistics and Its Application 9, no. 1 (March 7, 2022): 289–319. http://dx.doi.org/10.1146/annurev-statistics-040120-010930.

Abstract:
Introduced more than a half-century ago, Granger causality has become a popular tool for analyzing time series data in many application domains, from economics and finance to genomics and neuroscience. Despite this popularity, the validity of this framework for inferring causal relationships among time series has remained the topic of continuous debate. Moreover, while the original definition was general, limitations in computational tools have constrained the applications of Granger causality to primarily simple bivariate vector autoregressive processes. Starting with a review of early developments and debates, this article discusses recent advances that address various shortcomings of the earlier approaches, from models for high-dimensional time series to more recent developments that account for nonlinear and non-Gaussian observations and allow for subsampled and mixed-frequency time series.
32

Faria, J. P., V. Adibekyan, E. M. Amazo-Gómez, S. C. C. Barros, J. D. Camacho, O. Demangeon, P. Figueira, et al. "Decoding the radial velocity variations of HD 41248 with ESPRESSO." Astronomy & Astrophysics 635 (March 2020): A13. http://dx.doi.org/10.1051/0004-6361/201936389.

Abstract:
Context. Twenty-four years after the discoveries of the first exoplanets, the radial-velocity (RV) method is still one of the most productive techniques to detect and confirm exoplanets. But stellar magnetic activity can induce RV variations large enough to make it difficult to disentangle planet signals from the stellar noise. In this context, HD 41248 is an interesting planet-host candidate, with RV observations plagued by activity-induced signals. Aims. We report on ESPRESSO observations of HD 41248 and analyse them together with previous observations from HARPS with the goal of evaluating the presence of orbiting planets. Methods. Using different noise models within a general Bayesian framework designed for planet detection in RV data, we test the significance of the various signals present in the HD 41248 dataset. We use Gaussian processes as well as a first-order moving average component to try to correct for activity-induced signals. At the same time, we analyse photometry from the TESS mission, searching for transits and rotational modulation in the light curve. Results. The number of significantly detected Keplerian signals depends on the noise model employed, which can range from 0 with the Gaussian process model to 3 with a white noise model. We find that the Gaussian process alone can explain the RV data while allowing for the stellar rotation period and active region evolution timescale to be constrained. The rotation period estimated from the RVs agrees with the value determined from the TESS light curve. Conclusions. Based on the data that is currently available, we conclude that the RV variations of HD 41248 can be explained by stellar activity (using the Gaussian process model) in line with the evidence from activity indicators and the TESS photometry.
33

Strocchi, Marina, Stefano Longobardi, Christoph M. Augustin, Matthias A. F. Gsell, Argyrios Petras, Christopher A. Rinaldi, Edward J. Vigmond, et al. "Cell to whole organ global sensitivity analysis on a four-chamber heart electromechanics model using Gaussian processes emulators." PLOS Computational Biology 19, no. 6 (June 26, 2023): e1011257. http://dx.doi.org/10.1371/journal.pcbi.1011257.

Abstract:
Cardiac pump function arises from a series of highly orchestrated events across multiple scales. Computational electromechanics can encode these events in physics-constrained models. However, the large number of parameters in these models has made the systematic study of the link between cellular, tissue, and organ scale parameters to whole heart physiology challenging. A patient-specific anatomical heart model, or digital twin, was created. Cellular ionic dynamics and contraction were simulated with the Courtemanche-Land and the ToR-ORd-Land models for the atria and the ventricles, respectively. Whole heart contraction was coupled with the circulatory system, simulated with CircAdapt, while accounting for the effect of the pericardium on cardiac motion. The four-chamber electromechanics framework resulted in 117 parameters of interest. The model was broken into five hierarchical sub-models: tissue electrophysiology, ToR-ORd-Land model, Courtemanche-Land model, passive mechanics and CircAdapt. For each sub-model, we trained Gaussian processes emulators (GPEs) that were then used to perform a global sensitivity analysis (GSA) to retain parameters explaining 90% of the total sensitivity for subsequent analysis. We identified 45 out of 117 parameters that were important for whole heart function. We performed a GSA over these 45 parameters and identified the systemic and pulmonary peripheral resistance as being critical parameters for a wide range of volumetric and hemodynamic cardiac indexes across all four chambers. We have shown that GPEs provide a robust method for mapping between cellular properties and clinical measurements. This could be applied to identify parameters that can be calibrated in patient-specific models or digital twins, and to link cellular function to clinical indexes.
34

Owen, Nathan E., and Lorena Liuzzo. "Impact of land use on water resources via a Gaussian process emulator with dimension reduction." Journal of Hydroinformatics 21, no. 3 (March 19, 2019): 411–26. http://dx.doi.org/10.2166/hydro.2019.067.

Abstract:
The replacement of models by emulators is becoming a frequent approach in environmental science due to the reduction of computational time, and different approaches exist in the water resources modelling literature. In this work, an emulator to mimic a hydrological model at catchment scale is proposed, taking into account the effect of land use on the hydrological processes involved in water balance. The proposed approach is novel for its combination of techniques. The dimension of the temporal model output is reduced via principal component analysis, and this reduced output is approximated using Gaussian process emulators built on a conditioned Latin hypercube design to reflect constrained land use inputs. Uncertainty from both the model approximation and the dimension reduction is propagated back to the space of the original output. The emulator has been applied to simulate river flow in a rural river basin located in south west England, the Frome at East Stoke Total, but the methodology is general. Results showed that the use of the emulator for water resources assessment at catchment scale is an effective approach, providing accurate estimates of the model output as a function of land use inputs, for a reduction of the computational burden.
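
The combination described here (dimension reduction of the time-series output followed by GP emulation of the retained components) can be sketched directly with scikit-learn: PCA compresses the output, one GP is fitted per principal component, and predictions are mapped back to the original space. Everything in the snippet (inputs, "flow" series, kernel settings) is a hypothetical stand-in for the catchment model:

```python
# Sketch of PCA-based dimension reduction plus per-component GP emulation of a
# time-series model output (toy data, not the hydrological model in the paper).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)
n_runs, n_times = 80, 365
land_use = rng.uniform(0, 1, size=(n_runs, 3))            # constrained land-use inputs
t = np.linspace(0, 2 * np.pi, n_times)
flow = (land_use[:, [0]] * np.sin(t) + land_use[:, [1]] * np.cos(2 * t)
        + 0.05 * rng.normal(size=(n_runs, n_times)))      # simulated daily river flow

pca = PCA(n_components=0.99, svd_solver="full").fit(flow)  # keep 99% of output variance
scores = pca.transform(flow)

emulators = [GaussianProcessRegressor(ConstantKernel(1.0) * RBF([0.3, 0.3, 0.3]),
                                      alpha=1e-6).fit(land_use, scores[:, j])
             for j in range(scores.shape[1])]

def emulate(x_new):
    """Predict the full time series for new land-use inputs via the reduced space."""
    comp = np.column_stack([gp.predict(x_new) for gp in emulators])
    return pca.inverse_transform(comp)

x_new = np.array([[0.2, 0.7, 0.5]])
print("emulated flow series shape:", emulate(x_new).shape)  # (1, 365)
```
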
35

Meyer, Antoine D., David A. van Dyk, Hyungsuk Tak, and Aneta Siemiginowska. "TD-CARMA: Painless, Accurate, and Scalable Estimates of Gravitational Lens Time Delays with Flexible CARMA Processes." Astrophysical Journal 950, no. 1 (June 1, 2023): 37. http://dx.doi.org/10.3847/1538-4357/acbea1.

Abstract:
Cosmological parameters encoding our understanding of the expansion history of the universe can be constrained by the accurate estimation of time delays arising in gravitationally lensed systems. We propose TD-CARMA, a Bayesian method to estimate cosmological time delays by modeling observed and irregularly sampled light curves as realizations of a continuous auto-regressive moving average (CARMA) process. Our model accounts for heteroskedastic measurement errors and microlensing, an additional source of independent extrinsic long-term variability in the source brightness. The semiseparable structure of the CARMA covariance matrix allows for fast and scalable likelihood computation using Gaussian process modeling. We obtain a sample from the joint posterior distribution of the model parameters using a nested sampling approach. This allows for “painless” Bayesian computation, dealing with the expected multimodality of the posterior distribution in a straightforward manner and not requiring the specification of starting values or an initial guess for the time delay, unlike existing methods. In addition, the proposed sampling procedure automatically evaluates the Bayesian evidence, allowing us to perform principled Bayesian model selection. TD-CARMA is parsimonious, and typically includes no more than a dozen unknown parameters. We apply TD-CARMA to six doubly lensed quasars HS2209+1914, SDSS J1001+5027, SDSS J1206+4332, SDSS J1515+1511, SDSS J1455+1447, and SDSS J1349+1227, estimating their time delays as −21.96 ± 1.448, 120.93 ± 1.015, 111.51 ± 1.452, 210.80 ± 2.18, 45.36 ± 1.93, and 432.05 ± 1.950, respectively. These estimates are consistent with those derived in the relevant literature, but are typically two to four times more precise.
36

Wang, Yufei, Renjie Wan, Wenhan Yang, Haoliang Li, Lap-Pui Chau, and Alex Kot. "Low-Light Image Enhancement with Normalizing Flow." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2604–12. http://dx.doi.org/10.1609/aaai.v36i3.20162.

Abstract:
To enhance low-light images to normally-exposed ones is highly ill-posed, namely that the mapping relationship between them is one-to-many. Previous works based on the pixel-wise reconstruction losses and deterministic processes fail to capture the complex conditional distribution of normally exposed images, which results in improper brightness, residual noise, and artifacts. In this paper, we investigate modeling this one-to-many relationship via a proposed normalizing flow model: an invertible network takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution. In this way, the conditional distribution of the normally exposed images can be well modeled, and the enhancement process, i.e., the other inference direction of the invertible network, is equivalent to being constrained by a loss function that better describes the manifold structure of natural images during the training. The experimental results on the existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise and artifact, and richer colors.
37

Casaburo, Alessandro, Dario Magliacano, Giuseppe Petrone, Francesco Franco, and Sergio De Rosa. "Gaussian-Based Machine Learning Algorithm for the Design and Characterization of a Porous Meta-Material for Acoustic Applications." Applied Sciences 12, no. 1 (December 30, 2021): 333. http://dx.doi.org/10.3390/app12010333.

Abstract:
The scope of this work is to consolidate research dealing with the vibroacoustics of periodic media. This investigation aims at developing and validating tools for the design and characterization of global vibroacoustic treatments based on foam cores with embedded periodic patterns, which allow passive control of acoustic paths in layered concepts. Firstly, a numerical test campaign is carried out by considering some perfectly rigid inclusions in a 3D-modeled porous structure; this causes the excitation of additional acoustic modes due to the periodic nature of the meta-core itself. Then, through the use of the Delany–Bazley–Miki equivalent fluid model, some design guidelines are provided in order to predict several possible sets of characteristic parameters (that is unit cell dimension and foam airflow resistivity) that, constrained by the imposition of the total thickness of the acoustic package, may satisfy the target functions (namely, the frequency at which the first Transmission Loss (TL) peak appears, together with its amplitude). Furthermore, when the Johnson–Champoux–Allard model is considered, a characterization task is performed, since the meta-material description is used in order to determine its response in terms of resonance frequency and the TL increase at such a frequency. Results are obtained through the implementation of machine learning algorithms, which may constitute a good basis in order to perform preliminary design considerations that could be interesting for further generalizations.
38

Vacher, Jonathan, Andrew Isaac Meso, Laurent U. Perrinet, and Gabriel Peyré. "Bayesian Modeling of Motion Perception Using Dynamical Stochastic Textures." Neural Computation 30, no. 12 (December 2018): 3355–92. http://dx.doi.org/10.1162/neco_a_01142.

Abstract:
A common practice to account for psychophysical biases in vision is to frame them as consequences of a dynamic process relying on optimal inference with respect to a generative model. The study presented here details the complete formulation of such a generative model intended to probe visual motion perception with a dynamic texture model. It is derived in a set of axiomatic steps constrained by biological plausibility. We extend previous contributions by detailing three equivalent formulations of this texture model. First, the composite dynamic textures are constructed by the random aggregation of warped patterns, which can be viewed as three-dimensional gaussian fields. Second, these textures are cast as solutions to a stochastic partial differential equation (sPDE). This essential step enables real-time, on-the-fly texture synthesis using time-discretized autoregressive processes. It also allows for the derivation of a local motion-energy model, which corresponds to the log likelihood of the probability density. The log likelihoods are essential for the construction of a Bayesian inference framework. We use the dynamic texture model to psychophysically probe speed perception in humans using zoom-like changes in the spatial frequency content of the stimulus. The human data replicate previous findings showing perceived speed to be positively biased by spatial frequency increments. A Bayesian observer who combines a gaussian likelihood centered at the true speed and a spatial frequency dependent width with a “slow-speed prior” successfully accounts for the perceptual bias. More precisely, the bias arises from a decrease in the observer's likelihood width estimated from the experiments as the spatial frequency increases. Such a trend is compatible with the trend of the dynamic texture likelihood width.
39

Shang, Zhenhong, Ziqi He, and Runxin Li. "A Coronal Loop Automatic Detection Method." Symmetry 16, no. 6 (June 6, 2024): 704. http://dx.doi.org/10.3390/sym16060704.

Abstract:
Coronal loops are bright, filamentary structures formed by thermal plasmas constrained by the sun’s magnetic field. Studying coronal loops provides insights into magnetic fields and their role in coronal heating processes. We propose a new automatic coronal loop detection method to optimize the problem of existing algorithms in detecting low-intensity coronal loops. Our method employs a line-Gaussian filter to enhance the contrast between coronal loops and background pixels, facilitating the detection of low-intensity ones. Following the detection of coronal loops, each loop is extracted using a method based on approximate local direction. Compared with the classical automatic detection method, Oriented Coronal Curved Loop Tracing (OCCULT), and its improved version, OCCULT-2, the proposed method demonstrates superior accuracy and completeness in loop detection. Furthermore, testing with images from the Transition Region and Coronal Explorer (TRACE) at 173 Å, the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory (SDO) at 193 Å, and the High-Resolution Coronal Imager (Hi-C) at 193 Å and 172 Å confirms the robust generalization capabilities of our method. Statistical analysis of the cross-section width of coronal loops shows that most of the loop widths are resolved in Hi-C images.
40

Pollard, Oliver G., Natasha L. M. Barlow, Lauren J. Gregoire, Natalya Gomez, Víctor Cartelle, Jeremy C. Ely, and Lachlan C. Astfalck. "Quantifying the uncertainty in the Eurasian ice-sheet geometry at the Penultimate Glacial Maximum (Marine Isotope Stage 6)." Cryosphere 17, no. 11 (November 10, 2023): 4751–77. http://dx.doi.org/10.5194/tc-17-4751-2023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. The North Sea Last Interglacial sea level is sensitive to the fingerprint of mass loss from polar ice sheets. However, the signal is complicated by the influence of glacial isostatic adjustment driven by Penultimate Glacial Period ice-sheet changes, and yet these ice-sheet geometries remain significantly uncertain. Here, we produce new reconstructions of the Eurasian ice sheet during the Penultimate Glacial Maximum (PGM) by employing large ensemble experiments from a simple ice-sheet model that depends solely on basal shear stress, ice extent, and topography. To explore the range of uncertainty in possible ice geometries, we use a parameterised shear-stress map as input that has been developed to incorporate bedrock characteristics and the influence of ice-sheet basal processes. We perform Bayesian uncertainty quantification, utilising Gaussian process emulation, to calibrate against global ice-sheet reconstructions of the Last Deglaciation and rule out combinations of input parameters that produce unrealistic ice sheets. The refined parameter space is then applied to the PGM to create an ensemble of constrained 3D Eurasian ice-sheet geometries. Our reconstructed PGM Eurasian ice-sheet volume is 48±8 m sea-level equivalent (SLE). We find that the Barents–Kara Sea region displays both the largest mean volume and volume uncertainty of 24±8 m SLE while the British–Irish sector volume of 1.7±0.2 m SLE is the smallest. Our new workflow may be applied to other locations and periods where ice-sheet histories have limited empirical data.
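
A minimal sketch of the emulate-then-rule-out step described above, assuming a toy stand-in simulator and invented thresholds: a Gaussian-process emulator is trained on an ensemble design, and candidate parameter settings whose emulated output is implausibly far from a reference value are discarded, in the spirit of history matching. This is not the authors' workflow or code.

```python
# Emulate an expensive simulator with a GP and rule out implausible parameters.
# The toy "simulator", reference value, and cutoff are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def simulator(theta):                              # stand-in for the ice-sheet model
    return 40.0 + 10.0 * np.sin(theta[:, 0]) * theta[:, 1]

X_train = rng.uniform(0.0, 1.0, size=(40, 2))      # ensemble design
y_train = simulator(X_train)

gp = GaussianProcessRegressor(RBF(0.3) + WhiteKernel(1e-3), normalize_y=True)
gp.fit(X_train, y_train)

# Keep candidates whose emulated output lies within 3 combined standard
# deviations of the reference reconstruction.
X_cand = rng.uniform(0.0, 1.0, size=(10_000, 2))
mu, sd = gp.predict(X_cand, return_std=True)
reference, obs_sd = 45.0, 2.0
implausibility = np.abs(mu - reference) / np.sqrt(sd**2 + obs_sd**2)
not_ruled_out = X_cand[implausibility < 3.0]
print(len(not_ruled_out), "of", len(X_cand), "candidates retained")
```
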
41

Chang, W., P. J. Applegate, M. Haran, and K. Keller. "Probabilistic calibration of a Greenland Ice Sheet model using spatially resolved synthetic observations: toward projections of ice mass loss with uncertainties." Geoscientific Model Development 7, no. 5 (September 5, 2014): 1933–43. http://dx.doi.org/10.5194/gmd-7-1933-2014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. Computer models of ice sheet behavior are important tools for projecting future sea level rise. The simulated modern ice sheets generated by these models differ markedly as input parameters are varied. To ensure accurate ice sheet mass loss projections, these parameters must be constrained using observational data. Which model parameter combinations make sense, given observations? Our method assigns probabilities to parameter combinations based on how well the model reproduces the Greenland Ice Sheet profile. We improve on the previous state of the art by accounting for spatial information and by carefully sampling the full range of realistic parameter combinations, using statistically rigorous methods. Specifically, we estimate the joint posterior probability density function of model parameters using Gaussian process-based emulation and calibration. This method is an important step toward calibrated probabilistic projections of ice sheet contributions to sea level rise, in that it uses data–model fusion to learn about parameter values. This information can, in turn, be used to make projections while taking into account various sources of uncertainty, including parametric uncertainty, data–model discrepancy, and spatial correlation in the error structure. We demonstrate the utility of our method using a perfect model experiment, which shows that many different parameter combinations can generate similar modern ice sheet profiles. This result suggests that the large divergence of projections from different ice sheet models is partly due to parametric uncertainty. Moreover, our method enables insight into ice sheet processes represented by parameter interactions in the model.
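
The sketch below illustrates only the generic emulation-and-calibration idea outlined above: a GP emulator replaces the simulator inside a Gaussian likelihood, and the joint posterior over two parameters is evaluated on a grid under a flat prior. The simulator, synthetic observation, and error scales are invented for illustration and are not the paper's model or data.

```python
# Toy emulation-and-calibration sketch: GP emulator inside a Gaussian likelihood.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
simulator = lambda t: np.sin(3.0 * t[:, 0]) + t[:, 1] ** 2     # toy stand-in

X_design = rng.uniform(0.0, 1.0, size=(50, 2))
gp = GaussianProcessRegressor(RBF(0.2), normalize_y=True).fit(X_design, simulator(X_design))

y_obs, obs_sd = simulator(np.array([[0.4, 0.7]]))[0], 0.05     # synthetic observation

g1, g2 = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
grid = np.column_stack([g1.ravel(), g2.ravel()])
mu, sd = gp.predict(grid, return_std=True)
log_post = -0.5 * (y_obs - mu) ** 2 / (obs_sd**2 + sd**2)      # flat prior
post = np.exp(log_post - log_post.max()).reshape(g1.shape)
# Broad ridges in 'post' correspond to the parameter equifinality discussed above:
# many parameter combinations reproduce the same observation equally well.
```
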
42

Chang, W., P. J. Applegate, M. Haran, and K. Keller. "Probabilistic calibration of a Greenland Ice Sheet model using spatially-resolved synthetic observations: toward projections of ice mass loss with uncertainties." Geoscientific Model Development Discussions 7, no. 2 (March 25, 2014): 1905–31. http://dx.doi.org/10.5194/gmdd-7-1905-2014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. Computer models of ice sheet behavior are important tools for projecting future sea level rise. The simulated modern ice sheets generated by these models differ markedly as input parameters are varied. To ensure accurate ice sheet mass loss projections, these parameters must be constrained using observational data. Which model parameter combinations make sense, given observations? Our method assigns probabilities to parameter combinations based on how well the model reproduces the Greenland Ice Sheet profile. We improve on the previous state of the art by accounting for spatial information, and by carefully sampling the full range of realistic parameter combinations, using statistically rigorous methods. Specifically, we estimate the joint posterior probability density function of model parameters using Gaussian process-based emulation and calibration. This method is an important step toward probabilistic projections of ice sheet contributions to sea level rise, in that it uses observational data to learn about parameter values. This information can, in turn, be used to make projections while taking into account various sources of uncertainty, including parametric uncertainty, data–model discrepancy, and spatial correlation in the error structure. We demonstrate the utility of our method using a perfect model experiment, which shows that many different parameter combinations can generate similar modern ice sheet profiles. This result suggests that the large divergence of projections from different ice sheet models is partly due to parametric uncertainty. Moreover, our method enables insight into ice sheet processes represented by parameter interactions in the model.
43

Smit, Merijn, Andrej Dvornik, Mario Radovich, Konrad Kuijken, Matteo Maturi, Lauro Moscardini, and Mauro Sereno. "AMICO galaxy clusters in KiDS-DR3: The impact of estimator statistics on the luminosity-mass scaling relation." Astronomy & Astrophysics 659 (March 2022): A195. http://dx.doi.org/10.1051/0004-6361/202141626.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Context. As modern-day precision cosmology aims for statistical uncertainties of the percent level or lower, it becomes increasingly important to reconsider estimator assumptions at each step of the process, along with their consequences on the statistical variability of the scientific results. Aims. We compare L1 regression statistics to the weighted mean, the canonical L2 method based on Gaussian assumptions, to infer the weak gravitational shear signal from a catalog of background ellipticity measurements around a sample of clusters, which has been a standard step in the processes of many recent analyses. Methods. We use the shape measurements of background sources around 6925 AMICO clusters detected in the KiDS third data release. We investigate the robustness of our results and the dependence of uncertainties on the signal-to-noise ratios of the background source detections. Using a halo model approach, we derive lensing masses from the estimated excess surface density profiles. Results. The highly significant shear signal allows us to study the scaling relation between the r-band cluster luminosity, L200, and the derived lensing mass, M200. We show the results of the scaling relations derived in 13 bins in L200, with a tightly constrained power-law slope of ∼1.24 ± 0.08. We observe a small, but significant, relative bias of a few percent in the recovered excess surface density profiles between the two regression methods, which translates to a 1σ difference in M200. The efficiency of L1 is at least that of the weighted mean and increases with higher signal-to-noise shape measurements. Conclusions. Our results indicate the relevance of optimizing the estimator for inferring the gravitational shear from a distribution of background ellipticities. The interpretation of measured relative biases can be gauged by deeper observations, and the increased computation times remain feasible.
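
The estimator comparison at the heart of the paper can be illustrated on synthetic data, assuming invented shape-noise and outlier levels: the canonical weighted mean (L2) is compared with an L1 location estimate obtained by minimising the weighted sum of absolute deviations. None of the numbers below come from the KiDS catalog.

```python
# Illustrative comparison of weighted-mean (L2) and L1 location estimators on
# synthetic, heavy-tailed "ellipticity" data.  Data and weights are invented.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
signal = 0.02                                             # true shear-like signal
e = rng.normal(signal, 0.25, size=5000)                   # intrinsic shape noise
e[:100] += rng.standard_cauchy(100)                       # a few catastrophic outliers
w = 1.0 / rng.uniform(0.2, 0.3, size=e.size) ** 2         # inverse-variance weights

l2_estimate = np.average(e, weights=w)                               # weighted mean
l1_estimate = minimize_scalar(lambda m: np.sum(w * np.abs(e - m))).x  # weighted L1

print(f"L2 (weighted mean):       {l2_estimate:+.4f}")
print(f"L1 (weighted median-like): {l1_estimate:+.4f}")
```
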
44

Dur, Tolga Hasan, Rossella Arcucci, Laetitia Mottet, Miguel Molina Solana, Christopher Pain, and Yi-Ke Guo. "Weak Constraint Gaussian Processes for optimal sensor placement." Journal of Computational Science 42 (April 2020): 101110. http://dx.doi.org/10.1016/j.jocs.2020.101110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Furmanek, Mariusz P., Madhur Mangalam, Damian G. Kelty-Stephen, and Grzegorz Juras. "Postural constraints recruit shorter-timescale processes into the non-Gaussian cascade processes." Neuroscience Letters 741 (January 2021): 135508. http://dx.doi.org/10.1016/j.neulet.2020.135508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Bachoc, François, Agnès Lagnoux, and Andrés F. López-Lopera. "Maximum likelihood estimation for Gaussian processes under inequality constraints." Electronic Journal of Statistics 13, no. 2 (2019): 2921–69. http://dx.doi.org/10.1214/19-ejs1587.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Williams, M., A. D. Richardson, M. Reichstein, P. C. Stoy, P. Peylin, H. Verbeeck, N. Carvalhais, et al. "Improving land surface models with FLUXNET data." Biogeosciences Discussions 6, no. 2 (March 5, 2009): 2785–835. http://dx.doi.org/10.5194/bgd-6-2785-2009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. There is a growing consensus that land surface models (LSMs) that simulate terrestrial biosphere exchanges of matter and energy must be better constrained with data to quantify and address their uncertainties. FLUXNET, an international network of sites that measure the land surface exchanges of carbon, water and energy using the eddy covariance technique, is a prime source of data for model improvement. Here we outline a multi-stage process for fusing LSMs with FLUXNET data to generate better models with quantifiable uncertainty. First, we describe FLUXNET data availability, and its random and systematic biases. We then introduce methods for assessing LSM model runs against FLUXNET observations in temporal and spatial domains. These assessments are a prelude to more formal model-data fusion (MDF). MDF links model to data, based on error weightings. In theory, MDF produces optimal analyses of the modelled system, but there are practical problems. We first discuss how to set model errors and initial conditions. In both cases incorrect assumptions will affect the outcome of the MDF. We then review the problem of equifinality, whereby multiple combinations of parameters can produce similar model output. Fusing multiple independent data provides a means to limit equifinality. We then show how parameter probability density functions (PDFs) from MDF can be used to interpret model process validity, and to propagate errors into model outputs. Posterior parameter distributions are a useful way to assess the success of MDF, combined with a determination of whether model residuals are Gaussian. If the MDF scheme provides evidence for temporal variation in parameters, then that is indicative of a critical missing dynamic process. A comparison of parameter PDFs generated with the same model from multiple FLUXNET sites can provide insights into the concept and validity of plant functional types (PFT) – we would expect similar parameter estimates among sites sharing a single PFT. We conclude by identifying five major model-data fusion challenges for the FLUXNET and LSM communities: 1) to determine appropriate use of current data and to explore the information gained in using longer time series; 2) to avoid confounding effects of missing process representation on parameter estimation; 3) to assimilate more data types, including those from earth observation; 4) to fully quantify uncertainties arising from data bias, model structure, and initial conditions problems; and 5) to carefully test current model concepts (e.g. PFTs) and guide development of new concepts.
48

Williams, M., A. D. Richardson, M. Reichstein, P. C. Stoy, P. Peylin, H. Verbeeck, N. Carvalhais, et al. "Improving land surface models with FLUXNET data." Biogeosciences 6, no. 7 (July 30, 2009): 1341–59. http://dx.doi.org/10.5194/bg-6-1341-2009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. There is a growing consensus that land surface models (LSMs) that simulate terrestrial biosphere exchanges of matter and energy must be better constrained with data to quantify and address their uncertainties. FLUXNET, an international network of sites that measure the land surface exchanges of carbon, water and energy using the eddy covariance technique, is a prime source of data for model improvement. Here we outline a multi-stage process for "fusing" (i.e. linking) LSMs with FLUXNET data to generate better models with quantifiable uncertainty. First, we describe FLUXNET data availability, and its random and systematic biases. We then introduce methods for assessing LSM model runs against FLUXNET observations in temporal and spatial domains. These assessments are a prelude to more formal model-data fusion (MDF). MDF links model to data, based on error weightings. In theory, MDF produces optimal analyses of the modelled system, but there are practical problems. We first discuss how to set model errors and initial conditions. In both cases incorrect assumptions will affect the outcome of the MDF. We then review the problem of equifinality, whereby multiple combinations of parameters can produce similar model output. Fusing multiple independent and orthogonal data provides a means to limit equifinality. We then show how parameter probability density functions (PDFs) from MDF can be used to interpret model validity, and to propagate errors into model outputs. Posterior parameter distributions are a useful way to assess the success of MDF, combined with a determination of whether model residuals are Gaussian. If the MDF scheme provides evidence for temporal variation in parameters, then that is indicative of a critical missing dynamic process. A comparison of parameter PDFs generated with the same model from multiple FLUXNET sites can provide insights into the concept and validity of plant functional types (PFT) – we would expect similar parameter estimates among sites sharing a single PFT. We conclude by identifying five major model-data fusion challenges for the FLUXNET and LSM communities: (1) to determine appropriate use of current data and to explore the information gained in using longer time series; (2) to avoid confounding effects of missing process representation on parameter estimation; (3) to assimilate more data types, including those from earth observation; (4) to fully quantify uncertainties arising from data bias, model structure, and initial conditions problems; and (5) to carefully test current model concepts (e.g. PFTs) and guide development of new concepts.
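
A minimal sketch of an error-weighted model-data fusion cost of the kind outlined above, with a trivial linear "model" and synthetic observations standing in for any real LSM or FLUXNET data, followed by a simple Gaussianity check on the residuals. Parameter names, error scales, and the driver series are all assumptions for illustration.

```python
# Toy model-data fusion: weighted least-squares misfit plus a residual check.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import shapiro

def cost(params, drivers, obs, obs_sd):
    """Weighted least-squares misfit between modelled and observed fluxes."""
    modelled = params[0] * drivers + params[1]        # trivial stand-in "LSM"
    residuals = (modelled - obs) / obs_sd             # error weighting
    return 0.5 * np.sum(residuals ** 2)

rng = np.random.default_rng(3)
drivers = rng.uniform(0.0, 25.0, size=365)            # e.g. daily temperature
obs = 0.4 * drivers + 1.0 + rng.normal(0.0, 0.5, size=365)
obs_sd = 0.5

fit = minimize(cost, x0=[0.1, 0.0], args=(drivers, obs, obs_sd))
resid = (fit.x[0] * drivers + fit.x[1] - obs) / obs_sd
print("best-fit parameters:", fit.x)
print("Shapiro-Wilk p-value for Gaussian residuals:", shapiro(resid).pvalue)
```
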
49

McClintock, Thomas, and Eduardo Rozo. "Reconstructing probability distributions with Gaussian processes." Monthly Notices of the Royal Astronomical Society 489, no. 3 (September 2, 2019): 4155–60. http://dx.doi.org/10.1093/mnras/stz2426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
ABSTRACT Modern cosmological analyses constrain physical parameters using Markov Chain Monte Carlo (MCMC) or similar sampling techniques. Oftentimes, these techniques are computationally expensive to run and require up to thousands of CPU hours to complete. Here we present a method for reconstructing the log-probability distributions of completed experiments from an existing chain (or any set of posterior samples). The reconstruction is performed using Gaussian process regression for interpolating the log-probability. This allows for easy resampling, importance sampling, marginalization, testing different samplers, investigating chain convergence, and other operations. As an example use case, we reconstruct the posterior distribution of the most recent Planck 2018 analysis. We then resample the posterior, and generate a new chain with 40 times as many points in only 30 min. Our likelihood reconstruction tool is made publicly available online.
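
The core idea can be sketched with scikit-learn rather than the authors' released tool: fit a Gaussian process to (sample, log-probability) pairs from an existing chain and use the surrogate to evaluate the log-posterior at new points. The toy target density below is an assumption for illustration only.

```python
# Interpolate the log-posterior of an existing chain with a GP surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

def log_prob(x):                                      # toy 2-D "posterior"
    return -0.5 * np.sum(x**2 / np.array([1.0, 0.25]), axis=-1)

samples = rng.normal(0.0, [1.0, 0.5], size=(300, 2))  # pretend these came from MCMC
gp = GaussianProcessRegressor(ConstantKernel() * RBF([1.0, 1.0]), normalize_y=True)
gp.fit(samples, log_prob(samples))

# The surrogate can now stand in for the expensive likelihood, e.g. inside a
# new sampler or for importance reweighting.
test = rng.normal(0.0, [1.0, 0.5], size=(5, 2))
print(np.c_[log_prob(test), gp.predict(test)])        # true vs reconstructed log-prob
```
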
50

Roque, Luis, Luis Torgo, and Carlos Soares. "Automatic Hierarchical Time-Series Forecasting Using Gaussian Processes." Engineering Proceedings 5, no. 1 (July 9, 2021): 49. http://dx.doi.org/10.3390/engproc2021005049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Forecasting often involves multiple time-series that are hierarchically organized (e.g., sales by geography). In that case, there is a constraint that the bottom-level forecasts add up to the aggregated ones. Common approaches use traditional forecasting methods to predict all levels in the hierarchy and then reconcile the forecasts to satisfy that constraint. We propose a new algorithm that automatically forecasts multiple hierarchically organized time-series. We introduce a combination of additive Gaussian processes (GPs) with a hierarchical piece-wise linear function to estimate, respectively, the stationary and non-stationary components of the time-series. We define a flexible structure of additive GPs generated by each aggregated group in the hierarchy of the data. This formulation aims to capture the nested information in the hierarchy while avoiding overfitting. We extended the piece-wise linear function to be hierarchical by defining hyperparameters shared across related time-series. From our experiments, our algorithm can estimate hundreds of time-series at once. To work at this scale, the estimation of the posterior distributions of the parameters is performed using mean-field approximation. We validate the proposed method in two different real-world datasets showing its competitiveness when compared to the state-of-the-art approaches. In summary, our method simplifies the process of hierarchical forecasting as no reconciliation is required. It is easily adapted to non-Gaussian likelihoods and multiple or non-integer seasonalities. The fact that it is a Bayesian approach makes modeling uncertainty of the forecasts trivial.
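
The additive-GP building block can be illustrated for a single synthetic series, assuming a sum of a smooth trend kernel, a periodic kernel, and a noise term. The hierarchical sharing of hyperparameters and the mean-field inference described above are not reproduced here.

```python
# Additive GP for one synthetic monthly series: trend + yearly seasonality + noise.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(5)
t = np.arange(120.0)[:, None]                          # 10 years of monthly data
y = 0.05 * t.ravel() + 2.0 * np.sin(2 * np.pi * t.ravel() / 12) + rng.normal(0, 0.3, 120)

kernel = (RBF(length_scale=60.0)                                 # slowly varying trend
          + ExpSineSquared(length_scale=1.0, periodicity=12.0)   # yearly seasonality
          + WhiteKernel(0.1))                                    # observation noise
gp = GaussianProcessRegressor(kernel, normalize_y=True).fit(t, y)

t_future = np.arange(120.0, 144.0)[:, None]            # 2-year-ahead forecast
mean, sd = gp.predict(t_future, return_std=True)
print(mean[:3], sd[:3])
```
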
