Academic literature on the topic "Multi-output gaussian processes"

Create a precise citation in APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Multi-output gaussian processes".

Next to every source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Multi-output gaussian processes"

1. Caro, Victor, Jou-Hui Ho, Scarlet Witting, and Felipe Tobar. "Modeling Neonatal EEG Using Multi-Output Gaussian Processes". IEEE Access 10 (2022): 32912–27. http://dx.doi.org/10.1109/access.2022.3159653.
2. Ingram, Martin, Damjan Vukcevic, and Nick Golding. "Multi‐output Gaussian processes for species distribution modelling". Methods in Ecology and Evolution 11, no. 12 (October 15, 2020): 1587–98. http://dx.doi.org/10.1111/2041-210x.13496.
3. Rodrigues, Filipe, Kristian Henrickson, and Francisco C. Pereira. "Multi-Output Gaussian Processes for Crowdsourced Traffic Data Imputation". IEEE Transactions on Intelligent Transportation Systems 20, no. 2 (February 2019): 594–603. http://dx.doi.org/10.1109/tits.2018.2817879.
4. Vasudevan, Shrihari, Arman Melkumyan, and Steven Scheding. "Efficacy of Data Fusion Using Convolved Multi-Output Gaussian Processes". Journal of Data Science 13, no. 2 (April 8, 2021): 341–68. http://dx.doi.org/10.6339/jds.201504_13(2).0007.
5. Truffinet, Olivier, Karim Ammar, Jean-Philippe Argaud, Nicolas Gérard Castaing, and Bertrand Bouriquet. "Adaptive sampling of homogenized cross-sections with multi-output gaussian processes". EPJ Web of Conferences 302 (2024): 02010. http://dx.doi.org/10.1051/epjconf/202430202010.
Abstract
In another talk submitted to this conference, we presented an efficient new framework based on multi-output Gaussian processes (MOGP) for the interpolation of few-group homogenized cross-sections (HXS) inside deterministic core simulators. We indicated that this methodology enables a principled selection of interpolation points through adaptive sampling. Here we develop this idea by trying simple sampling schemes on our problem. In particular, we compare sample scoring functions with and without integration of leave-one-out errors, obtained with single-output and multi-output Gaussian process models. We test these methods on a realistic PWR assembly with gadolinium-bearing fuel rods, comparing them with non-adaptive supports. Results are promising, as the sampling algorithms significantly reduce the size of the interpolation supports while almost preserving accuracy. However, they exhibit instability and stagnation phenomena, which calls for further investigation of the sampling dynamics and for trying other scoring functions for the selection of samples.
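For readers who want to see the underlying mechanism, here is a minimal single-output sketch (in NumPy) of variance-based adaptive sampling: at each step, the candidate point with the largest GP posterior variance is added to the interpolation support. It is only an illustration of the general idea under invented assumptions; the toy function f, the RBF hyperparameters, and the candidate grid are made up for the example and are not the scoring functions or MOGP models compared in the paper.

```python
import numpy as np

def rbf(a, b, lengthscale=0.5, variance=1.0):
    return variance * np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-4):
    """Posterior mean and variance of a zero-mean GP with a fixed RBF kernel."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = rbf(X_test, X_train)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = rbf(X_test, X_test).diagonal() - np.sum(v ** 2, axis=0)
    return mean, var

f = lambda x: np.sin(4 * x) + 0.2 * np.cos(15 * x)   # stand-in for an expensive lattice calculation
candidates = np.linspace(0.0, 2.0, 400)              # candidate interpolation support
X = np.array([0.0, 1.0, 2.0])                        # small initial design
y = f(X)

for _ in range(10):
    _, var = gp_posterior(X, y, candidates)
    x_next = candidates[np.argmax(var)]              # score each candidate by posterior variance
    X = np.append(X, x_next)                         # add the most uncertain point ...
    y = np.append(y, f(x_next))                      # ... and pay for one new evaluation

_, var = gp_posterior(X, y, candidates)
print("largest remaining predictive std:", float(np.sqrt(var.max())))
```

A leave-one-out-based score, as compared in the paper, would replace the posterior-variance line with a different acquisition criterion while keeping the same greedy loop.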
6. Ramirez, Wilmer Ariza, Juš Kocijan, Zhi Quan Leong, Hung Duc Nguyen, and Shantha Gamini Jayasinghe. "Dynamic System Identification of Underwater Vehicles Using Multi-Output Gaussian Processes". International Journal of Automation and Computing 18, no. 5 (July 13, 2021): 681–93. http://dx.doi.org/10.1007/s11633-021-1308-x.
7. Truffinet, Olivier, Karim Ammar, Jean-Philippe Argaud, Nicolas Gérard Castaing, and Bertrand Bouriquet. "Multi-output gaussian processes for the reconstruction of homogenized cross-sections". EPJ Web of Conferences 302 (2024): 02006. http://dx.doi.org/10.1051/epjconf/202430202006.
Abstract
Deterministic nuclear reactor simulators employing the prevalent two-step scheme often generate a substantial amount of intermediate data at the interface of their two subcodes, which can impede the overall performance of the software. The bulk of this data comprises “few-group homogenized cross-sections”, or HXS, which are stored as tabulated multivariate functions and interpolated inside the core simulator. A number of mathematical tools have been studied for this interpolation purpose over the years, but few meet all the challenging requirements of neutronics computation chains: extreme accuracy, low memory footprint, fast predictions… We present a new framework to tackle this task, based on multi-output Gaussian processes (MOGP). This machine learning model enables us to interpolate HXS with improved accuracy compared to the current multilinear standard, using only a fraction of its training data, meaning that the amount of required precomputation is reduced by a factor of several dozen. It also requires an even smaller fraction of the standard's storage, preserves its reconstruction speed, and unlocks new functionalities such as adaptive sampling and facilitated uncertainty quantification. We demonstrate the efficiency of this approach on a rich test case reproducing the VERA benchmark, proving in particular its scalability to datasets of millions of HXS.
8. Lu, Chi-Ken, and Patrick Shafto. "Conditional Deep Gaussian Processes: Multi-Fidelity Kernel Learning". Entropy 23, no. 11 (November 20, 2021): 1545. http://dx.doi.org/10.3390/e23111545.
Abstract
Deep Gaussian Processes (DGPs) were proposed as an expressive Bayesian model capable of a mathematically grounded estimation of uncertainty. The expressivity of DGPs results not only from their compositional character but also from the distribution propagation within the hierarchy. Recently, it was pointed out that the hierarchical structure of DGPs is well suited to modeling multi-fidelity regression, in which one is provided sparse high-precision observations and plenty of low-fidelity observations. We propose the conditional DGP model, in which the latent GPs are directly supported by the fixed lower-fidelity data. The moment matching method is then applied to approximate the marginal prior of the conditional DGP with a GP. The obtained effective kernels are implicit functions of the lower-fidelity data, manifesting the expressivity contributed by distribution propagation within the hierarchy. The hyperparameters are learned by optimizing the approximate marginal likelihood. Experiments with synthetic and high-dimensional data show performance comparable to other multi-fidelity regression methods, variational inference, and multi-output GPs. We conclude that, with the low-fidelity data and the hierarchical DGP structure, the effective kernel encodes the inductive bias for the true function while allowing compositional freedom.
9. Torres-Valencia, Cristian, Álvaro Orozco, David Cárdenas-Peña, Andrés Álvarez-Meza, and Mauricio Álvarez. "A Discriminative Multi-Output Gaussian Processes Scheme for Brain Electrical Activity Analysis". Applied Sciences 10, no. 19 (September 27, 2020): 6765. http://dx.doi.org/10.3390/app10196765.
Abstract
The study of brain electrical activity (BEA) from different cognitive conditions has attracted a lot of interest in the last decade due to the high number of possible applications that could be generated from it. In this work, a discriminative framework for BEA via electroencephalography (EEG) is proposed based on multi-output Gaussian Processes (MOGPs) with a specialized spectral kernel. First, a signal segmentation stage is executed, and the channels from the EEG are used as the model outputs. Then, a novel covariance function within the MOGP known as the multispectral mixture kernel (MOSM) allows us to find and quantify the relationships between different channels. Several MOGPs are trained from different conditions grouped in bi-class problems, and the discrimination is performed based on the likelihood score of the test signals against all the models. Finally, the mean likelihood is computed to predict the correspondence of new inputs with each class’s existing models. Results show that this framework allows us to model the EEG signals adequately using generative models and allows analyzing the relationships between channels of the EEG for a particular condition. At the same time, the set of trained MOGPs is well suited to discriminate new input data.
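As a rough illustration of the likelihood-scoring step described above, the sketch below trains nothing at all: each class is represented by a fixed RBF lengthscale (standing in for a fully trained per-class (MO)GP model), and a test segment is assigned to the class whose model gives the higher log marginal likelihood. The synthetic signal, the two lengthscales, and the noise level are assumptions for the example, not the MOSM-based models of the paper.

```python
import numpy as np

def rbf(a, b, lengthscale, variance=1.0):
    return variance * np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

def log_marginal_likelihood(t, y, lengthscale, noise=0.05):
    """log p(y | t) for a zero-mean GP with an RBF kernel of the given lengthscale."""
    K = rbf(t, t, lengthscale) + noise * np.eye(len(t))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(t) * np.log(2 * np.pi))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)

# Two signal "classes": slow vs. fast oscillations. Each class model is reduced here
# to a single fixed lengthscale, standing in for trained per-class hyperparameters.
class_models = {"slow": 0.20, "fast": 0.03}

test_segment = np.sin(2 * np.pi * 12 * t) + 0.05 * rng.standard_normal(t.size)  # a "fast" signal

scores = {name: log_marginal_likelihood(t, test_segment, ls) for name, ls in class_models.items()}
print(scores, "->", max(scores, key=scores.get))
```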
10. Bae, Joonho, and Jinkyoo Park. "Count-based change point detection via multi-output log-Gaussian Cox processes". IISE Transactions 52, no. 9 (November 11, 2019): 998–1013. http://dx.doi.org/10.1080/24725854.2019.1676937.

Theses on the topic "Multi-output gaussian processes"

1. Parra Vásquez, Gabriel Enrique. "Spectral mixture kernels for Multi-Output Gaussian processes". Thesis, Universidad de Chile, 2017. http://repositorio.uchile.cl/handle/2250/150553.
Abstract
Master of Engineering Sciences, specialization in Applied Mathematics; Mathematical Civil Engineer.
Multi-Output Gaussian Processes (MOGPs) are the multivariate extension of Gaussian processes (GPs), a Bayesian nonparametric method for univariate regression. MOGPs address the multi-channel regression problem by modeling the correlation in time and/or space (as scalar GPs do), but also across channels, thus revealing statistical dependencies among different sources of data. This is crucial in a number of real-world applications such as fault detection, data imputation and financial time-series analysis. Analogously to the univariate case, MOGPs are entirely determined by a multivariate covariance function, which in this case is matrix valued. The design of this matrix-valued covariance function is challenging, since we have to deal with the trade-off between (i) choosing a broad class of cross-covariances and auto-covariances, while at the same time (ii) ensuring positive definiteness of the symmetric matrix containing these scalar-valued covariance functions. In the stationary univariate case, these difficulties can be bypassed by virtue of Bochner's theorem, that is, by building the covariance function in the spectral (Fourier) domain and then transforming it to the time and/or space domain, thus yielding the (single-output) Spectral Mixture kernel. A classical approach to defining multivariate covariance functions for MOGPs is through linear combinations of independent (latent) GPs; this is the case of the Linear Model of Coregionalization (LMC) and the Convolution Model. In these cases, the resulting multivariate covariance function is a function of both the latent-GP covariances and the linear operator considered, which usually results in symmetric cross-covariances that do not admit lags across channels. Due to their simplicity, these approaches fail to provide interpretability of the dependencies learnt and force the auto-covariances to have similar structure. The main purpose of this work is to extend the spectral mixture concept to MOGPs: we rely on Cramér's theorem, the multivariate version of Bochner's theorem, to propose an expressive family of complex-valued square-exponential cross-spectral densities, which, through the Fourier transform, yields the Multi-Output Spectral Mixture kernel (MOSM). The proposed MOSM model provides a clear interpretation of all the parameters in spectral terms. Besides the theoretical presentation and interpretation of the proposed multi-output covariance kernel based on square-exponential spectral densities, we inquire into the plausibility of complex-valued t-Student cross-spectral densities. We validate our contribution experimentally through an illustrative example using a tri-variate synthetic signal, and then compare it against all the aforementioned methods on two real-world datasets.
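To make the coregionalization baseline discussed in the abstract concrete, here is a minimal NumPy sketch of the Intrinsic Coregionalization Model (a rank-one case of the LMC): the multi-output covariance is B[i, j] · k(x, x′) for a shared RBF kernel k and a positive semi-definite matrix B. The two synthetic channels, the fixed values of w, kappa, and the noise level are assumptions made for the example; this is neither the MOSM kernel nor code from the thesis.

```python
import numpy as np

def rbf(x1, x2, lengthscale=0.3, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def icm_kernel(x1, c1, x2, c2, B, lengthscale=0.3):
    """ICM covariance: K((x, i), (x', j)) = B[i, j] * k_RBF(x, x')."""
    return B[np.ix_(c1, c2)] * rbf(x1, x2, lengthscale)

rng = np.random.default_rng(0)

# Two correlated synthetic channels, observed at different input locations.
x1 = np.sort(rng.uniform(0, 1, 30))
y1 = np.sin(6 * x1) + 0.1 * rng.standard_normal(30)
x2 = np.sort(rng.uniform(0, 1, 10))
y2 = 0.8 * np.sin(6 * x2 + 0.3) + 0.1 * rng.standard_normal(10)

X = np.concatenate([x1, x2])                                # stacked inputs
C = np.concatenate([np.zeros(30, int), np.ones(10, int)])   # channel index of each point
Y = np.concatenate([y1, y2])

# Coregionalization matrix B = w w^T + diag(kappa): positive semi-definite by construction.
w = np.array([1.0, 0.8])
kappa = np.array([0.1, 0.1])
B = np.outer(w, w) + np.diag(kappa)

noise = 0.01
K = icm_kernel(X, C, X, C, B) + noise * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))

# Predict the sparsely observed channel 2 on a dense grid; the cross-covariance
# lets it borrow statistical strength from the densely observed channel 1.
xs = np.linspace(0, 1, 200)
cs = np.ones(200, int)
Ks = icm_kernel(xs, cs, X, C, B)
mean = Ks @ alpha
v = np.linalg.solve(L, Ks.T)
var = np.diag(icm_kernel(xs, cs, xs, cs, B)) - np.sum(v ** 2, axis=0)

print("posterior mean/variance at the first test point:", mean[0], var[0])
```

Writing B = w wᵀ + diag(κ) keeps the joint covariance positive semi-definite, which is precisely the constraint the abstract highlights; the price, as the abstract also notes, is symmetric cross-covariances with no cross-channel lags.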
2. Malik, Obaid. "Probabilistic leak detection and quantification using multi-output Gaussian processes". Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/409717/.
Abstract
A water distribution system (WDS) is often divided into smaller isolated and independent zones called district metering areas (DMA). A DMA can have anywhere from a few hundred to a few thousand properties. Normally only three locations within a district metering area are actively monitored for pressure or flow readings. These are the supply point pressure and flow and the critical point pressure, which is the point of lowest pressure in the DMA. As leakage rates are typically directly proportional to average pressures in the DMA, keeping the network pressure as low as possible while maintaining the desired serviceability is an effective and widely used method for leak reduction. With advances in technology, this network pressure reduction is now done in real time, where the network pressure is increased or decreased based on demand. However, such real-time optimisation changes the DMA dynamics, making it different from traditional unoptimised DMAs. We consider the problem of detecting and quantifying leaks in a pressure-optimised DMA, using only these three DMA-level hydraulic measurements. The DMA-level measurements represent the current aggregate water demand/consumption within the DMA. Detecting leaks at this point is challenging, particularly small leaks, as they do not produce a significant increase in the aggregated DMA-level measurements. Furthermore, the DMA-level data exhibits input signal dependence whereby both noise and leaks depend on the flow and pressure being measured, making the leak detection task more difficult. To address this, we first propose a Gaussian process (GP) based approach that uses only the DMA-level flow to detect leaks (NSGP). We devise an additive diagonal noise covariance for the GP that is able to handle the input-dependent noise observed in this setting. A parameterised mean step-change function is used to detect and approximate leaks. As accurate leak data is often not available due to poor record keeping, we develop a detailed simulated model of a pressure-optimised DMA and use it for analysing the proposed leak detection methods. We show that active pressure optimisation changes the dynamics of a DMA. In light of this change in DMA dynamics, we propose a domain-specific, data-driven, multi-output Gaussian process model to detect and quantify leaks in pressure-optimised DMAs (SMOGP). The novelty of the model is, firstly, its ability to use all available information from a DMA to detect leaks and, secondly, its ability to model the pressure-dependent leak process mathematically within the GP framework. We compare the performance of the proposed methods with the current state-of-the-art leak detection method. We show that our proposed method outperforms other approaches considerably, both in terms of leak detection accuracy and leak magnitude estimation.
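A heavily simplified sketch of the step-change idea (a parameterised step in the mean, fitted under a GP covariance) might look as follows; the synthetic flow record, kernel hyperparameters, and candidate grid are invented for the illustration, and this is not the NSGP or SMOGP model of the thesis, which additionally handles input-dependent noise and multiple outputs.

```python
import numpy as np

def rbf(a, b, lengthscale=5.0, variance=1.0):
    return variance * np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

rng = np.random.default_rng(2)
t = np.arange(0.0, 100.0)                        # hourly sample index (synthetic)
flow = 10 + np.sin(2 * np.pi * t / 24)           # smooth daily demand pattern
flow[60:] += 1.5                                 # a "leak": persistent step increase after t = 60
flow += 0.2 * rng.standard_normal(len(t))

K = rbf(t, t) + 0.05 * np.eye(len(t))            # GP covariance for everything except the step
K_inv = np.linalg.inv(K)
_, log_det_K = np.linalg.slogdet(K)

def step_score(tau):
    """Log-likelihood (up to constants) of a mean 'offset + step at tau' under covariance K."""
    H = np.column_stack([np.ones_like(t), (t >= tau).astype(float)])
    beta = np.linalg.solve(H.T @ K_inv @ H, H.T @ K_inv @ flow)   # GLS fit of [offset, step size]
    r = flow - H @ beta
    return -0.5 * r @ K_inv @ r - 0.5 * log_det_K, beta[1]

candidates = t[5:-5]                             # avoid the very edges of the record
scores, step_sizes = zip(*(step_score(tau) for tau in candidates))
best = int(np.argmax(scores))
print("estimated change time:", candidates[best], "estimated step size:", round(step_sizes[best], 2))
```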
3. Truffinet, Olivier. "Machine learning methods for cross-section reconstruction in full-core deterministic neutronics codes". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP128.
Abstract
Today, most deterministic neutronics simulators for nuclear reactors follow a two-step multi-scale scheme. In a so-called “lattice” calculation, the physics is finely resolved at the level of the elementary reactor patterns (fuel assemblies); these tiles are then brought into contact in a so-called “core” calculation, where the overall configuration is calculated more coarsely. Communication between these two codes is realized by the deferred transfer of physical data, the most important of which are called “homogenized cross sections” (hereafter referred to as HXS) and can be represented by multivariate functions. Their deferred use and dependence on variable physical conditions call for a tabulation-interpolation scheme: HXS are precalculated in a wide range of situations, stored, then approximated in the core code from the stored values to correspond to a specific reactor state. In a context of increasing simulation finesse, the mathematical tools currently used for this approximation stage are now showing their limitations. The aim of this thesis is to find replacements for them, capable of making HXS interpolation more accurate, more economical in terms of data and storage space, and just as fast. The whole arsenal of machine learning, functional approximation, etc., can be put to use to tackle this problem. In order to find a suitable approximation model, we began by analyzing the datasets generated for this thesis: correlations between HXS's, shapes of their dependencies, linear dimension, etc. This last point proved particularly fruitful: HXS sets turn out to be of very low effective dimension, which greatly simplifies their approximation. In particular, we leveraged this fact to develop an innovative methodology based on the Empirical Interpolation Method (EIM), capable of replacing the majority of lattice code calls by extrapolations from a small volume of data, and reducing HXS storage by one or two orders of magnitude, all with a negligible loss of accuracy. To retain the advantages of such a methodology while addressing the full scope of the thesis problem, we then turned to a powerful machine learning model matching the same low-dimensional structure: multi-output Gaussian processes (MOGPs). Proceeding step by step from the simplest Gaussian models (single-output GPs) to more complex ones, we showed that these tools are fully adapted to the problem under consideration and offer major gains over current HXS interpolation routines. Numerous modeling choices were discussed and compared; the models were adapted to very large datasets, requiring some optimization of their implementation; and the new functionalities they offer were tested, notably uncertainty prediction and active learning. Finally, theoretical work was carried out on the studied family of models, the Linear Model of Co-regionalisation (LMC), in order to shed light on certain grey areas in their still young theory. This led to the definition of a new model, the PLMC, which was implemented, optimized and tested on numerous real and synthetic data sets. Simpler than its competitors, this model has also proved to be just as accurate and fast, if not more so, and holds a number of exclusive functionalities that were put to good use during the thesis. This work opens up many new prospects for neutronics simulation. Equipped with powerful and flexible learning models, it is possible to envisage significant evolutions of deterministic codes: systematic propagation of uncertainties, correction of various approximations, taking more variables into account…
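The low-effective-dimension observation can be illustrated with a surrogate that is much cruder than the EIM/MOGP machinery of the thesis: compress a table of correlated outputs with a truncated SVD and put an independent scalar GP on each retained coefficient. Everything below (the synthetic "cross-section table", the rank, the kernel settings) is an assumption made for the example.

```python
import numpy as np

def rbf(a, b, lengthscale=0.4, variance=1.0):
    return variance * np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-4):
    """Posterior mean of a zero-mean GP with a fixed RBF kernel."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return rbf(X_test, X_train) @ alpha

rng = np.random.default_rng(3)

# Synthetic stand-in for a cross-section table: 200 correlated outputs that all
# depend smoothly on a single state parameter (think burnup), via 3 hidden modes.
x_train = np.linspace(0.0, 1.0, 25)
x_test = np.linspace(0.0, 1.0, 101)
modes = lambda x: np.stack([np.sin(2 * np.pi * x), np.cos(2 * np.pi * x), x ** 2])
mixing = rng.standard_normal((200, 3))
Y_train = mixing @ modes(x_train) + 1e-3 * rng.standard_normal((200, x_train.size))

# Truncated SVD: the table has low effective dimension, so a few directions suffice.
U, s, Vt = np.linalg.svd(Y_train, full_matrices=False)
r = 3
coeff_train = np.diag(s[:r]) @ Vt[:r]            # r latent coefficient curves, shape (r, 25)

# One cheap scalar GP per retained coefficient, then map back to all 200 outputs.
coeff_test = np.stack([gp_predict(x_train, c, x_test) for c in coeff_train])
Y_pred = U[:, :r] @ coeff_test                   # shape (200 outputs, 101 test points)

truth = mixing @ modes(x_test)
print("rms reconstruction error:", float(np.sqrt(np.mean((Y_pred - truth) ** 2))))
```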
4. Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms". Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.
Abstract
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.

Book chapters on the topic "Multi-output gaussian processes"

1. Cardona, Hernán Darío Vargas, Mauricio A. Álvarez, and Álvaro A. Orozco. "Convolved Multi-output Gaussian Processes for Semi-Supervised Learning". In Image Analysis and Processing — ICIAP 2015, 109–18. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23231-7_10.
2. Lui, Sin Ting, Thierry Peynot, Robert Fitch, and Salah Sukkarieh. "Enhanced Stochastic Mobility Prediction on Unstructured Terrain Using Multi-output Gaussian Processes". In Intelligent Autonomous Systems 13, 173–90. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-08338-4_14.
3. Cuellar-Fierro, Jhon F., Hernán Darío Vargas-Cardona, Mauricio A. Álvarez, Andrés M. Álvarez, and Álvaro A. Orozco. "Non-stationary Multi-output Gaussian Processes for Enhancing Resolution over Diffusion Tensor Fields". In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 168–76. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75193-1_21.
4. J., Jayapradha, Lakshmi Vadhanie, Yukta Kulkarni, T. Senthil Kumar, and Uma Devi M. "Enhancing Algorithmic Resilience Against Data Poisoning Using CNN". In Risk Assessment and Countermeasures for Cybersecurity, 131–57. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-2691-6.ch008.
Abstract
The work aims to improve model resilience and accuracy in machine learning (ML) by addressing data poisoning attacks. Data poisoning attacks are a type of adversarial attack where malicious data is injected into the training data set to manipulate the machine learning model's output, compromising model performance and security. To tackle this, a multi-faceted approach is proposed, including data assessment and cleaning, detecting attacks using outlier and anomaly detection techniques. The authors also train robust models using techniques such as adversarial training, regularization, and data diversification. Additionally, they use ensemble methods that combine the strengths of multiple models, as well as Gaussian processes and Bayesian optimization to improve resilience to attacks. The work aims to contribute to machine learning security by providing an integrated solution for addressing data poisoning attacks and advancing the understanding of adversarial attacks and defenses in the machine learning community.
5. Simeone, Davide, Marta Lenatti, Constantino Lagoa, Karim Keshavjee, Aziz Guergachi, Fabrizio Dabbene, and Alessia Paglialonga. "Multi-Input Multi-Output Dynamic Modelling of Type 2 Diabetes Progression". In Telehealth Ecosystems in Practice. IOS Press, 2023. http://dx.doi.org/10.3233/shti230784.
Abstract
Type 2 Diabetes Mellitus (T2D) is a chronic health condition that affects millions of people globally. Early identification of risk can support preventive intervention and therefore slow down disease progression. Risk characterization is also necessary to monitor the mechanisms behind the pathology through the analysis of the interrelationships between the predictors and their time course. In this work, a multi-input multi-output Gaussian Process model is proposed to describe the evolution of different biomarkers in patients who will/will not develop T2D considering the interdependencies between outputs. The preliminary results obtained suggest that the trends in biomarkers captured by the model are coherent with the literature and with real-world data, demonstrating the value of multi-input multi-output approaches. In future developments, the proposed method could be applied to assess how the biomarkers evolve and interact with each other in groups of patients having in common one or more risk factors.

Conference papers on the topic "Multi-output gaussian processes"

1. Lim, Jaehyun, Jehyun Park, Sungjae Nah, and Jongeun Choi. "Multi-output Infinite Horizon Gaussian Processes". In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. http://dx.doi.org/10.1109/icra48506.2021.9561031.
2. Dario Vargas Cardona, Hernan, Alvaro A. Orozco, and Mauricio A. Alvarez. "Multi-output Gaussian processes for enhancing resolution of diffusion tensor fields". In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2016. http://dx.doi.org/10.1109/embc.2016.7590898.
3. Mateo-Sanchis, Anna, Jordi Munoz-Mari, Manuel Campos-Taberner, Javier Garcia-Haro, and Gustau Camps-Valls. "Gap Filling of Biophysical Parameter Time Series with Multi-Output Gaussian Processes". In IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2018. http://dx.doi.org/10.1109/igarss.2018.8519254.
4. Ganeva, Dessislava, Milen Chanev, Darina Valcheva, Lachezar Filchev, and Georgi Jelev. "MODELLING BARLEY BIOMASS FROM PHENOCAM TIME SERIES WITH MULTI-OUTPUT GAUSSIAN PROCESSES". In 22nd SGEM International Multidisciplinary Scientific GeoConference 2022. STEF92 Technology, 2022. http://dx.doi.org/10.5593/sgem2022/2.1/s08.15.
Abstract
Biomass is monitored in many agricultural studies because it is closely related to crop growth. The technique of digital repeat photography, in which Phenocams (RGB or near-infrared-enabled cameras) continuously capture images of a given area, has been used for more than a decade, mainly to estimate phenology. Studies have found a relationship between Phenocam data and above-ground dry biomass. In this context, we investigate the modelling of barley fresh above- and below-ground biomass with the green chromatic coordinate (Gcc) colour index, extracted from Phenocam data, and multi-output Gaussian processes (MOGP). We take advantage of the very high temporal resolution data available from the Phenocam to predict the biomass. The MOGP models take into account the relationships among output variables, learning a cross-domain kernel function able to transfer information between time series. Our results suggest that the MOGP model is able to successfully predict the variables simultaneously in regions where no training samples are available, by intrinsically exploiting the relationships between the considered output variables.
5. Chiplunkar, Ankit, Emmanuel Rachelson, Michele Colombo, and Joseph Morlier. "Adding Flight Mechanics to Flight Loads Surrogate Model using Multi-Output Gaussian Processes". In 17th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2016. http://dx.doi.org/10.2514/6.2016-4000.
6. Ghasempour, Alireza, and Manel Martínez-Ramón. "Short-Term Electric Load Prediction in Smart Grid using Multi-Output Gaussian Processes Regression". In 2023 IEEE Kansas Power and Energy Conference (KPEC). IEEE, 2023. http://dx.doi.org/10.1109/kpec58008.2023.10215490.
7. Osborne, M. A., S. J. Roberts, A. Rogers, S. D. Ramchurn, and N. R. Jennings. "Towards Real-Time Information Processing of Sensor Network Data Using Computationally Efficient Multi-output Gaussian Processes". In 2008 7th International Conference on Information Processing in Sensor Networks (IPSN). IEEE, 2008. http://dx.doi.org/10.1109/ipsn.2008.25.
8. Aali, Mohammad, and Jun Liu. "Learning Piecewise Residuals of Control Barrier Functions for Safety of Switching Systems using Multi-Output Gaussian Processes". In 2024 European Control Conference (ECC). IEEE, 2024. http://dx.doi.org/10.23919/ecc64448.2024.10591208.
9. Geroulas, Vasileios, Zissimos P. Mourelatos, Vasiliki Tsianika, and Igor Baseski. "Reliability of Nonlinear Vibratory Systems Under Non-Gaussian Loads". In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67313.
Abstract
A general methodology is presented for time-dependent reliability and random vibrations of nonlinear vibratory systems with random parameters excited by non-Gaussian loads. The approach is based on Polynomial Chaos Expansion (PCE), Karhunen-Loeve (KL) expansion and Quasi Monte Carlo (QMC). The latter is used to estimate multi-dimensional integrals efficiently. The input random processes are first characterized using their first four moments (mean, standard deviation, skewness and kurtosis coefficients) and a correlation structure in order to generate sample realizations (trajectories). Characterization means the development of a stochastic metamodel. The input random variables and processes are expressed in terms of independent standard normal variables in N dimensions. The N-dimensional input space is space filled with M points. The system differential equations of motion are time integrated for each of the M points and QMC estimates the four moments and correlation structure of the output efficiently. The proposed PCE-KL-QMC approach is then used to characterize the output process. Finally, classical MC simulation estimates the time-dependent probability of failure using the developed stochastic metamodel of the output process. The proposed methodology is demonstrated with a Duffing oscillator example under non-Gaussian load.
10. Wang, Liwei, Suraj Yerramilli, Akshay Iyer, Daniel Apley, Ping Zhu, and Wei Chen. "Data-Driven Design via Scalable Gaussian Processes for Multi-Response Big Data With Qualitative Factors". In ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/detc2021-71570.
Abstract
Scientific and engineering problems often require an inexpensive surrogate model to aid understanding and the search for promising designs. While Gaussian processes (GP) stand out as easy-to-use and interpretable learners in surrogate modeling, they have difficulties in accommodating big datasets, qualitative inputs, and multi-type responses obtained from different simulators, which has become a common challenge for a growing number of data-driven design applications. In this paper, we propose a GP model that utilizes latent variables and functions obtained through variational inference to address the aforementioned challenges simultaneously. The method is built upon the latent variable Gaussian process (LVGP) model where qualitative factors are mapped into a continuous latent space to enable GP modeling of mixed-variable datasets. By extending variational inference to LVGP models, the large training dataset is replaced by a small set of inducing points to address the scalability issue. Output response vectors are represented by a linear combination of independent latent functions, forming a flexible kernel structure to handle multi-type responses. Comparative studies demonstrate that the proposed method scales well for large datasets with over 10⁴ data points, while outperforming state-of-the-art machine learning methods without requiring much hyperparameter tuning. In addition, an interpretable latent space is obtained to draw insights into the effect of qualitative factors, such as those associated with “building blocks” of architectures and element choices in metamaterial and materials design. Our approach is demonstrated for machine learning of ternary oxide materials and topology optimization of a multiscale compliant mechanism with aperiodic microstructures and multiple materials.

Reports on the topic "Multi-output gaussian processes"

1. Bilionis, Ilias, and Nicholas Zabaras. Multi-output Local Gaussian Process Regression: Applications to Uncertainty Quantification. Fort Belvoir, VA: Defense Technical Information Center, December 2011. http://dx.doi.org/10.21236/ada554929.