Table of contents
Ready-made bibliography on the topic "Machine Learning Bayésien"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles.
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Machine Learning Bayésien".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever those details are available in the source metadata.
Journal articles on the topic "Machine Learning Bayésien"
Rajaoui, Nordine. "BAYÉSIEN VERSUS CMA-ES : OPTIMISATION DES HYPERPARAMÈTRES ML". Management & Data Science, 2023. http://dx.doi.org/10.36863/mds.a.24309.
Doctoral dissertations on the topic "Machine Learning Bayésien"
Zecchin, Matteo. "Robust Machine Learning Approaches to Wireless Communication Networks". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS397.pdf.
Artificial intelligence is widely viewed as a key enabler of sixth-generation wireless systems. In this thesis we target fundamental problems arising from the interaction between these two technologies, with the end goal of paving the way towards the adoption of reliable AI in future wireless networks. We develop distributed training algorithms that allow collaborative learning at the edge of wireless networks despite communication bottlenecks, unreliable workers, and data heterogeneity. We then take a critical look at the application of the standard frequentist learning paradigm to wireless communication problems and propose an extension of generalized Bayesian learning that concurrently counteracts three prominent challenges arising in this application domain: data scarcity, the presence of outliers, and model misspecification.
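The distributed-training setting described in this abstract can be illustrated with plain federated averaging: each edge worker fits a local model and a server averages the parameters. This is only a generic sketch under simplified assumptions (least-squares workers, shared true model); the thesis's robust variants for outliers and heterogeneity are not reproduced here, and all function names are hypothetical.

```python
import numpy as np

def local_least_squares(X, y):
    """Each worker fits its own model on local data only."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def fedavg(datasets):
    """Server step: average local parameters, weighted by dataset size."""
    local_models = [local_least_squares(X, y) for X, y in datasets]
    sizes = np.array([len(y) for _, y in datasets], dtype=float)
    return np.average(local_models, axis=0, weights=sizes)

rng = np.random.default_rng(3)
w_true = np.array([2.0, -1.0])
datasets = []
for _ in range(5):  # five edge workers observing the same true model
    X = rng.normal(size=(200, 2))
    y = X @ w_true + 0.1 * rng.normal(size=200)
    datasets.append((X, y))

print(fedavg(datasets))  # close to [2.0, -1.0]
```

In practice a single averaging round like this is only the starting point; robustness to stragglers and corrupted workers requires the kinds of modifications the thesis studies.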
Huix, Tom. "Variational Inference : theory and large scale applications". Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX071.
This thesis explores Variational Inference methods for high-dimensional Bayesian learning. In Machine Learning, the Bayesian approach makes it possible to deal with epistemic uncertainty and provides better uncertainty quantification, which is necessary in many machine learning applications. However, Bayesian inference is often not feasible because the posterior distribution of the model parameters is generally intractable. Variational Inference (VI) overcomes this problem by approximating the posterior with a simpler distribution, called the variational distribution. In the first part of this thesis, we work on the theoretical guarantees of Variational Inference. We first study VI when the variational distribution is Gaussian in the overparameterized regime, i.e., when the models are high-dimensional. We then explore Gaussian mixture variational distributions, a more expressive family, and study both the optimization error and the approximation error of this method. In the second part of the thesis, we study theoretical guarantees for contextual bandit problems using a Bayesian approach called Thompson Sampling. We first explore the use of Variational Inference within the Thompson Sampling algorithm, and notably show that in the linear setting this approach achieves the same theoretical guarantees as if we had access to the true posterior distribution. Finally, we consider a variant of Thompson Sampling called Feel-Good Thompson Sampling (FG-TS), which provides better theoretical guarantees than the classical algorithm. We then study the use of a Markov chain Monte Carlo method to approximate the posterior distribution; specifically, we incorporate into FG-TS a Langevin Monte Carlo algorithm and a Metropolized Langevin Monte Carlo algorithm, and obtain the same theoretical guarantees as for FG-TS when the posterior distribution is known.
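The Gaussian variational family mentioned in this abstract can be illustrated on a toy conjugate model, where maximizing the ELBO by gradient ascent recovers the exact posterior. This is a minimal sketch, not code from the thesis; the model, step sizes, and helper names are all illustrative assumptions.

```python
import numpy as np

# Toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 1).
# The exact posterior is N(sum(y)/(n+1), 1/(n+1)), so a Gaussian
# variational family q = N(mu, s^2) can recover it exactly.

def elbo_grad(mu, log_s2, y):
    """Gradient of the closed-form ELBO w.r.t. (mu, log s^2)."""
    n, s2 = len(y), np.exp(log_s2)
    d_mu = np.sum(y - mu) - mu            # likelihood term + prior term
    d_s2 = -0.5 * (n + 1) + 0.5 / s2      # variance terms + entropy
    return d_mu, d_s2 * s2                # chain rule for log s^2

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)

mu, log_s2 = 0.0, 0.0
for _ in range(2000):                     # plain gradient ascent on the ELBO
    g_mu, g_ls2 = elbo_grad(mu, log_s2, y)
    mu += 0.01 * g_mu
    log_s2 += 0.01 * g_ls2

print(mu, np.exp(log_s2))   # approaches sum(y)/(n+1) and 1/(n+1)
```

In high-dimensional or mixture settings the ELBO is no longer available in closed form, which is exactly where the optimization and approximation errors studied in the thesis come into play.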
Jarraya, Siala Aida. "Nouvelles paramétrisations de réseaux bayésiens et leur estimation implicite : famille exponentielle naturelle et mélange infini de Gaussiennes". Phd thesis, Nantes, 2013. https://archive.bu.univ-nantes.fr/pollux/show/show?id=aef89743-c009-457d-8c27-a888655a4e58.
Learning a Bayesian network consists in estimating the graph (structure) and the parameters of the conditional probability distributions associated with this graph. Bayesian network learning algorithms rely on the classical Bayesian estimation approach, whose prior parameters are often determined by an expert or set uniformly. The core of this work concerns the application of several advances in the field of statistics, such as implicit estimation, natural exponential families, and infinite Gaussian mixtures, in order to (1) provide new parametric forms for Bayesian networks, (2) estimate the parameters of such models, and (3) learn their structure.
Jarraya, Siala Aida. "Nouvelles paramétrisations de réseaux Bayésiens et leur estimation implicite - Famille exponentielle naturelle et mélange infini de Gaussiennes". Phd thesis, Université de Nantes, 2013. http://tel.archives-ouvertes.fr/tel-00932447.
Synnaeve, Gabriel. "Programmation et apprentissage bayésien pour les jeux vidéo multi-joueurs, application à l'intelligence artificielle de jeux de stratégies temps-réel". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00780635.
Pełny tekst źródłaGrappin, Edwin. "Model Averaging in Large Scale Learning". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLG001/document.
This thesis explores properties of estimation procedures related to aggregation in the problem of high-dimensional regression in a sparse setting. The exponentially weighted aggregate (EWA) is well studied in the literature: it benefits from strong results in fixed and random designs with a PAC-Bayesian approach. However, little is known about the properties of the EWA with a Laplace prior. Chapter 2 analyses the statistical behaviour of the prediction loss of the EWA with Laplace prior in the fixed design setting. Sharp oracle inequalities which generalize the properties of the Lasso to a larger family of estimators are established; these results also bridge the gap from the Lasso to the Bayesian Lasso. Chapter 3 introduces an adjusted Langevin Monte Carlo sampling method that approximates the EWA with Laplace prior in an explicit finite number of iterations for any targeted accuracy. Chapter 4 explores the statistical behaviour of adjusted versions of the Lasso for transductive and semi-supervised learning tasks in the random design setting.
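The Langevin Monte Carlo idea referred to in this abstract can be sketched with the unadjusted Langevin algorithm on a smooth target. Note the simplification: a standard Gaussian target is used here so the gradient is trivial, whereas the thesis handles the non-smooth Laplace prior, which this sketch does not attempt.

```python
import numpy as np

# Unadjusted Langevin algorithm (ULA):
#   theta_{k+1} = theta_k + h * grad log pi(theta_k) + sqrt(2h) * xi_k,
# with xi_k ~ N(0, I). For a standard Gaussian target, grad log pi(x) = -x.

def langevin_samples(grad_log_pi, dim, n_steps, h, rng):
    x = np.zeros(dim)
    out = []
    for _ in range(n_steps):
        x = x + h * grad_log_pi(x) + np.sqrt(2 * h) * rng.normal(size=dim)
        out.append(x.copy())
    return np.array(out)

rng = np.random.default_rng(1)
chain = langevin_samples(lambda x: -x, dim=2, n_steps=20000, h=0.05, rng=rng)
burned = chain[2000:]                       # discard burn-in
print(burned.mean(axis=0), burned.var(axis=0))  # near [0, 0] and near [1, 1]
```

ULA has a discretization bias that depends on the step size h; the "adjusted" methods discussed above add an accept/reject correction to remove it.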
Grappin, Edwin. "Model Averaging in Large Scale Learning". Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLG001.
Araya-López, Mauricio. "Des algorithmes presque optimaux pour les problèmes de décision séquentielle à des fins de collecte d'information". Electronic Thesis or Diss., Université de Lorraine, 2013. http://www.theses.fr/2013LORR0002.
The purpose of this dissertation is to study sequential decision problems where acquiring information is an end in itself. More precisely, it first covers the question of how to modify the POMDP formalism to model information-gathering problems and which algorithms to use for solving them. This idea is then extended to reinforcement learning problems where the objective is to actively learn the model of the system. This dissertation also proposes a novel Bayesian reinforcement learning algorithm that uses optimistic local transitions to efficiently gather information while optimizing the expected return. Through bibliographic discussions, theoretical results, and empirical studies, it is shown that these information-gathering problems are optimally solvable in theory, that the proposed methods are near-optimal solutions, and that these methods offer comparable or better results than reference approaches. Beyond these specific results, this dissertation paves the way (1) for understanding the relationship between information-gathering and optimal policies in sequential decision processes, and (2) for extending the large body of work on system state control to information-gathering problems.
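The trade-off between gathering information and optimizing expected return that this abstract describes is commonly illustrated with posterior sampling on a Bernoulli bandit. This is a generic Beta-Bernoulli Thompson Sampling sketch, not the dissertation's algorithm; the arm means and round count are arbitrary.

```python
import numpy as np

# Thompson Sampling on a Bernoulli bandit: keep a Beta posterior per arm,
# draw one plausible mean from each posterior, and pull the argmax.
# The randomness of the posterior draw is what drives information-gathering.

def thompson_bernoulli(true_means, n_rounds, rng):
    k = len(true_means)
    wins = np.ones(k)     # Beta(1, 1) uniform priors
    losses = np.ones(k)
    pulls = np.zeros(k, dtype=int)
    for _ in range(n_rounds):
        theta = rng.beta(wins, losses)        # one posterior draw per arm
        arm = int(np.argmax(theta))
        reward = rng.random() < true_means[arm]
        wins[arm] += reward
        losses[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

rng = np.random.default_rng(2)
pulls = thompson_bernoulli([0.2, 0.5, 0.8], n_rounds=3000, rng=rng)
print(pulls)   # pulls concentrate on the best (0.8) arm over time
```

As the posteriors sharpen, exploratory pulls of inferior arms become rare, which is the Bayesian analogue of the exploration/exploitation balance studied in the dissertation's POMDP setting.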
Araya-López, Mauricio. "Des algorithmes presque optimaux pour les problèmes de décision séquentielle à des fins de collecte d'information". Phd thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00943513.
Pełny tekst źródłaRahier, Thibaud. "Réseaux Bayésiens pour fusion de données statiques et temporelles". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM083/document.
Prediction and inference on temporal data are very frequently performed using timeseries data alone. We believe that these tasks could benefit from leveraging the contextual metadata associated to timeseries, such as location, type, etc. Conversely, tasks involving prediction and inference on metadata could benefit from information held within timeseries. However, there exists no standard way of jointly modeling both timeseries data and descriptive metadata. Moreover, metadata frequently contains highly correlated or redundant information, and may contain errors and missing values. We first consider the problem of learning the inherent probabilistic graphical structure of metadata as a Bayesian network. This has two main benefits: (i) once structured as a graphical model, metadata is easier to use in order to improve tasks on temporal data, and (ii) the learned model enables inference tasks on metadata alone, such as missing data imputation. However, Bayesian network structure learning is a tremendous mathematical challenge that involves an NP-hard optimization problem. We present a tailor-made structure learning algorithm, inspired by novel theoretical results, that exploits the (quasi-)deterministic dependencies typically present in descriptive metadata. This algorithm is tested on numerous benchmark datasets and on industrial metadatasets containing deterministic relationships. In both cases it proved to be significantly faster than the state of the art, and it even found better-performing structures on industrial data. Moreover, the learned Bayesian networks are consistently sparser and therefore more readable. We then focus on designing a model that includes both static (meta)data and dynamic data.
Taking inspiration from state-of-the-art probabilistic graphical models for temporal data (Dynamic Bayesian Networks) and from our previously described approach to metadata modeling, we present a general methodology to jointly model metadata and temporal data as a hybrid static-dynamic Bayesian network. We propose two main algorithms associated with this representation: (i) a learning algorithm which, while being optimized for industrial data, is still generalizable to any task of static and dynamic data fusion, and (ii) an inference algorithm, enabling both the usual tasks on temporal or static data alone and tasks using the two types of data. We then provide results on diverse cross-field applications such as forecasting, metadata replenishment from timeseries, and alarm dependency analysis, using data from some of Schneider Electric's challenging use-cases. Finally, we discuss some of the notions introduced during the thesis, including ways to measure the generalization performance of a Bayesian network by a score inspired by the cross-validation procedure from supervised machine learning. We also propose various extensions to the algorithms and theoretical results presented in the previous chapters, and formulate some research perspectives.
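The (quasi-)deterministic dependencies in descriptive metadata that this abstract mentions can be screened with empirical conditional entropy: H(Y | X) is zero exactly when Y is a function of X. The helper below is a hypothetical illustration of that screening idea, not code from the thesis, and the sample metadata columns are invented.

```python
import math
from collections import Counter

def cond_entropy(xs, ys):
    """Empirical conditional entropy H(Y | X) in bits over paired samples."""
    n = len(xs)
    joint = Counter(zip(xs, ys))   # counts of (x, y) pairs
    marg_x = Counter(xs)           # counts of x values
    h = 0.0
    for (x, _), c in joint.items():
        p_xy = c / n
        p_y_given_x = c / marg_x[x]
        h -= p_xy * math.log2(p_y_given_x)
    return h

# Invented metadata columns for five devices:
city = ["Lyon", "Lyon", "Nice", "Nice", "Paris"]
country = ["FR", "FR", "FR", "FR", "FR"]   # fully determined by city
sensor = ["a", "b", "a", "b", "a"]         # not determined by city

print(cond_entropy(city, country))  # 0.0: deterministic dependency
print(cond_entropy(city, sensor))   # 0.8: no deterministic link
```

A structure learner can use such zero-entropy pairs to fix edge directions cheaply before tackling the NP-hard search over the remaining structure.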
Books on the topic "Machine Learning Bayésien"
Nicholson, Ann E., ed. Bayesian Artificial Intelligence. Boca Raton, FL: Chapman & Hall/CRC, 2004.
Nicholson, Ann E., ed. Bayesian Artificial Intelligence. 2nd ed. Boca Raton, FL: CRC Press, 2011.
Korb, Kevin B., and Ann E. Nicholson. Bayesian Artificial Intelligence. Taylor & Francis Group, 2003.
Bayesian Artificial Intelligence. Taylor & Francis Group, 2023.
Nielsen, Thomas D., and Finn V. Jensen. Bayesian Networks and Decision Graphs. Springer New York, 2010.