Academic literature on the topic "Réseaux neuronaux bayésiens"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Réseaux neuronaux bayésiens".
Next to every source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a pdf and read its abstract online whenever it is available in the metadata.
Theses on the topic "Réseaux neuronaux bayésiens"
Rossi, Simone. "Improving Scalability and Inference in Probabilistic Deep Models". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS042.
Throughout the last decade, deep learning has reached a sufficient level of maturity to become the preferred choice for solving machine-learning problems and for supporting decision-making processes. At the same time, deep learning is generally not equipped with the ability to accurately quantify the uncertainty of its predictions, making these models less suitable for risk-critical applications. A possible solution to this problem is to employ a Bayesian formulation; however, while this offers an elegant treatment, it is analytically intractable and requires approximations. Despite the huge advancements of the last few years, there is still a long way to go before these approaches become widely applicable. In this thesis, we address some of the challenges of modern Bayesian deep learning by proposing and studying solutions that improve the scalability and inference of these models. The first part of the thesis is dedicated to deep models in which inference is carried out using variational inference (VI). Specifically, we study the role of the initialization of the variational parameters, and we show how careful initialization strategies can make VI deliver good performance even in large-scale models. In this part of the thesis we also study the over-regularization effect of the variational objective on over-parameterized models. To tackle this problem, we propose a novel parameterization based on the Walsh-Hadamard transform; not only does this resolve the over-regularization effect of VI, but it also allows us to model non-factorized posteriors while keeping time and space complexity under control. The second part of the thesis is dedicated to a study of the role of priors. While priors are an essential building block of Bayes' rule, picking good priors for deep learning models is generally hard. For this reason, we propose two different strategies based (i) on the functional interpretation of neural networks and (ii) on a scalable procedure for performing model selection on the prior hyper-parameters, akin to maximization of the marginal likelihood. To conclude this part, we analyze a different kind of Bayesian model (the Gaussian process) and study the effect of placing a prior on all the hyper-parameters of these models, including the additional variables required by inducing-point approximations. We also show how it is possible to infer free-form posteriors on these variables, which would conventionally have been point-estimated.
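The variational-inference setup that this and several of the theses below build on can be illustrated with a minimal sketch: a diagonal Gaussian posterior over the weights of one linear layer, sampled via the reparameterization trick, with a closed-form KL term against a standard normal prior. The layer size, initialization values, and prior are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_diag_gaussian(mu, log_sigma):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over all weights (closed form)
    sigma2 = np.exp(2.0 * log_sigma)
    return 0.5 * np.sum(sigma2 + mu**2 - 1.0 - 2.0 * log_sigma)

def sample_weights(mu, log_sigma):
    # Reparameterization trick: w = mu + sigma * eps, with eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

# Variational parameters of a 2 -> 1 linear layer (illustrative sizes)
mu = np.zeros((2, 1))
log_sigma = np.full((2, 1), -1.0)  # small initial scale; initialization matters for VI

x = np.array([[1.0, 2.0]])
w = sample_weights(mu, log_sigma)     # one Monte Carlo weight sample
y = x @ w                             # stochastic forward pass
kl = kl_diag_gaussian(mu, log_sigma)  # regularizer in the variational objective
```

In a full training loop, `kl` would be added to the expected negative log-likelihood to form the variational objective (ELBO), and `mu` and `log_sigma` would be optimized by gradient descent.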
Labatut, Vincent. "Réseaux causaux probabilistes à grande échelle : un nouveau formalisme pour la modélisation du traitement de l'information cérébrale". Phd thesis, Université Paul Sabatier - Toulouse III, 2003. http://tel.archives-ouvertes.fr/tel-00005190.
Liu, Haoran. "Statistical and intelligent methods for default diagnosis and localization in a continuous tubular reactor". Phd thesis, INSA de Rouen, 2009. http://tel.archives-ouvertes.fr/tel-00560886.
Kozyrskiy, Bogdan. "Exploring the Intersection of Bayesian Deep Learning and Gaussian Processes". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS064archi.pdf.
Deep learning played a significant role in establishing machine learning as a must-have instrument in multiple areas. The use of deep learning poses several challenges. Deep learning requires a lot of computational power for training and applying models. Another problem with deep learning is its inability to estimate the uncertainty of its predictions, which creates obstacles in risk-sensitive applications. This thesis presents four projects to address these problems. We propose an approach that makes use of Optical Processing Units to reduce energy consumption and speed up the inference of deep models. We address the problem of uncertainty estimation for classification with Bayesian inference. We introduce techniques for deep models that decrease the cost of Bayesian inference. We develop a novel framework to accelerate Gaussian Process regression. We propose a technique to impose meaningful functional priors on deep models through Gaussian Processes.
Tran, Gia-Lac. "Advances in Deep Gaussian Processes : calibration and sparsification". Electronic Thesis or Diss., Sorbonne université, 2020. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2020SORUS410.pdf.
Gaussian Processes (GPs) are an attractive way of doing non-parametric Bayesian modeling in supervised learning problems. It is well known that GPs provide both inferences and predictive uncertainties on a firm mathematical footing. However, GPs are often disfavored by practitioners because of the limited expressiveness of their kernels and their computational requirements. Integrating (convolutional) neural networks with GPs is a promising way to enhance their representational power. As our first contribution, we empirically show that these combinations are miscalibrated, which leads to over-confident predictions. We also propose a novel, well-calibrated solution for merging neural structures and GPs by using random features and variational inference techniques. In addition, these frameworks can be intuitively extended to reduce the computational cost by using structural random features. In terms of computational cost, exact Gaussian Processes scale cubically with the training size. Inducing-point-based Gaussian Processes are a common choice for mitigating this bottleneck: they select a small set of active points through a global distillation from the available observations. However, the general case remains elusive, and the required number of active points may still exceed a given computational budget. In our second study, we propose Sparse-within-Sparse Gaussian Processes, which enable approximation with a large number of inducing points without incurring a prohibitive computational cost.
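The inducing-point idea this abstract refers to can be sketched with a subset-of-regressors predictive mean, which replaces the cubic-cost exact GP solve over N training points with an M x M system for M inducing points. The kernel, inducing locations, noise level, and toy data below are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between two row-wise point sets
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(200, 1))            # N = 200 training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

Z = np.linspace(-3.0, 3.0, 10)[:, None]              # M = 10 inducing points, M << N
noise = 0.1

Kmm = rbf(Z, Z)
Kmn = rbf(Z, X)
# Subset-of-regressors predictive mean: solve an M x M system (O(N M^2) work)
# instead of the N x N system of the exact GP (O(N^3) work).
A = noise**2 * Kmm + Kmn @ Kmn.T
Xs = np.linspace(-3.0, 3.0, 50)[:, None]             # test inputs
mean = rbf(Xs, Z) @ np.linalg.solve(A, Kmn @ y)
```

With only 10 inducing points the predictive mean already tracks the underlying sine function closely on this toy problem; the trade-off the abstract discusses is how large M must grow on harder data.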
Rio, Maxime. "Modèles bayésiens pour la détection de synchronisations au sein de signaux électro-corticaux". Phd thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00859307.
Trinh, Quoc Anh. "Méthodes neuronales dans l'analyse de survie". Evry, Institut national des télécommunications, 2007. http://www.theses.fr/2007TELE0004.
This thesis proposes a generalization of conventional survival models in which the linear predictors are replaced by nonlinear multi-layer perceptrons of the variables. This neural-network modeling predicts survival times while taking into account time effects and interactions between variables. The neural network models are validated by cross-validation or by a Bayesian selection criterion based on the model's posterior probability. The prediction is refined by bootstrap aggregating (bagging) and Bayesian model averaging to increase precision. Moreover, censoring, a particularity of survival analysis, calls for a survival model that can take into account all available knowledge about the data in order to obtain a better prediction. The Bayesian approach is therefore proposed, because it allows a better generalization of neural networks by avoiding overfitting. In addition, hierarchical models in Bayesian learning of neural networks are perfectly suited to selecting relevant variables, which gives a better explanation of the time effects and the interactions between variables.
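The bootstrap-aggregating step mentioned in this abstract is generic; the following sketch shows the idea on a deliberately simple through-origin linear fit rather than a survival network (the data and base model are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(100, 1))
y = 2.0 * X[:, 0] + 0.2 * rng.standard_normal(100)  # true slope 2.0 plus noise

def fit_slope(Xb, yb):
    # Least-squares slope through the origin: stand-in for one base model
    return float(Xb[:, 0] @ yb) / float(Xb[:, 0] @ Xb[:, 0])

# Bootstrap aggregating: refit the base model on resampled data, average results
slopes = []
for _ in range(50):
    idx = rng.integers(0, len(X), size=len(X))  # resample with replacement
    slopes.append(fit_slope(X[idx], y[idx]))
bagged_slope = float(np.mean(slopes))
```

In the thesis's setting the base model would be a survival network and the aggregated quantity a predicted survival time, but the resample-refit-average structure is the same.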
Fond, Antoine. "Localisation par l'image en milieu urbain : application à la réalité augmentée". Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0028/document.
This thesis addresses the problem of localization in urban areas. Inferring accurate positioning in the city is important in many applications, such as augmented reality or mobile robotics. However, systems based on inertial sensors (IMUs) are subject to significant drift, and GPS data can suffer from a valley effect that limits its accuracy. A natural solution is to rely on camera pose estimation from computer vision. We note that buildings are the main visual landmarks for human beings and are also objects of interest for augmented reality applications. We therefore aim to compute the camera pose relative to a database of known reference buildings from a single image. The problem is twofold: find the visible references in the current image (place recognition) and compute the camera pose relative to them. Conventional approaches to these two sub-problems are challenged in urban environments by strong perspective effects, frequent repetitions, and the visual similarity between facades. While specific approaches have been developed that exploit the high structural regularity of such environments, they still suffer from a number of limitations in the detection and recognition of facades as well as in pose computation through model registration. The original method developed in this thesis belongs to this family of approaches and aims to overcome these limitations in terms of effectiveness and robustness to clutter and to changes of viewpoint and illumination. To do so, the main idea is to take advantage of recent advances in deep learning with convolutional neural networks to extract high-level information on which geometric models can be based. Our approach is thus a mixed bottom-up/top-down one, divided into three key stages. We first propose a method to estimate the rotation component of the camera pose. The three main vanishing points of an image of an urban environment, known as the Manhattan vanishing points, are detected by a convolutional neural network (CNN) that estimates both these vanishing points and the image segmentation relative to them. A second refinement step uses this information in a Bayesian model to estimate these points more accurately. With the camera's rotation estimated, the images can be rectified and thus freed from perspective effects in order to find the translation. In a second contribution, we detect the facades in these rectified images, recognize them in a database of known buildings, and estimate a rough translation. For the sake of efficiency, a series of cues based on facade-specific characteristics (repetitions, symmetry, semantics) is proposed to enable the fast selection of facade proposals. The proposals are then classified as facade or non-facade according to a new contextual CNN descriptor. Finally, the detected facades are matched to the references by a nearest-neighbor search using a metric learned on these descriptors. Eventually, we propose a method to refine the estimation of the translation by relying on the semantic segmentation of the image inferred by a CNN, chosen for its robustness to changes of illumination and small deformations. Since the facade is identified in the previous step, we adopt a model-based approach by registration. Since the registration and segmentation problems are linked, a Bayesian model is proposed that solves both jointly. This joint processing improves the results of registration and segmentation while remaining efficient in terms of computation time. These three parts have been validated on consistent community data sets. The results show that our approach is fast and more robust to changes in shooting conditions than previous methods.
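For readers unfamiliar with the geometry behind vanishing-point detection: in homogeneous coordinates a vanishing point is simply the intersection of the images of parallel 3D lines, computable with two cross products. The image points below are made-up illustrative values, not data from the thesis.

```python
import numpy as np

def line_through(p, q):
    # Homogeneous image line through two points: l = p x q
    return np.cross(np.array([*p, 1.0]), np.array([*q, 1.0]))

def intersection(l1, l2):
    # Intersection of two homogeneous lines: v = l1 x l2
    v = np.cross(l1, l2)
    return v / v[2]  # dehomogenize (assumes the point is finite)

# Images of two parallel 3D lines (made-up points) meet at the vanishing point
l1 = line_through((0.0, 0.0), (4.0, 1.0))
l2 = line_through((0.0, 2.0), (4.0, 2.5))
vp = intersection(l1, l2)  # -> array([16., 4., 1.])
```

A CNN-based detector such as the one described above predicts (and a Bayesian model refines) such points directly from image evidence, which is far more robust than intersecting a pair of noisy line detections as in this sketch.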
Tran, Ba-Hien. "Advancing Bayesian Deep Learning : Sensible Priors and Accelerated Inference". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS280.pdf.
Over the past decade, deep learning has witnessed remarkable success in a wide range of applications, revolutionizing various fields with its unprecedented performance. However, a fundamental limitation of deep learning models lies in their inability to accurately quantify prediction uncertainty, posing challenges for applications that demand robust risk assessment. Fortunately, Bayesian deep learning provides a promising solution by adopting a Bayesian formulation for neural networks. Despite significant progress in recent years, there remain several challenges that hinder the widespread adoption and applicability of Bayesian deep learning. In this thesis, we address some of these challenges by proposing solutions to choose sensible priors and accelerate inference for Bayesian deep learning models. The first contribution of the thesis is a study of the pathologies associated with poor choices of priors for Bayesian neural networks for supervised learning tasks and a proposal to tackle this problem in a practical and effective way. Specifically, our approach involves reasoning in terms of functional priors, which are more easily elicited, and adjusting the priors of neural network parameters to align with these functional priors. The second contribution is a novel framework for conducting model selection for Bayesian autoencoders for unsupervised tasks, such as representation learning and generative modeling. To this end, we reason about the marginal likelihood of these models in terms of functional priors and propose a fully sample-based approach for its optimization. The third contribution is a novel fully Bayesian autoencoder model that treats both local latent variables and the global decoder in a Bayesian fashion. We propose an efficient amortized MCMC scheme for this model and impose sparse Gaussian process priors over the latent space to capture correlations between latent encodings.
The last contribution is a simple yet effective approach to improving likelihood-based generative models through data mollification. This accelerates inference for these models by allowing accurate density estimation in low-density regions while addressing manifold overfitting.
Bourgeois, Yoann. "Les réseaux de neurones artificiels pour mesurer les risques économiques et financiers". Paris, EHESS, 2003. http://www.theses.fr/2003EHES0118.
The objective of this thesis is to provide complete methodologies for solving prediction and classification problems in economics and finance by using artificial neural networks. The outline shows that the thesis contributes in several ways to establishing a statistical methodology for neural networks. We proceed in four chapters. The first chapter describes supervised and unsupervised neural network methodologies for modeling quantitative or qualitative variables. In the second chapter, we are interested in the Bayesian approach for supervised neural networks and in the development of a set of statistical misspecification tests for binary choice models. In chapter three, we show that multivariate supervised neural networks make it possible to take structural changes into account, and that the neural network methodology is able to estimate probabilities of exchange crises. In chapter four, we develop a complete neural-network-based GARCH model to manage a stock portfolio. We introduce notions such as the conditional return and conditional risk of a stock or a portfolio. Next, we apply a Bayesian Self-Organizing Map to estimate the univariate probability density function of the DM/USD exchange rate.
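The GARCH component referred to in chapter four maintains a conditional-variance recursion alongside the return series; a minimal GARCH(1,1) sketch follows, with invented parameter values and simulated returns unrelated to the thesis's estimates.

```python
import numpy as np

def garch11_volatility(returns, omega=0.05, alpha=0.1, beta=0.85):
    # Conditional-variance recursion:
    #   sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1]
    sigma2 = np.empty(len(returns))
    sigma2[0] = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)

rng = np.random.default_rng(3)
r = rng.standard_normal(500)    # placeholder returns; real use: asset returns
vol = garch11_volatility(r)     # conditional volatility path
```

In a neural-network-based variant of the kind the abstract describes, the fixed linear recursion would be replaced or augmented by a learned nonlinear function of past returns and variances.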
Books on the topic "Réseaux neuronaux bayésiens"
Pattern recognition and neural networks. Cambridge: Cambridge University Press, 1996.
Nicholson, Ann E., ed. Bayesian Artificial Intelligence. 2nd ed. Boca Raton, FL: CRC Press, 2011.
Réseaux bayésiens. 3rd ed. Paris: Eyrolles, 2007.
Pattern Recognition and Neural Networks. Cambridge University Press, 2007.
Korb, Kevin B., and Ann E. Nicholson. Bayesian Artificial Intelligence. Taylor & Francis Group, 2003.
Bayesian Artificial Intelligence. Taylor & Francis Group, 2023.
Bayesian Networks and Decision Graphs (Information Science and Statistics). Springer, 2007.
Nielsen, Thomas D., and Finn V. Jensen. Bayesian Networks and Decision Graphs. Springer New York, 2010.