Dissertations / Theses on the topic 'Réseaux neuronaux bayésiens'
Listed below are the top dissertations and theses for research on the topic 'Réseaux neuronaux bayésiens', with abstracts reproduced where available.
Rossi, Simone. "Improving Scalability and Inference in Probabilistic Deep Models." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS042.
Throughout the last decade, deep learning has reached a sufficient level of maturity to become the preferred choice for solving machine learning problems and aiding decision-making processes. At the same time, deep learning is generally not equipped to accurately quantify the uncertainty of its predictions, which makes these models less suitable for risk-critical applications. A possible solution is to employ a Bayesian formulation; however, while this offers an elegant treatment, it is analytically intractable and requires approximations. Despite the huge advancements of the last few years, there is still a long way to go before these approaches are widely applicable. In this thesis, we address some of the challenges of modern Bayesian deep learning by proposing and studying solutions that improve the scalability and inference of these models. The first part of the thesis is dedicated to deep models where inference is carried out using variational inference (VI). Specifically, we study the role of the initialization of the variational parameters, and we show how careful initialization strategies can make VI deliver good performance even in large-scale models. In this part of the thesis we also study the over-regularization effect of the variational objective on over-parameterized models. To tackle this problem, we propose a novel parameterization based on the Walsh-Hadamard transform; not only does this resolve the over-regularization effect of VI, but it also allows us to model non-factorized posteriors while keeping time and space complexity under control. The second part of the thesis is dedicated to a study of the role of priors. While they are an essential building block of Bayes' rule, picking good priors for deep learning models is generally hard. For this reason, we propose two different strategies based (i) on a functional interpretation of neural networks and (ii) on a scalable procedure for performing model selection on the prior hyper-parameters, akin to maximization of the marginal likelihood. To conclude this part, we analyze a different kind of Bayesian model (Gaussian processes) and study the effect of placing a prior on all the hyper-parameters of these models, including the additional variables required by inducing-point approximations. We also show how it is possible to infer free-form posteriors on these variables, which would conventionally have been point-estimated.
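As an illustration of the kind of model the first part of this thesis studies, the sketch below implements mean-field Gaussian variational inference for a single linear layer with the reparameterization trick; the initialization of `mu` and `log_sigma` is exactly the kind of choice the thesis argues matters. This is a generic textbook construction, not the thesis's Walsh-Hadamard parameterization, and all names and values are illustrative.

```python
import math
import torch
import torch.nn as nn

class BayesLinear(nn.Module):
    """Mean-field Gaussian variational posterior over one linear layer."""

    def __init__(self, d_in, d_out, prior_std=1.0):
        super().__init__()
        # Variational parameters. Their initialization is the kind of choice
        # the thesis argues is critical for VI at scale: here the means use
        # a fan-in scaling and the standard deviations start small.
        self.mu = nn.Parameter(torch.randn(d_out, d_in) / math.sqrt(d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -5.0))
        self.prior_std = prior_std

    def forward(self, x):
        # Reparameterization trick: sample weights as mu + sigma * eps.
        sigma = self.log_sigma.exp()
        w = self.mu + sigma * torch.randn_like(sigma)
        return x @ w.t()

    def kl(self):
        # Closed-form KL(q || p) between the factorized Gaussian posterior
        # and an isotropic zero-mean Gaussian prior, summed over weights.
        sigma, p = self.log_sigma.exp(), self.prior_std
        return (torch.log(p / sigma)
                + (sigma**2 + self.mu**2) / (2 * p**2) - 0.5).sum()

# The negative ELBO to minimize is the expected data NLL plus the sum of
# kl() over all Bayesian layers (scaled by 1/N when using minibatches).
```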
Labatut, Vincent. "Réseaux causaux probabilistes à grande échelle : un nouveau formalisme pour la modélisation du traitement de l'information cérébrale." Phd thesis, Université Paul Sabatier - Toulouse III, 2003. http://tel.archives-ouvertes.fr/tel-00005190.
Liu, Haoran. "Statistical and intelligent methods for default diagnosis and localization in a continuous tubular reactor." Phd thesis, INSA de Rouen, 2009. http://tel.archives-ouvertes.fr/tel-00560886.
Full textKozyrskiy, Bogdan. "Exploring the Intersection of Bayesian Deep Learning and Gaussian Processes." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS064archi.pdf.
Deep learning played a significant role in establishing machine learning as a must-have instrument in multiple areas. However, its use poses several challenges. Deep learning requires a lot of computational power for training and applying models. Another problem is its inability to estimate the uncertainty of predictions, which creates obstacles in risk-sensitive applications. This thesis presents four projects to address these problems. We propose an approach that makes use of Optical Processing Units to reduce energy consumption and speed up the inference of deep models. We address the problem of uncertainty estimation for classification with Bayesian inference. We introduce techniques for deep models that decrease the cost of Bayesian inference. We develop a novel framework to accelerate Gaussian process regression. Finally, we propose a technique to impose meaningful functional priors on deep models through Gaussian processes.
Tran, Gia-Lac. "Advances in Deep Gaussian Processes : calibration and sparsification." Electronic Thesis or Diss., Sorbonne université, 2020. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2020SORUS410.pdf.
Gaussian processes (GPs) are an attractive way of doing non-parametric Bayesian modeling in supervised learning problems. It is well known that GPs can produce predictions together with principled predictive uncertainties, backed by a firm mathematical foundation. However, practitioners often avoid GPs because of the limited expressiveness of their kernels and because of their computational requirements. Integrating (convolutional) neural networks with GPs is a promising way to enhance their representational power. As our first contribution, we empirically show that such combinations are miscalibrated, which leads to over-confident predictions. We then propose a novel, well-calibrated solution for merging neural structures and GPs using random features and variational inference techniques. In addition, these frameworks can be naturally extended to reduce the computational cost by using structured random features. In terms of computational cost, exact Gaussian processes have cubic complexity in the training-set size. Inducing-point Gaussian processes are a common choice for mitigating this bottleneck by selecting a small set of active points through a global distillation of the available observations. However, the general case remains elusive, and the required number of active points may still exceed a given computational budget. In our second study, we propose Sparse-within-Sparse Gaussian Processes, which enable approximation with a large number of inducing points without incurring a prohibitive computational cost.
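For context, the sketch below shows the standard inducing-point (subset-of-regressors) predictive mean that such methods build on, reducing the exact GP's cubic cost to O(nm^2) for m inducing points. It is a minimal textbook construction, not the Sparse-within-Sparse algorithm itself, and all data and values are illustrative.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-3, 3, 200))             # n = 200 training inputs
y = np.sin(x) + 0.1 * rng.standard_normal(200)   # noisy targets
z = np.linspace(-3, 3, 15)                       # m = 15 inducing inputs
xs = np.linspace(-3, 3, 50)                      # test inputs
noise = 0.1**2                                   # observation noise variance

# Subset-of-regressors predictive mean,
#   mu_* = K_*m (noise * K_mm + K_mn K_nm)^{-1} K_mn y,
# which costs O(n m^2) instead of the exact GP's O(n^3).
Kmm = rbf(z, z) + 1e-8 * np.eye(len(z))   # jitter for numerical stability
Knm = rbf(x, z)
A = noise * Kmm + Knm.T @ Knm
mean = rbf(xs, z) @ np.linalg.solve(A, Knm.T @ y)
print(mean[:5])
```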
Rio, Maxime. "Modèles bayésiens pour la détection de synchronisations au sein de signaux électro-corticaux." Phd thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00859307.
Full textTrinh, Quoc Anh. "Méthodes neuronales dans l'analyse de survie." Evry, Institut national des télécommunications, 2007. http://www.theses.fr/2007TELE0004.
This thesis proposes a generalization of conventional survival models in which the linear predictor is replaced by a nonlinear multi-layer perceptron of the covariates. This neural-network modelling predicts survival times while taking into account time effects and interactions between variables. The neural network models are validated by cross-validation or by a Bayesian selection criterion based on the model's posterior probability. The prediction is refined by bootstrap aggregating (bagging) and Bayesian model averaging to increase precision. Moreover, censoring, the particularity of survival analysis, calls for a survival model that can take into account all available knowledge about the data in order to obtain a better prediction. A Bayesian approach is therefore proposed, as it allows better generalization of the neural networks by avoiding overfitting. In addition, hierarchical models in Bayesian learning of neural networks are perfectly suited to selecting relevant variables, which gives a better explanation of the time effects and the interactions between variables.
Fond, Antoine. "Localisation par l'image en milieu urbain : application à la réalité augmentée." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0028/document.
This thesis addresses the problem of localization in urban areas. Accurate positioning in the city is important in many applications, such as augmented reality and mobile robotics. However, systems based on inertial measurement units (IMUs) are subject to significant drift, and GPS data can suffer from a canyon effect that limits their accuracy. A natural solution is to rely on camera pose estimation from computer vision. We note that buildings are the main visual landmarks for human beings, as well as objects of interest for augmented reality applications. We therefore aim to compute the camera pose relative to a database of known reference buildings from a single image. The problem is twofold: find the visible references in the current image (place recognition) and compute the camera pose relative to them. Conventional approaches to these two sub-problems struggle in urban environments because of strong perspective effects, frequent repetitions, and the visual similarity between facades. While specific approaches have been developed that exploit the high structural regularity of such environments, they still suffer from a number of limitations in the detection and recognition of facades as well as in pose computation through model registration. The original method developed in this thesis belongs to this family of approaches and aims to overcome these limitations in terms of effectiveness and robustness to clutter and to changes of viewpoint and illumination. To do so, the main idea is to take advantage of recent advances in deep learning with convolutional neural networks to extract high-level information on which geometric models can be built. Our approach is thus a mixed bottom-up/top-down pipeline divided into three key stages. We first propose a method to estimate the rotation component of the camera pose. The three main vanishing points of images of urban environments, known as the Manhattan vanishing points, are detected by a convolutional neural network (CNN) that estimates both these vanishing points and the segmentation of the image relative to them. A second refinement step uses this information and the image segmentation in a Bayesian model to estimate these points more accurately. Given the camera's rotation, the images can be rectified, and thus freed from perspective effects, in order to find the translation. In a second contribution, we detect the facades in these rectified images in order to recognize them within a database of known buildings and to estimate a rough translation. For efficiency, a series of cues based on facade-specific characteristics (repetitions, symmetry, semantics) is proposed to enable the fast selection of facade proposals. These proposals are then classified as facade or non-facade according to a new contextual CNN descriptor. The matching of the detected facades to the references is done by a nearest-neighbor search using a metric learned on these descriptors. Finally, we propose a method to refine the estimation of the translation by relying on the semantic segmentation inferred by a CNN, chosen for its robustness to changes of illumination and small deformations. Since the facade has been identified in the previous step, we adopt a model-based registration approach. Since the problems of registration and segmentation are linked, a Bayesian model is proposed that solves both problems jointly. This joint processing improves the results of registration and segmentation while remaining efficient in terms of computation time. These three parts have been validated on substantial community datasets. The results show that our approach is fast and more robust to changes in shooting conditions than previous methods.
Tran, Ba-Hien. "Advancing Bayesian Deep Learning : Sensible Priors and Accelerated Inference." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS280.pdf.
Over the past decade, deep learning has achieved remarkable success in a wide range of applications, revolutionizing various fields with its unprecedented performance. However, a fundamental limitation of deep learning models lies in their inability to accurately quantify prediction uncertainty, which poses challenges for applications that demand robust risk assessment. Fortunately, Bayesian deep learning provides a promising solution by adopting a Bayesian formulation of neural networks. Despite significant progress in recent years, several challenges still hinder the widespread adoption and applicability of Bayesian deep learning. In this thesis, we address some of these challenges by proposing solutions to choose sensible priors and to accelerate inference for Bayesian deep learning models. The first contribution of the thesis is a study of the pathologies associated with poor choices of priors for Bayesian neural networks in supervised learning tasks, together with a practical and effective proposal to tackle this problem. Specifically, our approach involves reasoning in terms of functional priors, which are more easily elicited, and adjusting the priors of the neural network parameters to align with these functional priors. The second contribution is a novel framework for conducting model selection for Bayesian autoencoders in unsupervised tasks such as representation learning and generative modeling. To this end, we reason about the marginal likelihood of these models in terms of functional priors and propose a fully sample-based approach for its optimization. The third contribution is a novel fully Bayesian autoencoder that treats both the local latent variables and the global decoder in a Bayesian fashion. We propose an efficient amortized MCMC scheme for this model and impose sparse Gaussian process priors over the latent space to capture correlations between latent encodings. The last contribution is a simple yet effective approach to improving likelihood-based generative models through data mollification. This accelerates inference for these models by allowing accurate density estimation in low-density regions while addressing manifold overfitting.
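Data mollification is only described at a high level here; the toy sketch below shows one plausible reading, assumed rather than taken from the thesis: training inputs are perturbed with Gaussian noise whose scale is annealed to zero, so early training sees a smoothed data distribution and late training sees (almost) the raw data.

```python
import numpy as np

def mollify(x, step, total_steps, sigma_max=0.5):
    """Additive Gaussian noise whose scale is annealed to zero over
    training. The linear schedule and the noise scale are illustrative
    assumptions, not taken from the thesis."""
    sigma = sigma_max * (1.0 - step / total_steps)
    return x + sigma * np.random.default_rng(step).standard_normal(x.shape)
```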
Bourgeois, Yoann. "Les réseaux de neurones artificiels pour mesurer les risques économiques et financiers." Paris, EHESS, 2003. http://www.theses.fr/2003EHES0118.
The objective of this thesis is to provide complete methodologies for solving prediction and classification problems in economics and finance using artificial neural networks. The work contributes in several ways to establishing a statistical methodology for neural networks. We proceed in four chapters. The first chapter describes supervised and unsupervised neural network methodologies for modelling quantitative or qualitative variables. In the second chapter, we consider the Bayesian approach to supervised neural networks and develop a set of misspecification tests for binary choice models. In chapter three, we show that multivariate supervised neural networks make it possible to take structural changes into account, and that the neural network methodology can estimate probabilities of exchange-rate crises. In chapter four, we develop a complete neural-network-based GARCH model to manage a stock portfolio. We introduce notions such as the conditional return and conditional risk of a stock or portfolio. Finally, we apply Bayesian self-organizing maps to estimate the univariate probability density function of the DM/USD exchange rate.
Tchoumatchenko, Irina. "Extraction des règles logiques dans des réseaux de neurones formels : application a la prédiction de la structure secondaire des protéines." Paris 6, 1994. http://www.theses.fr/1994PA066448.
Boubezoul, Abderrahmane. "Système d'aide au diagnostic par apprentissage : application aux systèmes microélectroniques." Aix-Marseille 3, 2008. http://www.theses.fr/2008AIX30072.
Full textLanternier, Brice. "Retour d'expérience et fiabilité prévisionnelle : mise en oeuvre de modèles et détermination des facteurs influant la fiabilité pour le calcul de taux de défaillance des matériels mécaniques utilisés en tant que dispositifs de sécurité." Saint-Etienne, 2007. http://www.theses.fr/2007STET4011.
Functional safety assessment requires quantifying the safety level of equipment through qualitative and quantitative analysis. Industrial actors who have no feedback data specific to their activities experience difficulties in producing reliable and relevant results. Designers of reliability databases for electronic components have defined models for calculating failure rates as a function of the parameters of use; nothing equivalent exists in the field of mechanical equipment. This research aims to develop a methodology to improve reliability predictions for mechanical and electromechanical equipment. This work therefore implements models that allow accurate reliability prediction, taking into account the specificities of mechanical equipment and the factors that influence reliability. We propose a method for analyzing different sources of operating feedback based on the quality and quantity of the available information. The study relies solely on equipment operating feedback to account for the factors influencing reliability, the subject of this thesis. Thus, in order to deal efficiently with operating feedback drawn from generic databases, the use of Bayesian techniques and a weighting of the various input data according to pre-defined factors is proposed. The second, fully parametric, approach is based on a proportional-hazards model to obtain an environmental function reflecting the impact of the factors on reliability. Finally, a neural network model is available when the operating feedback is plentiful in both quantity and quality.
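For reference, the proportional-hazards form alluded to above can be written as follows; the notation (baseline rate, covariates, coefficients) is ours, not the thesis's.

```latex
% Proportional-hazards form (notation ours): a baseline failure rate
% \lambda_0(t) modulated by an environmental function of the influencing
% factors z, with coefficients \beta fitted from operating feedback.
\lambda(t \mid z) = \lambda_0(t)\, e^{\beta^\top z}
```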
Verley, Gilles. "Contribution à la validation des réseaux connexionnistes en reconnaissance des formes." Tours, 1994. http://www.theses.fr/1994TOUR4024.
Mothe, Josiane. "Modèle connexionniste pour la recherche d'informations. Expansion dirigée de requêtes et apprentissage." Toulouse 3, 1994. http://www.theses.fr/1994TOU30080.
Full textDehaene, Guillaume. "Le statisticien neuronal : comment la perspective bayésienne peut enrichir les neurosciences." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB189.
Bayesian inference answers key questions of perception, such as: "What should I believe given what I have perceived?". As such, it is a rich source of models for cognitive science and neuroscience (Knill and Richards, 1996). This PhD manuscript explores two such models. We first investigate an efficient coding problem, asking how best to represent probabilistic information in unreliable neurons. We innovate on older models of this kind by introducing limited input information into ours. We then explore a brand-new ideal observer model of sound localization using the interaural time difference cue, where current models are purely descriptive models of the electrophysiology. Finally, we explore the properties of the Expectation Propagation approximate-inference algorithm, which offers great potential both for practical machine-learning applications and for neuronal population models, but is currently poorly understood.
Ouali, Abdelaziz. "Nouvelle approche de "Fouille de données" permettant le démembrement syndromique des troubles psychotiques." Versailles-St Quentin en Yvelines, 2006. http://www.theses.fr/2006VERS0002.
Current approaches in the field of data analysis applied to medicine use traditional statistical methods, which have shown their limitations. Data mining consists in exploring and processing large volumes of data, whereas the other methods are confirmatory and use structured data of often smaller size. The main motivation of our thesis is the proposal of a new approach, based on a hybrid data mining algorithm, for extracting knowledge from medical databases. The object of our study is a disease that affects about 1% of the French population: schizophrenia. Conventional descriptions, codified by means of internationally recognized classifications, have allowed the definition of nosographic categories of psychiatric disorders, which however have never been validated by physiopathological data. The result is a considerable amount of data that needs to be optimized for both operational and scientific purposes. It is thus necessary to use precise tools for phenotypic characterization and to assess the value of those variables in defining possible sub-groups of the disease. We propose a knowledge-extraction architecture merging data mining algorithms. The first part of this architecture uses association rules as the feature-selection tool for the variables. Based on this sub-group of attributes, the second part aims to supply probabilistic profiles of the phenotypic characteristics of patients suffering from schizophrenia and to build a reliable classification model using Bayesian network and neural network algorithms.
Lamirel, Jean-Charles. "Vers une approche systémique et multivues pour l'analyse de données et la recherche d'information : un nouveau paradigme." Habilitation à diriger des recherches, Université Nancy II, 2010. http://tel.archives-ouvertes.fr/tel-00552247.
Lalanne, Tristan. "Développement d'un procédé d'analyse automatique d'images trichromes appliqué à la métrologie thermique." Toulouse, ENSAE, 1999. http://www.theses.fr/1999ESAE0008.
Full textWolinski, Pierre. "Structural Learning of Neural Networks." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS026.
The structure of a neural network determines to a large extent its cost of training and use, as well as its ability to learn. These two aspects are usually in competition: the larger a neural network is, the better it will perform the task assigned to it, but the more memory and computing time it will require for training. Automating the search for efficient network structures, of reasonable size and performing well, is therefore a much-studied question in this area. Within this context, neural networks with various structures are trained, which requires a new set of training hyperparameters for each new structure tested. The aim of the thesis is to address different aspects of this problem. The first contribution is a training method that operates within a large perimeter of network structures and tasks without needing to adjust the learning rate. The second contribution is a network training and pruning technique designed to be insensitive to the initial width of the network. The last contribution is mainly a theorem that makes it possible to translate an empirical training penalty into a theoretically well-founded Bayesian prior. This work results from a search for properties that training and pruning algorithms must theoretically satisfy in order to be valid over a wide range of neural networks and objectives.
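The penalty-to-prior translation mentioned above has a standard special case worth recalling: for a negative log-likelihood loss, penalized minimization is maximum a posteriori estimation under an exponentiated-penalty prior. The theorem in the thesis is more general; the display below is only this classical correspondence.

```latex
% Classical special case (not the thesis's more general theorem): with
% \mathcal{L}(\theta) the negative log-likelihood of the data \mathcal{D},
% penalized minimization equals MAP estimation under the prior
% p(\theta) \propto e^{-\lambda \Omega(\theta)}.
\arg\min_\theta \; \mathcal{L}(\theta) + \lambda\,\Omega(\theta)
\;=\;
\arg\max_\theta \; p(\mathcal{D} \mid \theta)\, e^{-\lambda\,\Omega(\theta)}
```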
Jauffret, Adrien. "De l'auto-évaluation aux émotions : approche neuromimétique et bayésienne de l'apprentissage de comportements complexes impliquant des informations multimodales." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112120/document.
The goal of this thesis is to build a bio-inspired architecture allowing a robot to autonomously navigate over large distances. From a cognitive-science point of view, the model also aims at improving the understanding of the underlying biological mechanisms. Previous work showed that a computational model of hippocampal place cells, based on neurobiological studies made on rodents, allows a robot to learn robust navigation behaviors. The robot can learn a round or a homing behavior from a few associations between places and actions. The learning and recognition of a place were defined by visual information only, which shows its limitations when navigating large environments. Adding other sensory modalities is an effective way to improve the robustness of place recognition in complex environments. This solution led us to the elementary building blocks required for merging multimodal information. Such merging was first done by a simple conditioning between two modalities, and then improved by a more generic model of inter-modal prediction. In this model, each modality learns to predict the others in usual situations, so as to be able to detect abnormal situations and to compensate for missing information from the others. Such a low-level mechanism keeps perception coherent even if one modality is wrong. Moreover, the model can detect unexpected situations and thus exhibits some self-assessment capability: the assessment of its own perception. Following this model of self-assessment, we focus on the fundamental properties a system needs in order to evaluate its own behaviors. The first fundamental property is that evaluating a behavior means recognizing a dynamics between sensations and actions, rather than recognizing a simple sensory pattern. A first step was thus to take the sensation/action coupling into account and build a minimalist internal model of the interaction between the agent and its environment. Such a model defines the basis on which the system builds predictions and expectations. The second fundamental property of self-assessment is the ability to extract relevant information through statistical processes in order to make predictions. We show how a neural network can estimate probability density functions through a simple conditioning rule. This probabilistic learning allows Bayesian inference, since the system estimates the probability of observing a particular behavior from the statistical information it has learned about this behavior. The robot estimates the different statistical moments (mean, variance, skewness, etc.) of a behavior's dynamics by cascading a few simple conditionings. The non-recognition of such a dynamics is then interpreted as an abnormal behavior. But detecting an abnormal behavior is not sufficient to conclude that it is inefficient. The system must also monitor the temporal evolution of the abnormality to judge the relevance of the behavior. We show how an emotional meta-controller can use this novelty detection to regulate behaviors and thus select the most appropriate strategy in a given context. Finally, we show how a simple frustration mechanism allows the robot to call for help when it detects potential deadlocks. Such a mechanism highlights situations where a skills improvement is possible, as in developmental processes.
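As a rough illustration of the moment-tracking idea, and assuming exponential moving averages as the update rule (the thesis uses cascaded neural conditioning, not this code), a behavior signal can be monitored for abnormality as follows.

```python
import numpy as np

class NoveltyDetector:
    """Running estimate of the mean and variance of a behavior signal;
    an observation far from the learned dynamics is flagged as abnormal."""

    def __init__(self, rate=0.01, threshold=3.0):
        self.rate, self.threshold = rate, threshold
        self.mean, self.var = 0.0, 1.0

    def update(self, x):
        # Exponential moving estimates of the first two moments, a cheap
        # stand-in for the cascaded conditioning described in the thesis.
        delta = x - self.mean
        self.mean += self.rate * delta
        self.var += self.rate * (delta**2 - self.var)
        z = abs(x - self.mean) / np.sqrt(self.var)
        return z > self.threshold   # True = abnormal behavior
```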
Schmitt, Aurore. "Variabilité de la sénescence du squelette humain. Réflexions sur les indicateurs de l'âge au décès : à la recherche d'un outil performant." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2001. http://tel.archives-ouvertes.fr/tel-00255753.
Following these analyses, a new methodological approach is proposed. After selecting certain skeletal indicators (the pubic symphysis, the sacro-pelvic iliac surface and the sternal end of the fourth rib), we developed a new scoring system designed to optimize reproducibility and to take the variability of the indicators into account. We studied reference samples from six different geographical contexts, so as to encompass the widest possible variability of senescence. The data were then processed with a Bayesian approach in order to classify the specimens into chronological intervals. We also tested the potential of artificial neural networks, a computational mechanism well suited to handling non-linear relationships between variables.
The results showed that the sacro-pelvic iliac surface is a major indicator of age at death, but that combining several indicators does not increase the reliability of the estimate. The proposed new scoring system and data processing make it possible to classify specimens reliably and to identify individuals over 60 years of age, a category whose numbers are always under-estimated in paleobiological studies. Artificial neural networks prove to be a promising tool.
Touya, Thierry. "Méthodes d'optimisation pour l'espace et l'environnement." Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00366141.
The first part deals with an active phased-array space antenna.
The feed laws must first be computed to satisfy the radiation constraints. Using the principle of conservation of energy, we transform a problem with many local minima into a convex optimization problem whose optimum is the global minimum of the initial problem.
We then solve a topological optimization problem: the number of radiating elements (REs) must be reduced. We apply a singular value decomposition to the set of relaxed optimal amplitude values, then a topological-gradient-type algorithm decides on the groupings of elementary REs.
The second part concerns a black-box simulation of a chemical accident.
We carry out a reliability and sensitivity study over a large number of parameters (failure probabilities, design point, and influential parameters). Lacking a gradient, we use a reduced model.
In a first test case we compared neural networks with the Sparse Grid (SG) interpolation method. SGs are an emerging technique: thanks to their hierarchical nature and an adaptive algorithm, they become particularly effective for real-world problems (few influential variables).
They are applied to a higher-dimensional test case with specific improvements (successive approximations and data thresholding).
In both cases, the algorithms resulted in operational software.
Jaureguiberry, Xabier. "Fusion pour la séparation de sources audio." Electronic Thesis or Diss., Paris, ENST, 2015. http://www.theses.fr/2015ENST0030.
Underdetermined blind source separation is a complex mathematical problem that can be satisfyingly resolved for some practical applications, provided that the right separation method has been selected and carefully tuned. In order to automate this selection process, we propose in this thesis to resort to the principle of fusion, which has been widely used in the related field of classification yet is still only marginally exploited in source separation. Fusion consists in combining several methods to solve a given problem instead of selecting a single one. To this end, we introduce a general fusion framework in which a source estimate is expressed as a linear combination of the estimates of that same source given by different separation algorithms, each estimate being weighted by a fusion coefficient. For a given task, the fusion coefficients can then be learned on a representative training dataset by minimizing a cost function related to the separation objective. Going further, we also propose two ways to adapt the fusion coefficients to the mixture to be separated. The first expresses the fusion of several non-negative matrix factorization (NMF) models in a Bayesian fashion similar to Bayesian model averaging. The second aims at learning time-varying fusion coefficients with deep neural networks. All the proposed methods have been evaluated on two distinct corpora, one dedicated to speech enhancement and the other to singing voice extraction. Experimental results show that fusion always outperforms simple selection in all considered cases, the best results being obtained by adaptive time-varying fusion with neural networks.
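The core of the fusion framework is concrete enough to sketch: given a reference source and per-algorithm estimates from a training set, the fusion coefficients minimizing a squared separation error have a least-squares solution. The data below are synthetic placeholders, and the squared-error cost is one instance of the "cost function related to the separation objective".

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic placeholders: a reference source s_true from a training set,
# and estimates of that source produced by K = 3 separation algorithms.
s_true = rng.standard_normal(1000)
s_hat = np.stack([s_true + 0.3 * rng.standard_normal(1000) for _ in range(3)])

# Learn the fusion coefficients alpha by minimizing the squared separation
# error || s_true - sum_k alpha_k * s_hat_k ||^2 (ordinary least squares).
alpha, *_ = np.linalg.lstsq(s_hat.T, s_true, rcond=None)

# Fused estimate: a linear combination of the individual estimates.
s_fused = alpha @ s_hat
print("fusion coefficients:", alpha)
print("fused MSE:", np.mean((s_true - s_fused) ** 2))
```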
Saad, Ali. "Detection of Freezing of Gait in Parkinson's disease." Thesis, Le Havre, 2016. http://www.theses.fr/2016LEHA0029/document.
Freezing of gait (FoG) is an episodic phenomenon that is a common symptom of Parkinson's disease (PD). This research works toward a detection, diagnosis and correction system that prevents FoG episodes using a multi-sensor device. This particular study aims to detect and diagnose FoG using different machine learning approaches, and validates the choice of integrating multiple sensors to detect FoG with better performance. Our first contribution is the introduction of new types of sensors for the detection of FoG (a telemeter and a goniometer). Because FoG events are inconsistent, the features extracted from all the sensors are combined using principal component analysis. The second contribution is the application of a detection algorithm new to the field of FoG detection, the Gaussian neural network. The third contribution is a probabilistic modelling approach based on Bayesian belief networks that can diagnose the change in walking behavior of patients before, during and after a freezing event. Our final contribution uses tree-structured Bayesian networks to build a global model that links and diagnoses multiple Parkinson's disease symptoms, such as FoG, handwriting and speech. To achieve these goals, clinical data were acquired from patients diagnosed with PD. The acquired data were subjected to effective time- and frequency-domain feature extraction and then fed into the different detection and diagnosis approaches. The detection methods are able to detect 100% of the FoG episodes present, and the classification performance of all the approaches is studied thoroughly and carefully evaluated.
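The multi-sensor fusion step can be illustrated with a minimal sketch, assuming per-sensor feature matrices and scikit-learn's PCA; all shapes, names and dimensions are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Hypothetical per-sensor feature matrices (n windows x features each),
# e.g. accelerometer, telemeter and goniometer time/frequency features.
acc = rng.standard_normal((100, 12))
tel = rng.standard_normal((100, 4))
gon = rng.standard_normal((100, 6))

# Concatenate all sensor features, then let PCA produce a compact,
# decorrelated representation to feed the FoG detector.
fused = PCA(n_components=5).fit_transform(np.hstack([acc, tel, gon]))
print(fused.shape)  # (100, 5)
```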
Boonkongkird, Chotipan. "Deep learning for Lyman-alpha based cosmology." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS733.pdf.
As cosmological surveys advance and become more sophisticated, they provide data of increasing resolution and volume. The Lyman-α forest has emerged as a powerful probe for studying the properties of the intergalactic medium (IGM) up to very high redshift. Analysing such extensive data requires advanced hydrodynamical simulations capable of resolving the observations, which demands powerful hardware and a considerable amount of computational time. Recent developments in machine learning, particularly neural networks, offer potential solutions. With their ability to act as universal function approximators, neural networks are gaining traction in various disciplines, including astrophysics and cosmology. In this doctoral thesis, we explore a machine learning framework, specifically an artificial neural network, to emulate hydrodynamical simulations from N-body simulations of dark matter. The core principle of this work is based on the fluctuating Gunn-Peterson approximation (FGPA), a framework commonly used to emulate the Lyman-α forest from dark matter. While useful for physical understanding, the FGPA fails to properly predict the absorption because it neglects non-locality in the construction of the IGM. Our method, by contrast, captures the diversity of the IGM while remaining interpretable, which benefits not only the Lyman-α forest but also extends to other applications. It also provides a more efficient way of generating simulations, significantly reducing the time required compared to standard hydrodynamical simulations. We also test its resilience and explore its potential to generalise to various astrophysical hypotheses about IGM physics using a transfer learning method, and we discuss how the results relate to other existing methods. Finally, a Lyman-α simulator typically constructs the observational volume using a single timestep of the cosmological simulation. This implies an identical astrophysical environment everywhere, which does not reflect the real universe. We explore and experiment with going beyond this limitation with our emulator, accounting for variable baryonic effects along the line of sight. While still preliminary, this could become a framework for constructing consistent light-cones. We apply neural networks to interpolate astrophysical feedback across different cells of the simulation to provide mock observables that are more realistic, which would allow us to better understand the nature of the IGM and to constrain the ΛCDM model.
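For reference, the FGPA that the emulator improves upon maps the dark-matter overdensity to an optical depth through a local power law; this is the standard textbook form, not the thesis's emulator.

```latex
% Standard FGPA: the optical depth is a local power law of the
% dark-matter overdensity \delta, with \gamma the slope of the IGM
% temperature-density relation and A fixed by the observed mean flux;
% the transmitted flux is F = e^{-\tau}.
\tau(\mathbf{x}) \simeq A \bigl(1 + \delta(\mathbf{x})\bigr)^{\,2 - 0.7(\gamma - 1)},
\qquad F = e^{-\tau}
```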
Wang, Zhiyi. "évaluation du risque sismique par approches neuronales." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC089/document.
Seismic probabilistic risk assessment (SPRA) is one of the most widely used methodologies for assessing and ensuring the performance of critical infrastructures, such as nuclear power plants (NPPs), faced with earthquake events. SPRA adopts a probabilistic approach to estimate the frequency of occurrence of severe consequences for NPPs under seismic conditions. The thesis discusses the following aspects: (i) construction of meta-models with ANNs to capture the relations between seismic intensity measures (IMs) and engineering demand parameters of the structures, for the purpose of accelerating the fragility analysis, with an investigation of the uncertainty related to the substitution of FEM models by ANNs; (ii) a proposal of a Bayesian framework with adaptive ANNs to take different sources of information into account in the fragility analysis, including numerical simulation results, reference values provided in the literature and damage data obtained from post-earthquake observations; (iii) computation of GMPEs with ANNs, where the epistemic uncertainties of the GMPE input parameters, such as the magnitude and the averaged thirty-meter shear-wave velocity, are taken into account in the developed methodology; (iv) calculation of the annual failure rate by combining the results of the fragility and hazard analyses, with the fragility curves determined by the adaptive ANNs and the hazard curves obtained from the GMPEs calibrated with ANNs. The proposed methodologies are applied to various industrial case studies, such as the KARISMA benchmark and the SMART model.
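Step (iv) combines fragility and hazard curves; a lognormal fragility model is common in SPRA and easy to evaluate, as sketched below. This is an illustration with hypothetical parameter values, not the thesis's ANN-based fit.

```python
import numpy as np
from scipy.stats import norm

# Lognormal fragility model commonly used in SPRA: probability of failure
# given a seismic intensity measure `im`. theta (median capacity, in g)
# and beta (log standard deviation) are hypothetical values.
theta, beta = 0.6, 0.4
im = np.linspace(0.05, 2.0, 50)
p_fail = norm.cdf(np.log(im / theta) / beta)
print(np.round(p_fail[::10], 3))
```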
De, Brevern Alexandre. "Nouvelles stratégies d'analyses et de prédiction des structures tridimensionnelles des protéines." Phd thesis, Université Paris-Diderot - Paris VII, 2001. http://tel.archives-ouvertes.fr/tel-00133819.
This prediction is based on a Bayesian method that makes the importance of the amino acids easy to understand. To improve the prediction, we relied on two concepts: (i) one local fold -> n sequences, and (ii) one sequence -> n folds. The first concept means that several types of sequences can be associated with the same structure, and the second that one sequence can be associated with several types of folds. These two aspects are developed on the basis of a reliability index attached to the local prediction, used to find zones of high probability. Certain words, i.e. successions of protein blocks, appear more frequently than others. We therefore characterized the architecture of these successions and the links between the different words.
Because of this redundancy, which can appear in protein structure, a compaction method that groups locally similar structures was developed. This simple approach, called the "hybrid protein", categorizes all the structures of the protein database into "structurally dependent" classes. Beyond compaction, this approach can be used in a different perspective, that of searching for structural homology and characterizing the dependencies between structures and sequences.
Liu, Kaixuan. "Study on knowledge-based garment design and fit evaluation system." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10020/document.
Fashion design and fit evaluation play a very important role in the clothing industry. Garment style and fit directly determine whether a customer buys a garment or not. In order to develop a well-fitting garment, designers and pattern makers must adjust the style and pattern many times until their customers are satisfied. Currently, traditional fashion design and fit evaluation have three main shortcomings: 1) they are very time-consuming, with low efficiency; 2) they require experienced designers; and 3) they are not suitable for garment e-shopping. In this Ph.D. thesis, we propose three key technologies to improve the current design processes of the clothing industry. The first is the Garment Flat and Pattern Associated Design Technology (GFPADT). The second is the 3D Interactive Garment Pattern Making Technology (3DIGPMT). The last is the Machine-Learning-Based Garment Fit Evaluation Technology (MLBGFET). Finally, we provide a number of knowledge-based garment design and fit evaluation solutions (processes) by combining the three proposed key technologies to deal with the garment design and production issues of fashion companies.