Dissertations on the topic "Apprentissage profond avec incertitude"
Consult the top 22 dissertations for research on the topic "Apprentissage profond avec incertitude".
Yang, Yingyu. "Analyse automatique de la fonction cardiaque par intelligence artificielle : approche multimodale pour un dispositif d'échocardiographie portable." Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4107.
According to the 2023 annual report of the World Heart Federation, cardiovascular diseases (CVD) accounted for nearly one third of all global deaths in 2021. Compared to high-income countries, more than 80% of CVD deaths occurred in low- and middle-income countries. The inequitable distribution of CVD diagnosis and treatment resources remains unresolved. In the face of this challenge, affordable point-of-care ultrasound (POCUS) devices demonstrate significant potential to improve the diagnosis of CVDs. Furthermore, by taking advantage of artificial intelligence (AI)-based tools, POCUS enables non-experts to contribute, thus greatly improving access to care, especially in under-served regions. The objective of this thesis is to develop robust and automatic algorithms to analyse cardiac function for POCUS devices, with a focus on echocardiography (ECHO) and electrocardiogram (ECG). Our first goal is to obtain explainable cardiac features from each single modality respectively. Our second goal is to explore a multi-modal approach by combining ECHO and ECG data. We start by presenting two novel deep learning (DL) frameworks for echocardiography segmentation and motion estimation tasks, respectively. By incorporating shape and motion priors into DL models, we demonstrate through extensive experiments that such priors help improve accuracy and generalise well to different unseen datasets. Furthermore, we are able to extract left ventricle ejection fraction (LVEF), global longitudinal strain (GLS) and other useful indices for myocardial infarction (MI) detection. Next, we propose an explainable DL model for unsupervised electrocardiogram decomposition. This model can extract interpretable information related to different ECG subwaves without manual annotation. We further apply those parameters to a linear classifier for myocardial infarction detection, which shows good generalisation across different datasets. Finally, we combine data from both modalities for trustworthy multi-modal classification. Our approach employs decision-level fusion with uncertainty, allowing training with unpaired multi-modal data. We further evaluate the trained model using paired multi-modal data, showcasing the potential of multi-modal MI detection to surpass that of a single modality. Overall, our proposed robust and generalisable algorithms for ECHO and ECG analysis demonstrate significant potential for portable cardiac function analysis. We anticipate that our novel framework could be further validated using real-world portable devices. We envision that such advanced integrative tools may significantly contribute towards better identification of CVD patients.
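As a rough illustration of the decision-level fusion with uncertainty mentioned in this abstract, the following minimal sketch combines the class probabilities of two modality classifiers by weighting each modality with the inverse of its predictive entropy. This is only an assumed illustration of the general idea, not the exact fusion rule of the thesis; the function names and the two-class example are hypothetical.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Entropy of a class-probability vector; higher means more uncertain."""
    probs = np.clip(probs, eps, 1.0)
    return -np.sum(probs * np.log(probs))

def fuse_decisions(probs_echo, probs_ecg):
    """Uncertainty-weighted decision-level fusion of two modality classifiers.

    Each input is a vector of class probabilities from one modality.
    Weights are inversely proportional to predictive entropy, so the more
    confident modality dominates the fused decision.
    """
    u_echo = predictive_entropy(probs_echo)
    u_ecg = predictive_entropy(probs_ecg)
    w_echo, w_ecg = 1.0 / (u_echo + 1e-6), 1.0 / (u_ecg + 1e-6)
    return (w_echo * probs_echo + w_ecg * probs_ecg) / (w_echo + w_ecg)

# Hypothetical per-modality outputs for a two-class (MI vs. no MI) problem.
p_echo = np.array([0.55, 0.45])   # uncertain echocardiography prediction
p_ecg = np.array([0.10, 0.90])    # confident ECG prediction
print(fuse_decisions(p_echo, p_ecg))
```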
Lelong, Thibault. "Reconnaissance des documents avec de l'apprentissage profond pour la réalité augmentée." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS017.
This doctoral project focuses on issues related to the identification of images and documents in augmented reality applications using markers, particularly when using cameras. The research is set in a technological context where interaction through augmented reality is essential in several domains, including industry, that require reliable identification methodologies. In an initial phase, the project assesses various identification and image processing methodologies using a database specially designed to reflect the challenges of the industrial context. This research allows an in-depth analysis of existing methodologies, revealing their potential and limitations in various application scenarios. Subsequently, the project proposes a document detection system aimed at enhancing existing solutions, optimized for environments such as web browsers. Then, an innovative image retrieval methodology is introduced, relying on an analysis of the image in sub-parts to increase the accuracy of identification and avoid confusions between images. This approach allows for more precise and adaptive identification, particularly with respect to variations in the layout of the target image. Finally, in the context of collaborative work with the ARGO company, a real-time image tracking engine was developed, optimized for low-power devices and web environments. This ensures the deployment of augmented reality web applications and their operation on a wide range of devices, including those with limited processing capabilities. It is noteworthy that the work resulting from this doctoral project has been concretely applied and valorized by the ARGO company for commercial purposes, thereby confirming the relevance and viability of the developed methodologies and solutions, and attesting to their significant contribution to the technological and industrial field of augmented reality.
Phan, Thi Hai Hong. "Reconnaissance d'actions humaines dans des vidéos avec l'apprentissage automatique." Thesis, Cergy-Pontoise, 2019. http://www.theses.fr/2019CERG1038.
In recent years, human action recognition (HAR) has attracted research attention thanks to its various applications such as intelligent surveillance systems, video indexing, human activity analysis, human-computer interaction and so on. The typical issues researchers face include the complexity of human motions, spatial and temporal variations, clutter, occlusion and changes in lighting conditions. This thesis focuses on automatically recognizing ongoing human actions in a given video. We address this research problem using both shallow learning and deep learning approaches. First, we began the research work with traditional shallow learning approaches based on hand-crafted features by introducing a novel descriptor named Motion of Oriented Magnitudes Patterns (MOMP). We then incorporated this discriminative descriptor into simple yet powerful representation techniques such as Bag of Visual Words, Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vector to better represent actions. Also, PCA (Principal Component Analysis) and feature selection (statistical dependency, mutual information) are applied to find the best subset of features in order to improve performance and decrease computational expense. The proposed method obtained state-of-the-art results on several common benchmarks. Recent deep learning approaches require intensive computation and large memory usage. They are therefore difficult to use and deploy on systems with limited resources. In the second part of this thesis, we present a novel efficient algorithm to compress Convolutional Neural Network models in order to decrease both the computational cost and the run-time memory footprint. We measure the redundancy of parameters based on their relationships using information-theoretic criteria, and we then prune the less important ones. The proposed method significantly reduces the model sizes of different networks such as AlexNet and ResNet by up to 70% without performance loss on the large-scale image classification task. The traditional approach with the proposed descriptor achieved strong performance for human action recognition, but only on small datasets. In order to improve performance on large-scale datasets, in the last part of this thesis we therefore exploit deep learning techniques to classify actions. We introduce the concept of the MOMP image as an input layer for CNNs and incorporate it into deep neural networks. We then apply our network compression algorithm to accelerate and improve the performance of the system. The proposed method reduces the model size, decreases overfitting, and thus increases the overall performance of the CNN on large-scale action datasets. Throughout the thesis, we have shown that our algorithms obtain good performance in comparison to the state of the art on challenging action datasets (Weizmann, KTH, UCF Sports, UCF-101 and HMDB51) with low resource requirements.
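To make the filter-pruning idea above concrete, the sketch below ranks convolutional filters by how redundant they are with the others and returns the indices to remove. Note that the thesis measures redundancy with information-theoretic criteria; here pairwise filter correlation is used only as a simplified, assumed stand-in, and all names and shapes are hypothetical.

```python
import numpy as np

def redundant_filter_indices(conv_weights, keep_ratio=0.7):
    """Rank convolutional filters by redundancy with the other filters.

    conv_weights: array of shape (n_filters, in_channels, k, k).
    Redundancy is approximated by the maximum absolute correlation of a
    filter with any other filter (a simplification of the criterion used
    in the thesis).  Returns indices of filters to prune.
    """
    n = conv_weights.shape[0]
    flat = conv_weights.reshape(n, -1)
    corr = np.corrcoef(flat)                 # pairwise filter correlations
    np.fill_diagonal(corr, 0.0)              # ignore self-correlation
    redundancy = np.abs(corr).max(axis=1)    # similarity to closest neighbour
    n_remove = n - int(round(keep_ratio * n))
    return np.argsort(redundancy)[::-1][:n_remove]

# Hypothetical layer with 64 filters of shape 3x3 over 3 input channels.
rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 3, 3, 3))
print(redundant_filter_indices(weights, keep_ratio=0.7))
```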
Coutant, Anthony. "Modèles Relationnels Probabilistes et Incertitude de Références : Apprentissage de structure avec algorithmes de partitionnement." Nantes, 2015. http://archive.bu.univ-nantes.fr/pollux/show.action?id=e9a2bfb8-cea0-4ce5-91a0-6b48cae0e909.
We are surrounded by heterogeneous and interdependent data. The i.i.d. assumption has shown its limits in algorithms that consider tabular datasets containing individuals with the same data domain and no mutual influence on each other. Statistical relational learning aims at representing knowledge, reasoning, and learning in multi-relational datasets with uncertainty, and lifted probabilistic graphical models offer a solution for generative learning in this context. We study in this thesis a type of directed lifted graphical model, called probabilistic relational models, in the context of reference uncertainty, i.e. where a dataset's individuals can have uncertainty over both their internal attribute descriptions and their external memberships in associations with others, a setting with the particularity of relying on partitioning functions over individuals in order to discover general knowledge. We show the limits of existing models for learning in this context and propose extensions that allow the use of relational clustering methods, better suited to the problem, and offer a less constrained representation bias permitting additional knowledge discovery, especially between association types in the relational data domain.
Sablayrolles, Alexandre. "Mémorisation et apprentissage de structures d'indexation avec les réseaux de neurones." Thesis, Université Grenoble Alpes, 2020. https://thares.univ-grenoble-alpes.fr/2020GRALM044.pdf.
Machine learning systems, and in particular deep neural networks, are trained on large quantities of data. In computer vision for instance, convolutional neural networks used for image classification, scene recognition, and object detection are trained on datasets whose size ranges from tens of thousands to billions of samples. Deep parametric models have a large capacity, often in the order of magnitude of the number of data points. In this thesis, we are interested in the memorization aspect of neural networks, under two complementary angles: explicit memorization, i.e. memorization of all samples of a set, and implicit memorization, which happens inadvertently while training models. Considering explicit memorization, we build a neural network to perform approximate set membership, and show that the capacity of such a neural network scales linearly with the number of data points. Given such a linear scaling, we resort to another construction for set membership, in which we build a neural network to produce compact codes and perform nearest neighbor search among the compact codes, thereby separating "distribution learning" (the neural network) from storing samples (the compact codes), the former being independent of the number of samples and the latter scaling linearly with a small constant. This nearest neighbor system performs a more generic task and can be plugged in to perform set membership. In the second part of this thesis, we analyze the "unintended" memorization that happens during training, and assess whether a particular data point was used to train a model (membership inference). We perform empirical membership inference on large networks, on both individual samples and groups of samples. We derive the Bayes-optimal membership inference, and construct several approximations that lead to state-of-the-art results in membership attacks. Finally, we design a new technique, radioactive data, that slightly modifies datasets such that any model trained on them bears an identifiable mark.
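For readers unfamiliar with membership inference, the sketch below shows a common loss-thresholding baseline: a sample is guessed to be a training member when the model's loss on it is small. This is only a simple baseline for intuition, not the Bayes-optimal attack derived in the thesis, and the numbers are made up.

```python
import numpy as np

def loss_threshold_membership(losses, threshold):
    """Membership-inference baseline: predict 'training member' when the
    per-sample loss falls below a threshold (low loss suggests the model
    has seen, and partly memorized, the sample)."""
    return losses < threshold

# Hypothetical per-sample cross-entropy losses.
train_losses = np.array([0.05, 0.10, 0.02, 0.20])   # samples seen in training
test_losses = np.array([0.90, 1.20, 0.60, 0.30])    # held-out samples
threshold = 0.25
print(loss_threshold_membership(train_losses, threshold))  # mostly True
print(loss_threshold_membership(test_losses, threshold))   # mostly False
```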
Belilovsky, Eugene. "Apprentissage de graphes structuré et parcimonieux dans des données de haute dimension avec applications à l’imagerie cérébrale." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC027.
This dissertation presents novel structured sparse learning methods on graphs that address commonly found problems in the analysis of neuroimaging data as well as other high-dimensional data with few samples. The first part of the thesis proposes convex relaxations of discrete and combinatorial penalties involving sparsity and bounded total variation on a graph, as well as a bounded ℓ2 norm. These are developed with the aim of learning an interpretable predictive linear model, and we demonstrate their effectiveness on neuroimaging data as well as a sparse image recovery problem. The subsequent parts of the thesis consider structure discovery of undirected graphical models from few observational data. In particular we focus on invoking sparsity and other structured assumptions in Gaussian Graphical Models (GGMs). To this end we make two contributions. We show an approach to identify differences in GGMs known to have similar structure. We derive the distribution of parameter differences under a joint penalty when parameters are known to be sparse in the difference. We then show how this approach can be used to obtain confidence intervals on edge differences in GGMs. We then introduce a novel learning-based approach to the problem of structure discovery of undirected graphical models from observational data. We demonstrate how neural networks can be used to learn effective estimators for this problem. This is empirically shown to be a flexible and efficient alternative to existing techniques.
Roca, Vincent. "Harmonisation multicentrique d'images IRM du cerveau avec des modèles génératifs non-supervisés." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILS060.
Magnetic resonance imaging (MRI) enables the acquisition of brain images used in the study of neurological and psychiatric diseases. MR images are increasingly used in statistical studies to identify biomarkers and build predictive models. To improve statistical power, these studies sometimes pool data acquired with different machines, which may introduce technical variability and bias into the analysis of biological variability. In the last few years, harmonization methods have been proposed to limit the impact of this variability. Many studies have notably worked on generative models based on unsupervised deep learning. The doctoral research lies within the context of these models, which constitute a promising but still exploratory research field. In the first part of this manuscript, a review of prospective harmonization methods is proposed. Different methods consisting of normalization applied at the image level, domain translation or style transfer are described to understand their respective issues, with a special focus on unsupervised generative models. The second part is about methods for evaluating retrospective harmonization. A review of these methods is first conducted. The most common rely on "traveling" subjects to provide ground truths for harmonization. The review also presents evaluations employed in the absence of such subjects: study of inter-domain differences, biological patterns and performance of predictive models. Experiments showing the limits of some commonly employed approaches and important points to consider for their use are then proposed. The third part presents a new model for harmonization of brain MR images based on a CycleGAN architecture. In contrast with previous works, the model is three-dimensional and processes full volumes. MR images from six datasets that vary in terms of acquisition parameters and age distributions are used to test the method. Analyses of intensity distributions, brain volumes, image quality metrics and radiomic features show an efficient homogenisation between the different sites of the study. Next, the conservation and the reinforcement of biological patterns are demonstrated with an analysis of the evolution of gray-matter volume estimations with age, experiments of age prediction, ratings of radiologic patterns in the images and a supervised evaluation with a traveling-subject dataset. The fourth part also presents an original harmonization method, with major updates of the first one, in order to establish a "universal" generator able to harmonize images without knowing their domain of origin. After training with data acquired on eleven MRI scanners, experiments on images from sites not seen during training show a reinforcement of brain patterns related to age and Alzheimer's disease after harmonization. Moreover, comparisons with other intensity harmonization approaches suggest that the model is more efficient and more robust to different tasks subsequent to harmonization. These different works are a significant contribution to the domain of retrospective harmonization of brain MR images. The bibliographic reviews provide a methodological knowledge base for future studies in this domain, whether for harmonization itself or for validation. In addition, the two developed models are robust, publicly available tools that may be integrated in future MRI multicenter studies.
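To give a feel for the CycleGAN-style training mentioned above, the following sketch computes only the cycle-consistency term for two toy 3D generator stubs operating on volume patches: a volume translated to the other site and back should be recovered. Adversarial and identity losses, as well as the real generator architectures, are omitted; the stubs and shapes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Toy 3D generators standing in for the site-A->site-B and site-B->site-A
# mapping networks; the actual networks in the thesis are much deeper.
g_ab = nn.Conv3d(1, 1, kernel_size=3, padding=1)
g_ba = nn.Conv3d(1, 1, kernel_size=3, padding=1)
l1 = nn.L1Loss()

def cycle_consistency_loss(vol_a, vol_b):
    """Cycle-consistency term of a CycleGAN-style harmonization model."""
    rec_a = g_ba(g_ab(vol_a))    # A -> B -> A
    rec_b = g_ab(g_ba(vol_b))    # B -> A -> B
    return l1(rec_a, vol_a) + l1(rec_b, vol_b)

# Hypothetical mini-batches of 3D MR patches (batch, channel, D, H, W).
vol_a = torch.randn(2, 1, 16, 16, 16)
vol_b = torch.randn(2, 1, 16, 16, 16)
print(cycle_consistency_loss(vol_a, vol_b).item())
```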
Deschemps, Antonin. "Apprentissage machine et réseaux de convolutions pour une expertise augmentée en dosimétrie biologique." Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. http://www.theses.fr/2023URENS104.
Biological dosimetry is the branch of health physics dealing with the estimation of ionizing radiation doses from biomarkers. The current gold standard (defined by the IAEA) relies on estimating how frequently dicentric chromosomes appear in peripheral blood lymphocytes. Variations in acquisition conditions and chromosome morphology make this a challenging object detection problem. Furthermore, the need for an accurate estimation of the average number of dicentrics per cell means that a large number of images has to be processed. Human counting is intrinsically limited, as the cognitive load is high and the number of specialists is insufficient in the context of a large-scale exposure. The main goal of this PhD is to use recent developments in computer vision brought by deep learning, especially for object detection. The main contribution of this thesis is a proof of concept for a dicentric chromosome detection model. This model aggregates several U-Net models to reach a high level of performance and to quantify its prediction uncertainty, which is a stringent requirement in a medical setting.
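One common way to aggregate several segmentation models and obtain an uncertainty estimate, in the spirit of the ensemble described above, is to average their per-pixel probabilities and use the per-pixel variance as an uncertainty map. The sketch below assumes precomputed probability maps and is an illustrative simplification, not the thesis's exact aggregation scheme.

```python
import numpy as np

def ensemble_predict(prob_maps):
    """Aggregate the outputs of several segmentation models.

    prob_maps: array of shape (n_models, H, W) with per-pixel probabilities
    of belonging to a dicentric chromosome.  The ensemble mean is the final
    prediction and the per-pixel variance serves as an uncertainty map.
    """
    mean_map = prob_maps.mean(axis=0)
    uncertainty_map = prob_maps.var(axis=0)
    return mean_map, uncertainty_map

# Hypothetical outputs of a 5-member U-Net ensemble on a 64x64 image crop.
rng = np.random.default_rng(0)
probs = rng.uniform(size=(5, 64, 64))
mean_map, unc_map = ensemble_predict(probs)
print(mean_map.shape, unc_map.max())
```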
Vallée, Rémi. "Apprentissage profond pour l'aide au diagnostic et comparaison des mécanismes d'explicabilité avec l'attention visuelle humaine : application à la détection de la maladie de Crohn." Thesis, Nantes Université, 2022. http://www.theses.fr/2022NANU4018.
What are the similarities and differences between the way we perceive our environment and the way deep neural networks do? We study this question through a concrete application case, the detection of Crohn's disease lesions in endoscopic video capsule images. In a first step, we developed a database, carefully annotated by several experts, which we have made public in order to compensate for the lack of data for evaluating and training deep learning algorithms in this domain. In a second step, to make the networks more transparent in their decision making and their predictions more explainable, we worked on artificial attention and established a parallel between it and human visual attention. We recorded the eye movements of subjects of different levels of expertise during a classification task and show that deep neural networks, whose performance on the classification task is closer to that of experts than to that of novices, also exhibit attentional behavior closer to the former. Through this manuscript, we hope to provide tools for the development of diagnostic assistance algorithms, as well as a way to evaluate artificial attention methods. This work provides a deeper understanding of the links between human and artificial attention, with the goal of assisting medical experts in their training and helping to develop new algorithm architectures.
Jezequel, Loïc. "Vers une détection d'anomalie unifiée avec une application à la détection de fraude." Electronic Thesis or Diss., CY Cergy Paris Université, 2023. http://www.theses.fr/2023CYUN1190.
Detecting observations that stray from a baseline case is becoming increasingly critical in many applications. It is found in fraud detection, medical imaging, video surveillance or even in manufacturing defect detection, with data ranging from images to sound. Deep anomaly detection was introduced to tackle this challenge by properly modeling the normal class and considering anything significantly different as anomalous. Given that the anomalous class is not well defined, classical binary classification is not suitable and lacks robustness and reliability outside its training domain. Nevertheless, the best-performing anomaly detection approaches still lack generalization to different types of anomalies. Indeed, each method is specialized either in large-scale object anomalies or in small-scale local anomalies. In this context, we first introduce a more generic one-class pretext-task anomaly detector. The model, named OC-MQ, computes an anomaly score by learning to solve a complex pretext task on the normal class. The pretext task is composed of several sub-tasks allowing it to capture a wide variety of visual cues. More specifically, our model is made of two branches representing discriminative and generative tasks, respectively. Nevertheless, in many applications an additional anomalous dataset is in reality often available and can provide harder edge-case anomalous examples. In this light, we explore two approaches for outlier exposure. First, we generalize the concept of pretext task to outlier exposure by dynamically learning the pretext task itself with normal and anomalous samples. We propose two models, SadTPS and SadRest, which respectively learn a discriminative pretext task of thin-plate transform recognition and a generative task of image restoration. In addition, we present a new anomaly-distance model, SadCLR, where the training of previously unreliable anomaly-distance models is stabilized by adding contrastive regularization on the representation direction. We further enrich existing anomalies by generating several types of pseudo-anomalies. Finally, we extend the two previous approaches to be usable in both the one-class and outlier-exposure settings. Firstly, we introduce the AnoMem model, which memorizes a set of multi-scale normal prototypes by using modern Hopfield layers. Anomaly-distance estimators are then fitted on the deviations between the input and the normal prototypes in a one-class or outlier-exposure manner. Secondly, we generalize learnable pretext tasks to be learned using only normal samples. Our proposed model HEAT adversarially learns the pretext task to be just challenging enough to keep good performance on normal samples while failing on anomalies. Besides, we choose the recently proposed Busemann distance in the hyperbolic Poincaré ball model to compute the anomaly score. Extensive testing was conducted for each proposed method, varying from coarse and subtle style anomalies to a fraud detection dataset of face presentation attacks with local anomalies. These tests yielded state-of-the-art results, showing the significant success of our methods.
Suzano, Massa Francisco Vitor. "Mise en relation d'images et de modèles 3D avec des réseaux de neurones convolutifs." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1198/document.
The recent availability of large catalogs of 3D models enables new possibilities for 3D reasoning on photographs. This thesis investigates the use of convolutional neural networks (CNNs) for relating 3D objects to 2D images. We first introduce two contributions that are used throughout this thesis: an automatic memory reduction library for deep CNNs, and a study of CNN features for cross-domain matching. In the first one, we develop a library built on top of Torch7 which automatically reduces the memory requirements for deploying a deep CNN by up to 91%. As a second point, we study the effectiveness of various CNN features extracted from a pre-trained network in the case of images from different modalities (real or synthetic images). We show that despite the large cross-domain difference between rendered views and photographs, it is possible to use some of these features for instance retrieval, with possible applications to image-based rendering. There has been recent use of CNNs for the task of object viewpoint estimation, sometimes with very different design choices. We present these approaches in a unified framework and analyse the key factors that affect performance. We propose a joint training method that combines both detection and viewpoint estimation, which performs better than considering viewpoint estimation separately. We also study the impact of formulating viewpoint estimation either as a discrete or as a continuous task, quantify the benefits of deeper architectures and demonstrate that using synthetic data is beneficial. With all these elements combined, we improve over previous state-of-the-art results on the Pascal3D+ dataset by approximately 5% in mean average viewpoint precision. In the instance retrieval study, the image of the object is given and the goal is to identify, among a number of 3D models, which object it is. We extend this work to object detection, where instead we are given a 3D model (or a set of 3D models) and we are asked to locate and align the model in the image. We show that simply using CNN features is not enough for this task, and we propose to learn a transformation that brings the features from the real images close to the features from the rendered views. We evaluate our approach both qualitatively and quantitatively on two standard datasets: the IKEA object dataset, and a subset of the Pascal VOC 2012 dataset for the chair category, and we show state-of-the-art results on both of them.
Chaabouni, Souad. "Etude et prédiction d'attention visuelle avec les outils d'apprentissage profond en vue d'évaluation des patients atteints des maladies neuro-dégénératives." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0768/document.
This thesis is motivated by the diagnosis and evaluation of dementia diseases and aims at predicting whether a newly recorded gaze presents a complaint related to these diseases. Nevertheless, large-scale population screening is only possible if robust prediction models can be constructed. In this context, we are interested in the design and the development of automatic prediction models for specific visual content to be used in psycho-visual experiments involving patients with dementia (PwD). The difficulty of such a prediction lies in the very small amount of training data. Visual saliency models cannot be founded only on bottom-up features, as suggested by feature integration theory. The top-down component of human visual attention becomes prevalent as human observers explore the visual scene. Visual saliency can be predicted on the basis of seen data. Deep Convolutional Neural Networks (CNNs) have proven to be a powerful tool for predicting salient areas in static images. In order to construct an automatic prediction model for salient areas in natural and intentionally degraded videos, we have designed a specific CNN architecture. To overcome the lack of learning data, we designed a transfer learning scheme derived from Bengio's method. We measure its performance when predicting salient regions. The obtained results are interesting regarding the reaction of normal control subjects to degraded areas in videos. The predicted saliency maps of intentionally degraded videos give interesting results compared to gaze fixation density maps and other reference models.
Farabet, Clément. "Analyse sémantique des images en temps-réel avec des réseaux convolutifs." Phd thesis, Université Paris-Est, 2013. http://tel.archives-ouvertes.fr/tel-00965622.
Theobald, Claire. "Bayesian Deep Learning for Mining and Analyzing Astronomical Data." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0081.
In this thesis, we address the issue of trust in deep learning predictive systems along two complementary research directions. The first line of research focuses on the ability of AI to estimate its level of uncertainty in its decision-making as accurately as possible. The second line focuses on the explainability of these systems, that is, their ability to convince human users of the soundness of their predictions. The problem of estimating uncertainties is addressed from the perspective of Bayesian Deep Learning. Bayesian Neural Networks assume a probability distribution over their parameters, which allows them to estimate different types of uncertainties: first, aleatoric uncertainty, which is related to the data, and second, epistemic uncertainty, which quantifies the lack of knowledge the model has about the data distribution. More specifically, this thesis proposes a Bayesian neural network that can estimate these uncertainties in the context of a multivariate regression task. This model is applied to the regression of complex ellipticities on galaxy images as part of the ANR project "AstroDeep". These images can be corrupted by different sources of perturbation and noise which can be reliably estimated through the different uncertainties. The exploitation of these uncertainties is then extended to galaxy mapping and to "coaching" the Bayesian neural network. This last technique consists of generating increasingly complex data during the model's training process to improve its performance. The problem of explainability, on the other hand, is approached from the perspective of counterfactual explanations. These explanations consist of identifying what changes to the input parameters would have led to a different prediction. Our contribution in this field is based on the generation of counterfactual explanations relying on a variational autoencoder (VAE) and on an ensemble of predictors trained on the latent space generated by the VAE. This method is particularly adapted to high-dimensional data, such as images; in this case, they are referred to as counterfactual visual explanations. By exploiting both the latent space and the ensemble of classifiers, we can efficiently produce visual counterfactual explanations that reach a higher degree of realism than several state-of-the-art methods.
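The aleatoric/epistemic decomposition described in this abstract can be illustrated with a common approximation to Bayesian neural networks: Monte Carlo dropout on a heteroscedastic regressor. In the sketch below, the mean of the predicted variances estimates aleatoric uncertainty and the variance of the predicted means estimates epistemic uncertainty. This is a standard approximation for illustration, not necessarily the thesis's exact model; the network, dimensions and data are hypothetical.

```python
import torch
import torch.nn as nn

class MCDropoutRegressor(nn.Module):
    """Small heteroscedastic regressor; dropout stays active at test time so
    repeated forward passes approximate posterior sampling."""
    def __init__(self, in_dim, out_dim, hidden=64, p=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
        )
        self.mean_head = nn.Linear(hidden, out_dim)
        self.logvar_head = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=50):
    model.train()  # keep dropout active for Monte Carlo sampling
    means, variances = [], []
    for _ in range(n_samples):
        mu, logvar = model(x)
        means.append(mu)
        variances.append(logvar.exp())
    means = torch.stack(means)                   # (n_samples, batch, out_dim)
    aleatoric = torch.stack(variances).mean(0)   # average predicted noise level
    epistemic = means.var(0)                     # spread of the sampled predictions
    return means.mean(0), aleatoric, epistemic

model = MCDropoutRegressor(in_dim=10, out_dim=2)   # e.g. two ellipticity components
x = torch.randn(4, 10)                             # hypothetical galaxy features
pred, aleatoric, epistemic = predict_with_uncertainty(model, x)
print(pred.shape, aleatoric.shape, epistemic.shape)
```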
Carrillo, Hernan. "Colorisation d'images avec réseaux de neurones guidés par l'intéraction humaine." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0016.
Colorization is the process of adding colors to grayscale images. It is an important task in the image-editing and animation community. Although automatic colorization methods exist, they often produce unsatisfying results, due to the ill-posed nature of the problem and to artifacts such as color bleeding, inconsistency and unnatural colors. Manual intervention is often necessary to achieve the desired outcome. Consequently, there is a growing interest in automating the colorization process while allowing artists to transfer their own style and vision to it. In this thesis, we investigate various interaction formats, guiding the colors of specific areas of an image or transferring them from a reference image or object. As part of this research, we introduce two semi-automatic colorization frameworks. First, we describe a deep learning architecture for exemplar-based image colorization that takes into account the user's reference images. Our second framework uses a diffusion model to colorize line art using user-provided color scribbles. This thesis first delves into a comprehensive overview of state-of-the-art image colorization methods, color spaces, evaluation metrics, and losses. While recent colorization methods based on deep-learning techniques achieve the best results on this task, they rely on complex architectures and a high number of joint losses, which makes the reasoning behind each of these methods difficult to follow. Here, we leverage a simple architecture in order to analyze the impact of different color spaces and several losses. Then, we propose a novel attention layer, called super-attention, based on superpixel features, to establish robust correspondences between high-resolution deep features from target and reference image pairs. This proposal deals with the quadratic complexity problem of the non-local calculation in the attention layer. Additionally, it helps to overcome color-bleeding artifacts. We study its use in color transfer and exemplar-based colorization. We finally extend this model to specifically guide the colorization of segmented objects. Finally, we propose a diffusion probabilistic model based on implicit and explicit conditioning mechanisms to learn to colorize line art. Our approach enables the incorporation of user guidance through explicit color hints while leveraging the prior knowledge of the trained diffusion model. We condition with an application-specific encoder that learns to extract meaningful information from user-provided scribbles. The method generates diverse and high-quality colorized images.
Arnez, Yagualca Fabio Alejandro. "Deep neural network uncertainty runtime monitoring for robust and safe AI-based automated navigation." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG100.
Deep Neural Networks (DNNs) have revolutionized various industries in the past decade, such as highly automated vehicles and unmanned aerial vehicles. DNNs can achieve notable performance improvements due to their effectiveness in processing complex sensory inputs and their powerful representation learning, which outperforms traditional methods across different automation tasks. Despite the impressive performance improvements introduced by DNNs, they still have significant limitations due to their complexity, opacity, and lack of interpretability. More importantly, for the scope of this thesis, DNNs are susceptible to data distribution shifts, confidence representation in DNN predictions is not straightforward, and design-time property specification and verification can become infeasible in large DNNs. While reducing errors from deep learning components is essential for building trustworthy AI-based systems that can be deployed and adopted in society, addressing these aforementioned challenges is crucial as well. This thesis proposes new methods that leverage uncertainty information to overcome the aforementioned limitations and build trustworthy AI-based systems. The approach is bottom-up, starting from the component-level perspective and then moving to the system-level point of view. The use of uncertainty at the component level is presented for the data distribution shift detection task, to enable the detection of situations that may impact the reliability of a DNN component's functionality and, therefore, the behavior of an automated system. Next, the system perspective is introduced by taking into account a set of components in sequence, where one component consumes the predictions from another to make its own predictions. In this regard, a method to propagate uncertainty is provided so that a downstream component can consider the uncertainty in the predictions of an upstream component. Finally, a framework for dynamic risk management is proposed to cope with the uncertainties that arise in the autonomous navigation system.
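A minimal sketch of component-level distribution-shift detection in this spirit: calibrate a predictive-entropy threshold on in-distribution softmax outputs, then flag runtime inputs whose entropy exceeds it. This is only a generic uncertainty-based monitor for illustration, not the specific method of the thesis; all values are hypothetical.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Entropy of softmax outputs, one value per sample."""
    probs = np.clip(probs, eps, 1.0)
    return -(probs * np.log(probs)).sum(axis=1)

def fit_shift_threshold(in_dist_probs, quantile=0.95):
    """Calibrate an entropy threshold on in-distribution data: inputs whose
    entropy exceeds this value at runtime are flagged as potentially shifted."""
    return np.quantile(predictive_entropy(in_dist_probs), quantile)

def flag_shifted(probs, threshold):
    return predictive_entropy(probs) > threshold

# Hypothetical softmax outputs of a perception DNN (rows sum to 1).
in_dist = np.array([[0.97, 0.02, 0.01], [0.90, 0.05, 0.05], [0.85, 0.10, 0.05]])
runtime = np.array([[0.40, 0.35, 0.25], [0.95, 0.03, 0.02]])
thr = fit_shift_threshold(in_dist)
print(flag_shifted(runtime, thr))   # first input is flagged, second is not
```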
Chen, Dexiong. "Modélisation de données structurées avec des machines profondes à noyaux et des applications en biologie computationnelle." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM070.
Developing efficient algorithms to learn appropriate representations of structured data, including sequences or graphs, is a major and central challenge in machine learning. To this end, deep learning has become popular in structured data modeling. Deep neural networks have drawn particular attention in various scientific fields such as computer vision, natural language understanding or biology. For instance, they provide computational tools for biologists to understand and uncover biological properties or relationships among macromolecules within living organisms. However, most of the success of deep learning methods in these fields essentially relies on the guidance of empirical insights as well as huge amounts of annotated data. Exploiting more data-efficient models is necessary as labeled data are often scarce. Another line of research is kernel methods, which provide a systematic and principled approach for learning non-linear models from data of arbitrary structure. In addition to their simplicity, they exhibit a natural way to control regularization and thus to avoid overfitting. However, the data representations provided by traditional kernel methods are only defined by simply designed hand-crafted features, which makes them perform worse than neural networks when enough labeled data are available. More complex kernels inspired by prior knowledge used in neural networks have thus been developed to build richer representations and bridge this gap. Yet, they are less scalable. By contrast, neural networks are able to learn a compact representation for a specific learning task, which allows them to retain the expressivity of the representation while scaling to large sample sizes. Incorporating the complementary views of kernel methods and deep neural networks to build new frameworks is therefore useful to benefit from both worlds. In this thesis, we build a general kernel-based framework for modeling structured data by leveraging prior knowledge from classical kernel methods and deep networks. Our framework provides efficient algorithmic tools for learning representations without annotations as well as for learning more compact representations in a task-driven way. Our framework can be used to efficiently model sequences and graphs with simple interpretation of predictions. It also offers new insights into designing more expressive kernels and neural networks for sequences and graphs.
Geiler, Louis. "Deep learning for churn prediction." Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7333.
The problem of churn prediction has traditionally been a field of study for marketing. However, in the wake of technological advancements, more and more data can be collected to analyze customer behavior. This manuscript has been built in this context, with a particular focus on machine learning. Thus, we first looked at the supervised learning problem. We demonstrated that logistic regression, random forest and XGBoost taken as an ensemble offer the best results in terms of Area Under the Curve (AUC) among a wide range of traditional machine learning approaches. We also showed that re-sampling approaches are only effective in a local setting and not in a global one. Subsequently, we aimed at fine-tuning our prediction by relying on customer segmentation. Indeed, some customers may leave a service because of a cost they deem too high, and others because of a problem with customer service. Our approach was enriched with a novel deep neural network architecture, which combines auto-encoders and the k-means approach. Going further, we focused on self-supervised learning in the tabular domain. More precisely, the proposed architecture was inspired by the SimCLR approach, whose architecture we altered with the Mean-Teacher model from semi-supervised learning. We showed through the win matrix the superiority of our approach with respect to the state of the art. Ultimately, we proposed to apply what we built in this manuscript in an industrial setting, that of Brigad. We alleviated the company's churn problem with a random forest that we optimized through grid search and threshold optimization. We also proposed to interpret the results with SHAP (SHapley Additive exPlanations).
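A minimal sketch of the kind of ensemble baseline evaluated by AUC that this abstract mentions is given below, using scikit-learn's soft-voting classifier. The synthetic dataset is hypothetical, and GradientBoostingClassifier is used here only as a stand-in for XGBoost to keep the example self-contained.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical churn dataset (in practice: customer features and a churn label).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Soft-voting ensemble of the three model families named in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
proba = ensemble.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, proba))
```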
Taha, May. "Probing sequence-level instructions for gene expression." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTT096/document.
Gene regulation is tightly controlled to ensure a wide variety of cell types and functions. These controls take place at different levels and are associated with different genomic regulatory regions. A current challenge is to understand how the gene regulation machinery works in each cell type and to identify the most important regulators. Several studies attempt to understand the regulatory mechanisms by modeling gene expression using epigenetic marks. Nonetheless, these approaches rely on experimental data which are limited to certain samples and are costly and time-consuming to obtain. Besides, the important component of gene regulation encoded at the sequence level cannot be captured by these approaches. The main objective of this thesis is to explain mRNA expression based only on DNA sequence features. In a first work, we use Lasso penalized linear regression to predict gene expression using DNA features such as transcription factor binding sites (motifs) and nucleotide compositions. We measured the accuracy of our approach on several datasets from the TCGA database and found performance similar to that of models fitted with experimental data. In addition, we show that the nucleotide compositions of different regulatory regions have a major impact on gene expression. Furthermore, we rank the influence of each regulatory region and show a strong effect of the gene body, especially introns. In a second part, we try to increase the performance of the model. We first consider adding interactions between nucleotide compositions and applying non-linear transformations to the predictive variables. This induces a slight increase in model performance. To go one step further, we then learn deep neural networks. We consider two types of neural networks: multilayer perceptrons and convolutional networks. The hyperparameters of each network are optimized. The performance of both types of networks appears slightly higher than that of a Lasso penalized linear model. In this thesis, we were able to (i) demonstrate the existence of sequence-level instructions for gene expression and (ii) provide different frameworks based on complementary approaches. Additional work is ongoing, in particular in the last direction based on deep learning, with the aim of detecting additional information present in the sequence.
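The Lasso-based modeling described above follows a standard sparse-regression recipe, sketched below with scikit-learn. The feature matrix here is synthetic; in the thesis the columns would be motif counts and nucleotide-composition features of each gene's regulatory regions and the target the measured expression, so this is only an assumed, self-contained illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

# Hypothetical design matrix: rows are genes, columns are motif counts plus
# nucleotide-composition features; the target is (log-transformed) expression.
rng = np.random.default_rng(0)
n_genes, n_features = 500, 200
X = rng.poisson(2.0, size=(n_genes, n_features)).astype(float)
true_coef = np.zeros(n_features)
true_coef[:10] = rng.normal(size=10)          # only a few informative features
y = X @ true_coef + rng.normal(scale=0.5, size=n_genes)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LassoCV(cv=5).fit(X_train, y_train)   # penalty chosen by cross-validation
print("R^2 on held-out genes:", model.score(X_test, y_test))
print("non-zero coefficients:", np.sum(model.coef_ != 0))
```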
Tong, Zheng. "Evidential deep neural network in the framework of Dempster-Shafer theory." Thesis, Compiègne, 2022. http://www.theses.fr/2022COMP2661.
Deep neural networks (DNNs) have achieved remarkable success in many real-world applications (e.g., pattern recognition and semantic segmentation) but still face the problem of managing uncertainty. Dempster-Shafer theory (DST) provides a well-founded and elegant framework to represent and reason with uncertain information. In this thesis, we propose a new framework using DST and DNNs to address the problem of uncertainty. In the proposed framework, we first hybridize DST and DNNs by plugging a DST-based neural-network layer followed by a utility layer at the output of a convolutional neural network for set-valued classification. We also extend the idea to semantic segmentation by combining fully convolutional networks and DST. The proposed approach enhances the performance of DNN models by assigning ambiguous patterns with high uncertainty, as well as outliers, to multi-class sets. A learning strategy using soft labels further improves the performance of the DNNs by converting imprecise and unreliable label data into belief functions. We also propose a modular fusion strategy within this framework, in which a fusion module aggregates the belief-function outputs of evidential DNNs by Dempster's rule. We use this strategy to combine DNNs trained on heterogeneous datasets with different sets of classes while keeping performance at least as good as that of the individual networks on their respective datasets. Further, we apply the strategy to combine several shallow networks and achieve performance similar to that of an advanced DNN on a complicated task.
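Dempster's rule of combination, used by the fusion module described above, can be written in a few lines. The sketch below combines two mass functions defined over focal sets of a small frame of discernment; the toy classifiers and class labels are hypothetical, and the thesis's actual fusion module operates on the belief-function outputs of evidential DNNs.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function maps focal sets (frozensets of class labels) to masses
    summing to one.  Mass assigned to empty intersections (conflict) is
    discarded and the result is renormalized.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are contradictory")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Hypothetical belief-function outputs of two evidential classifiers over the
# frame {cat, dog}; frozenset({'cat', 'dog'}) carries the ignorance mass.
m_net1 = {frozenset({"cat"}): 0.6, frozenset({"cat", "dog"}): 0.4}
m_net2 = {frozenset({"dog"}): 0.3, frozenset({"cat", "dog"}): 0.7}
print(dempster_combine(m_net1, m_net2))
```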
De Vries, Harm. "Deep learning and reinforcement learning methods for grounded goal-oriented dialogue." Thesis, 2020. http://hdl.handle.net/1866/24639.
While dialogue systems have the potential to fundamentally change human-machine interaction, developing general chatbots with deep learning and reinforcement learning techniques has proven difficult. One challenging aspect is that these systems are expected to operate in broad application domains for which there is no clear measure of evaluation. This thesis investigates goal-oriented dialogue tasks in multi-modal environments because this (i) constrains the scope of the conversation, (ii) comes with a better-defined objective, and (iii) enables enriching language representations by grounding them in perceptual experiences. More specifically, we develop GuessWhat, an image-based guessing game in which two agents cooperate to locate an unknown object by asking a sequence of questions. For the subtask of visual question answering, we propose Conditional Batch Normalization layers as a simple but effective conditioning method that adapts the convolutional activations to the specific question at hand. Finally, we investigate the difficulty of dialogue-based navigation by introducing Talk The Walk, a new task where two agents (a "tourist" and a "guide") collaborate to have the tourist navigate to target locations in the virtual streets of New York City.
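A simplified sketch of the Conditional Batch Normalization idea mentioned in this abstract is given below: a small linear layer predicts per-channel deltas to the batch-norm scale and shift from a conditioning vector (e.g. a question embedding), so convolutional activations adapt to the question. This is an assumed, simplified re-implementation of the general mechanism, not the authors' exact code; all dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Batch normalization whose affine parameters are modulated by a
    conditioning vector: gamma_c + dgamma_c(cond), beta_c + dbeta_c(cond)."""
    def __init__(self, num_features, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))
        self.delta = nn.Linear(cond_dim, 2 * num_features)
        nn.init.zeros_(self.delta.weight)   # start out as plain batch norm
        nn.init.zeros_(self.delta.bias)

    def forward(self, x, cond):
        out = self.bn(x)
        d_gamma, d_beta = self.delta(cond).chunk(2, dim=1)
        gamma = (self.gamma + d_gamma).unsqueeze(-1).unsqueeze(-1)
        beta = (self.beta + d_beta).unsqueeze(-1).unsqueeze(-1)
        return gamma * out + beta

# Hypothetical feature maps and question embeddings.
cbn = ConditionalBatchNorm2d(num_features=64, cond_dim=128)
feats = torch.randn(8, 64, 14, 14)
question = torch.randn(8, 128)
print(cbn(feats, question).shape)   # torch.Size([8, 64, 14, 14])
```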
Grégoire, Francis. "Extraction de phrases parallèles à partir d’un corpus comparable avec des réseaux de neurones récurrents bidirectionnels." Thèse, 2017. http://hdl.handle.net/1866/20191.