Dissertations / Theses on the topic 'Réseaux neuronaux à convolution'
Khalfaoui Hassani, Ismail. "Convolution dilatée avec espacements apprenables." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES017.
In this thesis, we develop and study the Dilated Convolution with Learnable Spacings (DCLS) method. The DCLS method can be considered an extension of the standard dilated convolution method, but one in which the positions of the weights of a neural network are learned during training by the gradient backpropagation algorithm, thanks to an interpolation technique. We empirically demonstrate the effectiveness of the DCLS method by providing concrete evidence from numerous supervised learning experiments. These experiments are drawn from the fields of computer vision, audio, and speech processing, and all show that the DCLS method has a competitive advantage over standard convolution techniques, as well as over several advanced convolution methods. Our approach is structured in several steps, starting with an analysis of the literature and existing convolution techniques that preceded the development of the DCLS method. We were particularly interested in the methods that are closely related to our own and that remain essential to capture the nuances and uniqueness of our approach. The cornerstone of our study is the introduction and application of the DCLS method to convolutional neural networks (CNNs), as well as to hybrid architectures that rely on both convolutional and visual attention approaches. The DCLS method is particularly noteworthy for its capabilities in supervised computer vision tasks such as classification, semantic segmentation, and object detection, all of which are essential tasks in the field. Having originally developed the DCLS method with bilinear interpolation, we explored other interpolation methods that could replace the bilinear interpolation conventionally used in DCLS and that aim to make the position parameters of the weights in the convolution kernel differentiable. Gaussian interpolation proved to be slightly better in terms of performance. Our research then led us to apply the DCLS method in the field of spiking neural networks (SNNs) to enable synaptic delay learning within a neural network that could eventually be transferred to so-called neuromorphic chips. The results show that the DCLS method stands out as a new state-of-the-art technique for SNN audio classification on certain benchmark tasks in this field. These tasks involve datasets with a high temporal component. In addition, we show that DCLS can significantly improve the accuracy of artificial neural networks for the multi-label audio classification task, a key achievement on one of the most important audio classification benchmarks. We conclude with a discussion of the chosen experimental setup, its limitations, the limitations of our method, and our results.
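To make the mechanism concrete, here is a minimal, illustrative PyTorch sketch of the DCLS idea as the abstract describes it: kernel weights sit at learnable continuous positions inside a larger dilated grid, and bilinear interpolation spreads each weight over its four neighbouring cells so that gradients can move the positions. Class and parameter names are assumptions for illustration, not the thesis' reference implementation.

```python
# Illustrative toy of the DCLS mechanism (assumption: PyTorch; not the thesis' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDCLS2d(nn.Module):
    """Convolution whose kernel elements sit at learnable continuous positions
    inside a larger dilated grid; bilinear interpolation makes the positions
    differentiable, so gradient descent can move them."""
    def __init__(self, in_ch, out_ch, elements=9, grid=7):
        super().__init__()
        self.grid = grid
        self.weight = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, elements))
        # Continuous (row, col) position of each kernel element.
        self.pos = nn.Parameter(torch.rand(2, elements) * (grid - 1))

    def build_kernel(self):
        s = self.grid
        oc, ic, k = self.weight.shape
        r = self.pos[0].clamp(0, s - 1.001)
        c = self.pos[1].clamp(0, s - 1.001)
        r0, c0 = r.floor(), c.floor()
        dr, dc = r - r0, c - c0            # fractional parts carry the gradient
        r0, c0 = r0.long(), c0.long()
        w = self.weight.reshape(oc * ic, k)
        kernel = w.new_zeros(oc * ic, s * s)
        # Spread each weight onto its 4 neighbouring grid cells (bilinear).
        for ri, ci, bw in ((r0, c0, (1 - dr) * (1 - dc)),
                           (r0 + 1, c0, dr * (1 - dc)),
                           (r0, c0 + 1, (1 - dr) * dc),
                           (r0 + 1, c0 + 1, dr * dc)):
            kernel = kernel.index_add(1, ri * s + ci, w * bw)
        return kernel.reshape(oc, ic, s, s)

    def forward(self, x):
        return F.conv2d(x, self.build_kernel(), padding=self.grid // 2)

layer = ToyDCLS2d(3, 8)
out = layer(torch.randn(1, 3, 32, 32))
out.sum().backward()
print(layer.pos.grad.shape)                # positions receive gradients too: [2, 9]
```

Swapping the bilinear spreading for a Gaussian centered at each position gives the Gaussian-interpolation variant the abstract mentions.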
Elbayad, Maha. "Une alternative aux modèles neuronaux séquence-à-séquence pour la traduction automatique." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM012.
In recent years, deep learning has enabled impressive achievements in Machine Translation. Neural Machine Translation (NMT) relies on training deep neural networks with a large number of parameters on vast amounts of parallel data to learn how to translate from one language to another. One crucial factor in the success of NMT is the design of new powerful and efficient architectures. State-of-the-art systems are encoder-decoder models that first encode a source sequence into a set of feature vectors and then decode the target sequence conditioned on the source features. In this thesis we question the encoder-decoder paradigm and advocate for an intertwined encoding of the source and target so that the two sequences interact at increasing levels of abstraction. For this purpose, we introduce Pervasive Attention, a model based on two-dimensional convolutions that jointly encodes the source and target sequences with interactions that are pervasive throughout the network. To improve the efficiency of NMT systems, we explore online machine translation, where the source is read incrementally and the decoder is fed partial contexts so that the model can alternate between reading and writing. We investigate deterministic agents that guide the read/write alternation through a rigid decoding path, and introduce new dynamic agents that estimate a decoding path for each sample. We also address the resource-efficiency of encoder-decoder models and posit that going deeper in a neural network is not required for all instances. We design depth-adaptive Transformer decoders that allow for anytime prediction and sample-adaptive halting mechanisms to favor low-cost predictions for low-complexity instances and save deeper predictions for complex scenarios.
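The joint source/target encoding can be pictured as follows (a hypothetical PyTorch sketch with illustrative dimensions, not the thesis' Pervasive Attention code): every pair of a source position and a target position receives a feature vector, and 2D convolutions then propagate information across the whole grid.

```python
# Sketch of a joint 2D source/target encoding grid (assumption: PyTorch).
import torch
import torch.nn as nn

B, S, T, D = 2, 12, 10, 64                            # batch, source len, target len, dim
src_emb = nn.Embedding(1000, D)
tgt_emb = nn.Embedding(1000, D)
conv = nn.Conv2d(2 * D, D, kernel_size=3, padding=1)

src = torch.randint(0, 1000, (B, S))
tgt = torch.randint(0, 1000, (B, T))
es = src_emb(src)[:, None, :, :].expand(B, T, S, D)   # broadcast over target axis
et = tgt_emb(tgt)[:, :, None, :].expand(B, T, S, D)   # broadcast over source axis
grid = torch.cat([es, et], dim=-1)                    # (B, T, S, 2D): one vector per pair
h = conv(grid.permute(0, 3, 1, 2))                    # 2D convs mix source and target jointly
print(h.shape)                                        # torch.Size([2, 64, 10, 12])
```

A real decoder additionally masks the convolutions along the target axis so that each target position only sees already-generated tokens.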
Pradels, Léo. "Efficient CNN inference acceleration on FPGAs : a pattern pruning-driven approach." Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS087.
CNN-based deep learning models provide state-of-the-art performance in image and video processing tasks, particularly for image enhancement or classification. However, these models are computationally and memory-intensive, making them unsuitable for real-time constraints on embedded FPGA systems. As a result, compressing these CNNs and designing accelerator architectures for inference that integrate compression in a hardware-software co-design approach is essential. While software optimizations like pruning have been proposed, they often lack the structured approach needed for effective accelerator integration. To address these limitations, this thesis focuses on accelerating CNNs on FPGAs while complying with real-time constraints on embedded systems. This is achieved through several key contributions. First, it introduces pattern pruning, which imposes structure on network sparsity, enabling efficient hardware acceleration with minimal accuracy loss due to compression. Second, a scalable accelerator for CNN inference is presented, which adapts its architecture based on input performance criteria, FPGA specifications, and the target CNN model architecture. An efficient method for integrating pattern pruning within the accelerator and a complete flow for CNN acceleration are proposed. Finally, improvements in network compression are explored through Shift&Add quantization, which modifies FPGA computation methods while maintaining baseline network accuracy.
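A minimal sketch of what pattern pruning means in practice (hypothetical PyTorch; the actual pattern dictionary and selection rule in the thesis may differ): every 3x3 kernel keeps only the entries of one mask chosen from a small fixed dictionary, which yields the regular sparsity that hardware can exploit.

```python
# Toy pattern pruning (assumption: PyTorch; illustrative dictionary of 3x3 patterns).
import torch

patterns = torch.tensor([                 # fixed dictionary of 4-entry patterns
    [[1, 1, 0], [1, 1, 0], [0, 0, 0]],
    [[0, 1, 1], [0, 1, 1], [0, 0, 0]],
    [[0, 0, 0], [1, 1, 0], [1, 1, 0]],
    [[0, 0, 0], [0, 1, 1], [0, 1, 1]],
], dtype=torch.float32)                   # (P, 3, 3)

def pattern_prune(weight):
    """For each (out, in) 3x3 kernel, keep the pattern preserving the most
    weight magnitude and zero out everything else."""
    oc, ic, kh, kw = weight.shape
    w = weight.reshape(oc * ic, kh, kw)
    scores = (w.abs()[:, None, :, :] * patterns[None]).sum(dim=(2, 3))  # (N, P)
    best = scores.argmax(dim=1)           # best pattern index per kernel
    mask = patterns[best]                 # (N, 3, 3)
    return (w * mask).reshape(oc, ic, kh, kw), mask.reshape(oc, ic, kh, kw)

w = torch.randn(8, 3, 3, 3)
pruned, mask = pattern_prune(w)
print((pruned != 0).float().mean())       # about 4/9 of entries survive
```

Because every kernel uses one of only a few masks, the accelerator datapath can be specialized per pattern, unlike with unstructured sparsity.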
Gariépy, Alexandre. "Robust parallel-gripper grasp detection using convolutional neural networks." Master's thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/37993.
Grasping is a fundamental robotic task needed for the deployment of household robots or furthering warehouse automation. However, few approaches are able to perform grasp detection in real time (at frame rate). To this effect, we present the Grasp Quality Spatial Transformer Network (GQ-STN), a one-shot grasp detection network. Being based on the Spatial Transformer Network (STN), it produces not only a grasp configuration, but also directly outputs a depth image centered at this configuration. By connecting our architecture to an externally-trained grasp robustness evaluation network, we can train efficiently to satisfy a robustness metric via the backpropagation of the gradient emanating from the evaluation network. This removes the difficulty of training detection networks on sparsely annotated databases, a common issue in grasping. We further propose to use this robustness classifier to compare approaches, as it is more reliable than the traditional rectangle metric. Our GQ-STN is able to detect robust grasps on the depth images of the Dex-Net 2.0 dataset with 92.4 % accuracy in a single pass of the network. We finally demonstrate in a physical benchmark that our method can propose robust grasps more often than previous sampling-based methods, while being more than 60 times faster.
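The spatial-transformer step at the heart of GQ-STN can be pictured with PyTorch's differentiable sampling (a sketch under assumed conventions, not the authors' code): an affine transform predicted by the network re-centers the depth image on the grasp candidate, and the crop stays differentiable so the robustness gradient can flow back through it.

```python
# Differentiable crop around a predicted grasp (assumption: PyTorch conventions;
# the grasp parameters below are hypothetical placeholders).
import math
import torch
import torch.nn.functional as F

depth = torch.randn(1, 1, 96, 96)                    # input depth image
tx, ty, angle, scale = 0.2, -0.1, 0.5, 0.5           # translation, rotation, zoom
cos, sin = math.cos(angle), math.sin(angle)
theta = torch.tensor([[[scale * cos, -scale * sin, tx],
                       [scale * sin,  scale * cos, ty]]])   # (1, 2, 3) affine matrix

grid = F.affine_grid(theta, size=(1, 1, 32, 32), align_corners=False)
crop = F.grid_sample(depth, grid, align_corners=False)      # (1, 1, 32, 32), differentiable
print(crop.shape)
```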
Groueix, Thibault. "Learning 3D Generation and Matching." Thesis, Paris Est, 2020. http://www.theses.fr/2020PESC1024.
The goal of this thesis is to develop deep learning approaches to model and analyse 3D shapes. Progress in this field could democratize artistic creation of 3D assets, which currently requires time and expert skills with technical software. We focus on the design of deep learning solutions for two particular tasks, key to many 3D modeling applications: single-view reconstruction and shape matching. A single-view reconstruction (SVR) method takes as input a single image and predicts the physical world which produced that image. SVR dates back to the early days of computer vision. In particular, in the 1960s, Lawrence G. Roberts proposed to align simple 3D primitives to the input image under the assumption that the physical world is made of cuboids. Another approach, proposed by Berthold Horn in the 1970s, is to decompose the input image into intrinsic images and use those to predict the depth of every input pixel. Since several configurations of shapes, texture and illumination can explain the same image, both approaches need to form assumptions on the distribution of images and 3D shapes to resolve the ambiguity. In this thesis, we learn these assumptions from large-scale datasets instead of manually designing them. Learning allows us to perform complete object reconstruction, including parts which are not visible in the input image. Shape matching aims at finding correspondences between 3D objects. Solving this task requires both a local and global understanding of 3D shapes, which is hard to achieve explicitly. Instead we train neural networks on large-scale datasets to solve this task and capture this knowledge implicitly through their internal parameters. Shape matching supports many 3D modeling applications such as attribute transfer, automatic rigging for animation, or mesh editing. The first technical contribution of this thesis is a new parametric representation of 3D surfaces modeled by neural networks. The choice of data representation is a critical aspect of any 3D reconstruction algorithm. Until recently, most of the approaches in deep 3D model generation were predicting volumetric voxel grids or point clouds, which are discrete representations. Instead, we present an alternative approach that predicts a parametric surface deformation, i.e., a mapping from a template to a target geometry. To demonstrate the benefits of such a representation, we train a deep encoder-decoder for single-view reconstruction using our new representation. Our approach, dubbed AtlasNet, is the first deep single-view reconstruction approach able to reconstruct meshes from images without relying on independent post-processing, and can do so at arbitrary resolution without memory issues. A more detailed analysis of AtlasNet reveals that it also generalizes better to categories it has not been trained on than other deep 3D generation approaches. Our second main contribution is a novel shape matching approach purely based on reconstruction via deformations. We show that the quality of the shape reconstructions is critical to obtain good correspondences, and therefore introduce a test-time optimization scheme to refine the learned deformations. For humans and other deformable shape categories deviating by a near-isometry, our approach can leverage a shape template and isometric regularization of the surface deformations.
As categories exhibiting non-isometric variations, such as chairs, do not have a clear template, we learn how to deform any shape into any other and leverage cycle-consistency constraints to learn meaningful correspondences. Our reconstruction-for-matching strategy operates directly on point clouds, is robust to many types of perturbations, and outperforms the state of the art by 15% on dense matching of real human scans.
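The parametric surface idea behind AtlasNet can be sketched as a small MLP that maps 2D template points, concatenated with a shape latent code, to 3D points (hypothetical PyTorch; layer sizes are illustrative, not the published configuration):

```python
# Sketch of an AtlasNet-style patch decoder (assumption: PyTorch, toy sizes).
import torch
import torch.nn as nn

class PatchDecoder(nn.Module):
    """Maps (u, v) points sampled on a 2D template, conditioned on a shape
    latent code, to points on the 3D surface."""
    def __init__(self, latent=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + latent, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3), nn.Tanh())

    def forward(self, uv, z):
        # uv: (B, N, 2) template samples, z: (B, latent) shape code
        zed = z[:, None, :].expand(-1, uv.shape[1], -1)
        return self.mlp(torch.cat([uv, zed], dim=-1))   # (B, N, 3)

dec = PatchDecoder()
uv = torch.rand(4, 1024, 2)          # the unit square can be sampled at any density
z = torch.randn(4, 128)              # e.g., produced by an image encoder
print(dec(uv, z).shape)              # torch.Size([4, 1024, 3])
```

Because the template can be sampled at arbitrary density, output resolution is decoupled from memory, one of the benefits claimed above.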
Saidane, Zohra. "Reconnaissance de texte dans les images et les vidéos en utilisant les réseaux de neurones à convolutions." PhD thesis, Télécom ParisTech, 2008. http://pastel.archives-ouvertes.fr/pastel-00004685.
Vialatte, Jean-Charles. "Convolution et apprentissage profond sur graphes." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0118/document.
Convolutional neural networks have proven to be the deep learning model that performs best on regularly structured datasets like images or sounds. However, they cannot be applied to datasets with an irregular structure (e.g. sensor networks, citation networks, MRIs). In this thesis, we develop an algebraic theory of convolutions on irregular domains. We construct a family of convolutions based on group actions (or, more generally, groupoid actions) that act on the vertex domain and have properties that depend on the edges. With the help of these convolutions, we propose extensions of convolutional neural networks to graph domains. Our research led us to propose a generic formulation of the propagation between layers, which we call the neural contraction. From this formulation, we derive many novel neural network models that can be applied on irregular domains. Through benchmarks and experiments, we show that they attain state-of-the-art performance, and surpass it in some cases.
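As a much simpler point of comparison (this is a generic neighborhood-aggregation layer, not the thesis' groupoid-based construction), the kind of graph convolution such algebraic formulations generalize looks like this:

```python
# Baseline graph convolution for intuition only (assumption: PyTorch).
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) adjacency with self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin(adj @ x / deg))   # mean over neighbours, shared weights

x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
adj = (((adj + adj.T) > 0).float() + torch.eye(5)).clamp(max=1)  # symmetrize, self-loops
print(GraphConv(16, 32)(x, adj).shape)               # torch.Size([5, 32])
```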
Mamalet, Franck. "Adéquation algorithme-architecture pour les réseaux de neurones à convolution : application à l'analyse de visages embarquée." Thesis, Lyon, INSA, 2011. http://www.theses.fr/2011ISAL0068.
The proliferation of image sensors in many electronic devices, and the increasing processing capabilities of such sensors, open a field of exploration for the implementation and optimization of complex image processing algorithms in order to provide embedded vision systems. This work is a contribution to the research domain of algorithm-architecture matching. It focuses on a class of algorithms called convolutional neural networks (ConvNets) and their applications in embedded facial analysis. The facial analysis framework, introduced by Garcia et al., was chosen for its state-of-the-art performance in detection/recognition, and also for its homogeneity based on ConvNets. The first contribution of this work is an adequacy study of this facial analysis framework with embedded processors. We propose several algorithmic adaptations of ConvNets, and show that they can lead to significant speedup factors (up to 700) on an embedded processor for mobile phones, without performance degradation. We then present a study of ConvNets' parallelization capabilities, through N. Farrugia's PhD work. A coarse-grain parallelism exploration of ConvNets, followed by a study of the internal scheduling of elementary processors, leads to a parameterized parallel architecture on FPGA, able to detect faces at more than 10 VGA frames per second. Finally, we propose an extension of these studies to the learning phase of neural networks. We analyze several hypothesis space restrictions for ConvNets, and show, on a case study, that classification rate performance is almost the same with a training time divided by up to five.
Plouet, Erwan. "Convolutional and dynamical spintronic neural networks." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP120.
This thesis addresses the development of spintronic components for neuromorphic computing, a novel approach aimed at reducing the significant energy consumption of AI applications. The widespread adoption of AI, including very large-scale language models like ChatGPT, has led to increased energy demands, with data centers consuming about 1-2% of global power, projected to double by 2030. Traditional hardware architectures, which separate memory and processing units, are not well-suited for AI tasks, as neural networks require frequent access to large in-memory parameters, resulting in excessive energy dissipation. Neuromorphic computing, inspired by the human brain, merges memory and processing capabilities in the same device, potentially reducing energy use. Spintronics, which manipulates electron spin rather than charge, offers components that can operate at lower power and provide efficient processing solutions. The thesis is divided into two main parts. The first part focuses on the experimental implementation of a hybrid hardware-software convolutional neural network (CNN) using spintronic components. Spintronic synapses, which operate with radio-frequency signals, enable frequency multiplexing to reduce the need for numerous physical connections in neural networks. This research work explores various designs of AMR spin-diode-based synapses, each with different specificities, and demonstrates the integration of these synapses into a hardware CNN. A significant achievement was the implementation of a spintronic convolutional layer within a CNN that, when combined with a software fully-connected layer, successfully classified images from the FashionMNIST dataset with an accuracy of 88%, closely matching the performance of the pure-software equivalent network. Key findings include the development and precise control of spintronic synapses, the fabrication of synaptic chains for weighted summation in neural networks, and the successful implementation of a hybrid CNN with experimental spintronic components on a complex task. The second part of the thesis explores the use of spintronic nano-oscillators (STNOs) for processing time-dependent signals through their transient dynamics. STNOs exhibit nonlinear behaviors that can be utilized for complex tasks like time-series classification. A network of simulated STNOs was trained to discriminate between different types of time series, demonstrating superior performance compared to standard reservoir computing methods. We also proposed and evaluated a multilayer network architecture of STNOs for more complex tasks, such as classifying handwritten digits presented pixel-by-pixel. This architecture achieved an average accuracy of 89.83%, similar to an equivalent standard continuous-time recurrent neural network (CTRNN), indicating the potential of these networks to adapt to various dynamic tasks. Additionally, guidelines were established for matching device dynamics with input timescales, crucial for optimizing performance in networks of dynamic neurons. We demonstrated that multilayer networks of coupled STNOs can be effectively trained via backpropagation through time, highlighting the efficiency and scalability of spintronic neuromorphic computing. This research demonstrated that spintronic networks can be used to implement specific architectures and solve complex tasks.
This paves the way for the creation of compact, low-power spintronic neural networks that could be an alternative to conventional AI hardware, offering a sustainable solution to the growing energy demands of AI technologies.
Achvar, Didier. "Séparation de sources : généralisation à un modèle convolutif." Montpellier 2, 1993. http://www.theses.fr/1993MON20222.
Full textChabot, Florian. "Analyse fine 2D/3D de véhicules par réseaux de neurones profonds." Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC018/document.
In this thesis, we are interested in the fine-grained analysis of vehicles from an image. We define fine-grained analysis as covering the following concepts: vehicle detection in the image, vehicle viewpoint (or orientation) estimation, vehicle visibility characterization, vehicle 3D localization, and make and model recognition. The design of reliable solutions for the fine-grained analysis of vehicles opens the door to multiple applications, in particular for intelligent transport systems as well as video surveillance systems. In this work, we propose several contributions addressing this issue partially or wholly. The proposed approaches jointly build on deep learning technologies and 3D models. In a first section, we deal with make and model classification, keeping in mind the difficulty of creating training data. In a second section, we investigate a novel method for both vehicle detection and fine-grained viewpoint estimation based on local appearance features and geometric spatial coherence. It uses models learned only on synthetic data. Finally, in a third section, a complete system for fine-grained analysis is proposed. It is based on the multi-task concept. Throughout this report, we provide quantitative and qualitative results. On several aspects related to vehicle fine-grained analysis, this work outperforms state-of-the-art methods.
Paillassa, Maxime. "Détection robuste de sources astronomiques par réseaux de neurones à convolutions." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0147.
Extracting reliable source catalogs from images is crucial for a broad range of astronomical research topics. However, the efficiency of current source detection methods becomes severely limited in crowded fields, or when images are contaminated by optical, electronic and environmental defects. Performance in terms of reliability and completeness is now often insufficient with regard to the scientific requirements of large imaging surveys. In this thesis, we develop new methods to produce more robust and reliable source catalogs. We leverage recent advances in deep supervised learning to design generic and reliable models based on convolutional neural networks (CNNs). We present MaxiMask and MaxiTrack, two convolutional neural networks that we trained to automatically identify 13 different types of image defects in astronomical exposures. We also introduce a prototype of a multi-scale CNN-based source detector robust to image defects, which we show to significantly outperform existing algorithms. We discuss the current limitations and potential improvements of our approach in the scope of forthcoming large-scale surveys such as Euclid.
Plesse, François. "Intégration de Connaissances aux Modèles Neuronaux pour la Détection de Relations Visuelles Rares." Thesis, Paris Est, 2020. http://www.theses.fr/2020PESC1003.
Data shared throughout the world has a major impact on the lives of billions of people. It is critical to be able to analyse this data automatically in order to measure and alter its impact. This analysis is tackled by training deep neural networks, which have reached competitive results in many domains. In this work, we focus on the understanding of daily-life images, in particular on the interactions between objects and people that are visible in images, which we call visual relations. To complete this task, neural networks are trained in a supervised manner. This involves minimizing an objective function that quantifies how detected relations differ from annotated ones. Performance of these models thus depends on how widely and accurately the annotations cover the space of visual relations. However, existing annotations are not sufficient to train neural networks to detect uncommon relations. Thus we integrate knowledge into neural networks during the training phase. To do this, we model semantic relationships between visual relations. This provides a fuzzy set of relations that more accurately represents visible relations. Using the semantic similarities between relations, the model is able to learn to detect uncommon relations from similar and more common ones. However, the improved training does not always translate to improved detections, because the objective function does not capture the whole relation detection process. Thus, during the inference phase, we combine knowledge with model predictions in order to predict more relevant relations, aiming to imitate the behaviour of human observers.
Tang, Daogui. "A simulation-based modeling framework for the analysis and protection of smart grids against false pricing attacks." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST017.
The integration of information and communication technology (ICT) systems with power systems enables two-way communication between customers and utilities, which helps engage customers in various demand-response (DR) programs of smart grids (SGs), such as time-of-use (TOU) pricing and real-time pricing (RTP). However, this exposes the SG cyber-physical system to additional threats coming from the ICT layer. For this reason, the threat of cyber attacks of various types has become a major concern. In this context, the focus of the thesis is on the modeling of, detection of, and defense against a specific type of cyber attack on DR schemes, namely, false pricing attacks (FPAs). The study approaches the problem firstly by modeling FPAs initiated in social networks (SNs). The false electricity-price spreading process is described by a multi-level influence propagation model considering customers' personality characteristics and information value. Monte Carlo simulation is utilized to account for the stochastic nature of the influence propagation process. Then, considering the integration of distributed renewable energy resources (DRERs) in the RTP context, we study FPAs where attackers manipulate real-time electricity prices by injecting false consumption and renewable generation information. A convolutional neural network (CNN)-based online detector is developed to detect the considered FPAs. Finally, to mitigate the impact of FPAs, an optimal defense strategy is defined under limited resources. The dynamic interaction between attackers and defenders is modeled as a zero-sum Markov game where neither player has full information about the game model. A model-free multi-agent reinforcement learning method is proposed to solve the game and find the Nash equilibrium policies for both players. The thesis provides a simulation-based framework for modeling FPAs against smart grids. The findings give insights into how FPAs can impact cyber-physical power systems by misleading a portion of customers in the electricity market, and provide implications on how to mitigate such impact by detecting and defending against the attacks.
Fernandez Brillet, Lucas. "Réseaux de neurones CNN pour la vision embarquée." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM043.
Recently, Convolutional Neural Networks have become the state-of-the-art (SOA) solution to most computer vision problems. In order to achieve high accuracy rates, CNNs require a high parameter count, as well as a high number of operations. This greatly complicates the deployment of such solutions in embedded systems, which strive to reduce memory size. Indeed, while most embedded systems are typically in the range of a few KBytes of memory, CNN models from the SOA usually account for multiple MBytes, or even GBytes, in model size. Throughout this thesis, multiple novel ideas allowing to ease this issue are proposed. This requires jointly designing the solution across three main axes: application, algorithm and hardware. In this manuscript, the main levers allowing to tailor the computational complexity of a generic CNN-based object detector are identified and studied. Since object detection requires scanning every possible location and scale across an image through a fixed-input CNN classifier, the number of operations quickly grows for high-resolution images. In order to perform object detection in an efficient way, the detection process is divided into two stages. The first stage involves a region proposal network which allows trading off recall for the number of operations required to perform the search, as well as the number of regions passed on to the next stage. Techniques such as bounding-box regression also greatly help reduce the dimension of the search space. This in turn simplifies the second stage, since it reduces the task's complexity to the set of possible proposals. Therefore, parameter counts can be greatly reduced. Furthermore, CNNs also exhibit properties that confirm their over-dimensioning. This over-dimensioning is one of the key success factors of CNNs in practice, since it eases the optimization process by allowing a large set of equivalent solutions. However, this also greatly increases computational complexity, and therefore complicates deploying the inference stage of these algorithms on embedded systems. In order to ease this problem, we propose a CNN compression method based on Principal Component Analysis (PCA). PCA allows finding, for each layer of the network independently, a new representation of the set of learned filters by expressing them in a more appropriate PCA basis. This PCA basis is hierarchical, meaning that basis terms are ordered by importance, and by removing the least important basis terms, it is possible to optimally trade off approximation error for parameter count. Through this method, it is possible to compress, for example, a ResNet-32 network by a factor of ×2 both in the number of parameters and operations, with a loss of accuracy <2%. It is also shown that the proposed method is compatible with other SOA methods which exploit other CNN properties in order to reduce computational complexity, mainly pruning, Winograd and quantization. Through this method, we have been able to reduce the size of a ResNet-110 from 6.88 MBytes to 370 kBytes, i.e. a ×19 memory gain with a 3.9% accuracy loss.
All this knowledge is applied in order to achieve an efficient CNN-based solution for a consumer face detection scenario. The proposed solution has a model size of just 29.3 kBytes, which is ×65 smaller than other SOA CNN face detectors, while providing equal detection performance and a lower number of operations. Our face detector is also compared to a more traditional Viola-Jones face detector, exhibiting approximately an order of magnitude faster computation, as well as the ability to scale to higher detection rates by slightly increasing computational complexity. Both networks are finally implemented in a custom embedded multiprocessor, verifying that theoretical and measured gains from PCA are consistent. Furthermore, parallelizing the PCA-compressed network over 8 PEs achieves a ×11.68 speed-up with respect to the original network running on a single PE.
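The PCA compression step described above fits in a few lines (a sketch assuming PyTorch; the thesis applies it per layer with a principled choice of the retained rank, which is simplified here):

```python
# Sketch of PCA-based filter compression for one conv layer (assumption: PyTorch).
import torch

def pca_compress(weight, k):
    """Re-express the out_ch filters of a conv layer in a truncated PCA basis.
    Returns the rank-k approximation and the relative reconstruction error."""
    oc = weight.shape[0]
    W = weight.reshape(oc, -1)                     # one row per learned filter
    mean = W.mean(dim=0, keepdim=True)
    U, S, Vh = torch.linalg.svd(W - mean, full_matrices=False)
    W_k = mean + (U[:, :k] * S[:k]) @ Vh[:k]       # keep k principal components
    err = (W - W_k).norm() / W.norm()
    return W_k.reshape(weight.shape), err

w = torch.randn(64, 32, 3, 3)
w_k, err = pca_compress(w, k=16)                   # keep 16 of 64 principal directions
print(f"relative error: {err.item():.3f}")
```

Storing the k basis vectors plus one k-dimensional coefficient vector per filter, instead of the full filters, is what trades approximation error for parameter count.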
Poulenard, Adrien. "Structures for deep learning and topology optimization of functions on 3D shapes." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX007.
The field of geometry processing is following a similar path as image analysis, with the explosion of publications dedicated to deep learning in recent years. An important research effort is being made to reproduce the successes of deep learning in 2D computer vision in the context of 3D shape analysis. Unlike images, shapes come in various representations, like meshes or point clouds, which often lack canonical structure. This makes traditional deep learning algorithms like Convolutional Neural Networks (CNNs) not straightforward to apply to 3D data. In this thesis we propose three main contributions. First, we introduce a method to compare functions on different domains without correspondences and to deform them to make the topology of their level sets more alike. We apply our method to the classical problem of shape matching in the context of functional maps to produce smoother and more accurate correspondences. Furthermore, our method is based on the continuous optimization of a differentiable energy with respect to the compared functions and is applicable to deep learning. We then make two direct contributions to deep learning on 3D data. We introduce a new convolution operator over triangle meshes based on local polar coordinates and apply it to deep learning on meshes. Unlike previous works, our operator takes all choices of polar coordinates into account without loss of directional information. Lastly, we introduce a new rotation-invariant convolution layer over point clouds and show that CNNs based on this layer can outperform state-of-the-art methods in standard tasks on unaligned datasets, even with data augmentation.
Haj Hassan, Hawraa. "Détection et classification temps réel de biocellules anormales par technique de segmentation d’images." Electronic Thesis or Diss., Université de Lorraine, 2018. http://www.theses.fr/2018LORR0043.
Developing methods to aid diagnosis through the real-time detection of abnormal cells (which can be considered cancer cells) by bio-image processing is among the most important research directions in information science and technology. Our work has been concerned with developing automatic reading procedures for normal and abnormal bio-image tissues. The first step of our work is therefore to detect a certain type of abnormal bio-image, associated with the evolution of many types of cancer, within a microscopic multispectral image, i.e., an image acquired at many wavelengths. We use a new segmentation method that reshapes itself in an iterative, adaptive way to localize and cover the real cell contour. It is based on color intensity and can be applied to sequences of objects in the image. This work then presents a classification of the abnormal tissues using a convolutional neural network (CNN), applied to the microscopic images segmented using the snake method, which gives high-performance results with respect to the other segmentation methods. This classification method reaches high performance values: 100% for training and 99.168% for testing. It was compared to approaches from different papers that use different feature extraction, and proved its high performance with respect to other methods. As future work, we aim to validate our approach on larger datasets and to explore different CNN architectures and the optimization of the hyper-parameters in order to increase performance; the approach will also be applied to relevant medical imaging tasks, including computer-aided diagnosis.
Martineau, Maxime. "Deep learning onto graph space : application to image-based insect recognition." Thesis, Tours, 2019. http://www.theses.fr/2019TOUR4024.
The goal of this thesis is to investigate insect recognition as an image-based pattern recognition problem. Although this problem has been extensively studied over the previous three decades, one element had, to the best of our knowledge, still not been experimented with as of 2017: deep approaches. Therefore, a first contribution concerns determining to what extent deep convolutional neural networks (CNNs) can be applied to image-based insect recognition. Graph-based representations and methods have also been tested. Two attempts are presented: the former consists in designing a graph-perceptron classifier, and the latter defines convolution on graphs to build graph convolutional neural networks. The last chapter of the thesis deals with applying most of the aforementioned methods to insect image recognition problems. Two datasets are proposed. The first one consists of lab-based images with constant background. The second one is generated from an ImageNet subset and is composed of field-based images. CNNs with transfer learning are the most successful method applied to these datasets.
Wang, Lianfa. "Improving the confidence of CFD results by deep learning." Electronic Thesis or Diss., Université Paris sciences et lettres, 2024. http://www.theses.fr/2024UPSLM008.
Computational Fluid Dynamics (CFD) has become an indispensable tool for studying complex flow phenomena in both research and industry over the years. The accuracy of CFD simulations depends on various parameters (geometry, mesh, schemes, solvers, etc.) as well as phenomenological knowledge that only an expert CFD engineer can configure and optimize. The objective of this thesis is to propose an AI assistant to help users, whether they are experts or not, to better choose simulation options and ensure the reliability of results for a target flow phenomenon. In this context, deep learning algorithms are explored to identify the characteristics of flows computed on structured and unstructured meshes of complex geometries. Initially, convolutional neural networks (CNNs), known for their ability to extract patterns from images, are used to identify flow phenomena such as vortices and thermal stratification on structured 2D meshes. Although the results obtained on structured meshes are satisfactory, CNNs can only be applied to structured meshes. To overcome this limitation, a graph neural network (GNN) framework is proposed. This framework uses the U-Net architecture and a hierarchy of successively refined graphs, built through an algebraic multigrid (AMG) method inspired by the one used in the Code_Saturne CFD code. Subsequently, an in-depth study of kernel functions was conducted according to identification accuracy and training efficiency criteria, to better filter the different phenomena on unstructured meshes. After comparing the kernel functions available in the literature, a new kernel function based on the Gaussian mixture model was proposed, better suited to identifying flow phenomena on unstructured meshes. The superiority of the proposed architecture and kernel function is demonstrated by several numerical experiments identifying 2D vortices, as well as by its adaptability to identifying the characteristics of a 3D flow.
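The Gaussian-mixture kernel idea can be pictured as follows (a toy PyTorch sketch in the spirit of GMM-based graph kernels; the thesis' exact formulation may differ, and all names are illustrative): each edge carries a geometric pseudo-coordinate, and every mixture component turns it into a soft weight used to aggregate neighbour features.

```python
# Toy Gaussian-mixture kernel for graph convolution (assumption: PyTorch).
import torch
import torch.nn as nn

class GMMKernelConv(nn.Module):
    def __init__(self, in_dim, out_dim, n_kernels=4, coord_dim=3):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_kernels, coord_dim))         # component means
        self.log_sigma = nn.Parameter(torch.zeros(n_kernels, coord_dim))  # diagonal scales
        self.lin = nn.Linear(n_kernels * in_dim, out_dim)

    def forward(self, x, edge_index, edge_coord):
        # x: (N, F) node features; edge_index: (2, E) src -> dst;
        # edge_coord: (E, coord_dim) geometric pseudo-coordinates per edge
        src, dst = edge_index
        diff = edge_coord[:, None, :] - self.mu[None]                     # (E, K, C)
        w = torch.exp(-0.5 * (diff / self.log_sigma.exp()).pow(2).sum(-1))  # (E, K)
        msg = w[:, :, None] * x[src][:, None, :]                          # (E, K, F)
        out = x.new_zeros(x.shape[0], *msg.shape[1:])
        out.index_add_(0, dst, msg)                                       # sum incoming edges
        return self.lin(out.flatten(1))

x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 0]])
edge_coord = torch.randn(6, 3)
print(GMMKernelConv(8, 16)(x, edge_index, edge_coord).shape)  # torch.Size([6, 16])
```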
Yedroudj, Mehdi. "Steganalysis and steganography by deep learning." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS095.
Image steganography is the art of secret communication for exchanging a hidden message. Image steganalysis, on the other hand, attempts to detect the presence of a hidden message by searching for artefacts within an image. For about ten years, the classic approach to steganalysis was to use an ensemble classifier fed with hand-crafted features. In recent years, studies have shown that well-designed convolutional neural networks (CNNs) can achieve superior performance compared to conventional machine-learning approaches. The subject of this thesis is the use of deep learning techniques for image steganography and steganalysis in the spatial domain. The first contribution is a fast and very effective convolutional neural network for steganalysis, named Yedroudj-Net. Compared to modern deep-learning-based steganalysis methods, Yedroudj-Net achieves state-of-the-art detection results, but also takes less time to converge, allowing the use of a large training set. Moreover, Yedroudj-Net can easily be improved with well-known add-ons. Among these add-ons, we have evaluated data augmentation and the use of an ensemble of CNNs; both increase our CNN's performance. The second contribution is the application of deep learning techniques to steganography, i.e., the embedding. Among the existing techniques, we focus on the 3-player game approach. We propose an embedding algorithm that automatically learns how to hide a message secretly. Our proposed steganography system is based on the use of generative adversarial networks. The training of this steganographic system is conducted using three neural networks that compete against each other: the embedder, the extractor, and the steganalyzer. For the steganalyzer we use Yedroudj-Net, for its affordable size and because its training does not require tricks that could increase the computational time. This second contribution defines a research direction, giving initial reflections along with promising first results.
Martin, Pierre-Etienne. "Détection et classification fines d'actions à partir de vidéos par réseaux de neurones à convolutions spatio-temporelles : Application au tennis de table." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0313.
Action recognition in videos is one of the key problems in visual data interpretation. Despite intensive research, differentiating and recognizing similar actions remains a challenge. This thesis deals with the fine-grained classification of sport gestures from videos, with an application to table tennis. In this manuscript, we propose a method based on deep learning for automatically segmenting and classifying table tennis strokes in videos. Our aim is to design a smart system for students and teachers to analyze their performance. By profiling the players, a teacher can tailor the training sessions more efficiently in order to improve their skills. Players can also get instant feedback on their performance. For developing such a system with fine-grained classification, a very specific dataset is needed to supervise the learning process. To that aim, we built the "TTStroke-21" dataset, which is composed of 20 stroke classes plus a rejection class. The TTStroke-21 dataset comprises video clips of recorded table tennis exercises performed by students at the sport faculty of the University of Bordeaux - STAPS. These recorded sessions were annotated by professional players or teachers using a crowdsourced annotation platform. The annotations consist of a description of the handedness of the player and information on each stroke performed (starting and ending frames, class of the stroke). Fine-grained action recognition has some notable differences from coarse-grained action recognition. In datasets used for coarse-grained action recognition, the background context often provides discriminative information that methods can use to classify the action, rather than focusing on the action itself. In fine-grained classification, where the inter-class similarity is high, discriminative visual features are harder to extract and motion plays a key role in characterizing an action. In this thesis, we introduce a Twin Spatio-Temporal Convolutional Neural Network. This deep learning network takes as inputs an RGB image sequence and its computed optical flow. The RGB image sequence allows our model to capture appearance features while the optical flow captures motion features. These two streams are processed in parallel using 3D convolutions and fused at the last stage of the network. The spatio-temporal features extracted in the network allow efficient classification of video clips from TTStroke-21. Our method achieves an average classification performance of 87.3%, with a best run of 93.2% accuracy on the test set. When applied to the joint detection and classification task, the proposed method reaches an accuracy of 82.6%. A systematic study of the influence of each stream and of the fusion type on classification accuracy has been performed, giving clues on how to obtain the best performance. A comparison of different optical flow methods and of the role of their normalization on the classification score is also presented. The extracted features are also analyzed by back-tracing strong features from the last convolutional layer to understand the decision path of the trained model. Finally, we introduce an attention mechanism to help the model focus on particular characteristic features and also to speed up the training process. For comparison purposes, we provide performances of other methods on TTStroke-21 and test our model on other datasets. We notice that models performing well on coarse-grained action datasets do not always perform well on our fine-grained action dataset. The research presented in this manuscript was validated with publications in one international journal, five international conference papers, two international workshop papers, and a recurring task in the MediaEval workshop in which participants can apply their action recognition methods to TTStroke-21. Two additional international workshop papers are in preparation, along with one book chapter.
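The twin-stream design reads naturally as two parallel 3D-convolutional branches fused late (a structural sketch in PyTorch; real depths, widths and the attention mechanism are omitted, and the sizes below are assumptions):

```python
# Structural sketch of a twin spatio-temporal CNN (assumption: PyTorch, toy sizes).
import torch
import torch.nn as nn

def branch():
    return nn.Sequential(
        nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool3d(2),
        nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten())

class TwinSTCNN(nn.Module):
    def __init__(self, n_classes=21):                 # 20 strokes + rejection
        super().__init__()
        self.rgb = branch()                           # appearance stream
        self.flow = branch()                          # motion stream
        self.head = nn.Linear(64, n_classes)          # fusion at the last stage

    def forward(self, rgb, flow):
        return self.head(torch.cat([self.rgb(rgb), self.flow(flow)], dim=1))

model = TwinSTCNN()
rgb = torch.randn(2, 3, 16, 112, 112)                 # (B, C, T, H, W) clip
flow = torch.randn(2, 3, 16, 112, 112)                # optical flow input (channels vary)
print(model(rgb, flow).shape)                         # torch.Size([2, 21])
```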
Heuillet, Alexandre. "Exploring deep neural network differentiable architecture design." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG069.
Artificial Intelligence (AI) has gained significant popularity in recent years, primarily due to its successful applications in various domains, including textual data analysis, computer vision, and audio processing. The resurgence of deep learning techniques has played a central role in this success. The groundbreaking AlexNet paper by Krizhevsky et al. narrowed the gap between human and machine performance in image classification tasks. Subsequent papers such as Xception and ResNet have further solidified deep learning as a leading technique, opening new horizons for the AI community. The success of deep learning lies in its architectures, which are manually designed with expert knowledge and empirical validation. However, these architectures come with no certainty of being an optimal solution. To address this issue, recent papers introduced the concept of Neural Architecture Search (NAS), enabling deep architectures to be learned. However, most initial approaches focused on large architectures with specific targets (e.g., supervised learning) and relied on computationally expensive optimization techniques such as reinforcement learning and evolutionary algorithms. In this thesis, we further investigate this idea by exploring automatic deep architecture design, with a particular emphasis on differentiable NAS (DNAS), which represents the current trend in NAS due to its computational efficiency. While our primary focus is on Convolutional Neural Networks (CNNs), we also explore Vision Transformers (ViTs), with the goal of designing cost-effective architectures suitable for real-time applications.
Li, Xuhong. "Regularization schemes for transfer learning with convolutional networks." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2497/document.
Transfer learning with deep convolutional neural networks significantly reduces the computation and data overhead of the training process and boosts performance on the target task, compared to training from scratch. However, transfer learning with a deep network may cause the model to forget the knowledge acquired when learning the source task, leading to so-called catastrophic forgetting. Since the efficiency of transfer learning derives from the knowledge acquired on the source task, this knowledge should be preserved during transfer. This thesis addresses this forgetting problem by proposing two regularization schemes that preserve the knowledge during transfer. First, we investigate several forms of parameter regularization, all of which explicitly promote the similarity of the final solution with the initial model, based on the L1, L2, and Group-Lasso penalties. We also propose variants that use Fisher information as a metric for measuring the importance of parameters. We validate these parameter regularization approaches on various tasks. The second regularization scheme is based on the theory of optimal transport, which enables estimating the dissimilarity between two distributions. We use optimal transport to penalize deviations of the high-level representations between the source and target task, with the same objective of preserving knowledge during transfer learning. With a mild increase in computation time during training, this novel regularization approach improves the performance of the target tasks, and yields higher accuracy on image classification tasks compared to parameter regularization approaches.
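The first scheme, penalizing deviation from the pre-trained starting point rather than from zero, fits in a few lines (a sketch assuming PyTorch; the regularization strength and the L2 choice are illustrative, the thesis also studies L1, Group-Lasso and Fisher-weighted variants):

```python
# Sketch of a starting-point parameter regularizer for transfer learning
# (assumption: PyTorch; alpha is an illustrative hyper-parameter).
import torch

def sp_penalty(model, init_state, alpha=0.01):
    """L2 penalty toward the pre-trained weights instead of toward zero,
    so fine-tuning preserves source-task knowledge."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (p - init_state[name]).pow(2).sum()
    return alpha * loss

model = torch.nn.Linear(10, 2)                 # stand-in for a pre-trained network
init_state = {n: p.detach().clone() for n, p in model.named_parameters()}
out = model(torch.randn(4, 10))
loss = out.pow(2).mean() + sp_penalty(model, init_state)   # task loss + SP regularizer
loss.backward()
```

Weighting each squared deviation by an estimate of the parameter's Fisher information gives the importance-aware variants mentioned above.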
Barhoumi, Amira. "Une approche neuronale pour l’analyse d’opinions en arabe." Thesis, Le Mans, 2020. http://www.theses.fr/2020LEMA1022.
My thesis falls within Arabic sentiment analysis. Its aim is to determine the global polarity of a given textual statement written in MSA or dialectal Arabic. This research area has been the subject of numerous studies dealing with Indo-European languages, in particular English. One of the difficulties confronting this thesis is the processing of Arabic. In fact, Arabic is a morphologically rich language, which implies greater sparsity: we want to overcome this problem by producing, in a completely automatic way, new Arabic-specific embeddings. Our study focuses on the use of a neural approach to improve polarity detection, using embeddings. These embeddings have proven fundamental in various natural language processing (NLP) tasks. Our contribution in this thesis concerns several axes. First, we begin with a preliminary study of the various existing pre-trained word embedding resources in Arabic. These embeddings consider words as space-separated units in order to capture semantic and syntactic similarities in the embedding space. Second, we focus on the specificity of the Arabic language. We propose Arabic-specific embeddings that take into account the agglutination and morphological richness of Arabic. These specific embeddings have been used, alone and in combination, as input to neural networks, providing an improvement in terms of classification performance. Finally, we evaluate embeddings with intrinsic and extrinsic methods specific to the sentiment analysis task. For intrinsic embedding evaluation, we propose a new protocol introducing the notion of sentiment stability in the embedding space. We also propose a qualitative extrinsic analysis of our embeddings by using visualisation methods.
Haykal, Vanessa. "Modélisation des séries temporelles par apprentissage profond." Thesis, Tours, 2019. http://www.theses.fr/2019TOUR4019.
Time series prediction is a problem that has been addressed for many years. In this thesis, we have been interested in methods resulting from deep learning. It is well known that if the relationships between the data are temporal, they are difficult to analyze and predict accurately due to non-linear trends and the presence of noise, specifically in financial and electrical series. From this context, we propose a new hybrid noise-reduction architecture that models the recursive error series to improve predictions. The learning process simultaneously fuses a convolutional neural network (CNN) and a recurrent long short-term memory (LSTM) network. This model is distinguished by its ability to capture a variety of hybrid properties globally: it is able to extract local signal features, to learn long-term and non-linear dependencies, and to offer high noise resistance. The second contribution concerns the limitations of global approaches due to the dynamic switching regimes in the signal. We present a local, unsupervised modification of our previous architecture in order to adjust the results by adapting a Hidden Markov Model (HMM). Finally, we were also interested in multi-resolution techniques to improve the performance of the convolutional layers, notably by using the variational mode decomposition (VMD) method.
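A minimal skeleton of such a CNN/LSTM hybrid for one-step-ahead prediction (assumed PyTorch with toy sizes; the thesis' error-modeling recursion, HMM adaptation and VMD preprocessing are not shown):

```python
# Skeleton of a hybrid CNN + LSTM forecaster (assumption: PyTorch, toy sizes).
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, channels=16, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(                     # local pattern extraction
            nn.Conv1d(1, channels, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)  # long-range memory
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (B, T) raw series -> (B, 1, T) for Conv1d
        h = self.conv(x[:, None, :])                   # (B, C, T)
        out, _ = self.lstm(h.transpose(1, 2))          # (B, T, H)
        return self.head(out[:, -1])                   # predict the next value

model = CNNLSTM()
x = torch.randn(8, 64)                                 # 8 windows of 64 time steps
print(model(x).shape)                                  # torch.Size([8, 1])
```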
Fourure, Damien. "Réseaux de neurones convolutifs pour la segmentation sémantique et l'apprentissage d'invariants de couleur." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSES056/document.
Full textComputer vision is an interdisciplinary field that investigates how computers can gain a high level of understanding from digital images or videos. In artificial intelligence, and more precisely in machine learning, the field in which this thesis is positioned,computer vision involves extracting characteristics from images and then generalizing concepts related to these characteristics. This field of research has become very popular in recent years, particularly thanks to the results of the convolutional neural networks that form the basis of so-called deep learning methods. Today, neural networks make it possible, among other things, to recognize different objects present in an image, to generate very realistic images or even to beat the champions at the Go game. Their performance is not limited to the image domain, since they are also used in other fields such as natural language processing (e. g. machine translation) or sound recognition. In this thesis, we study convolutional neural networks in order to develop specialized architectures and loss functions for low-level tasks (color constancy) as well as high-level tasks (semantic segmentation). Color constancy, is the ability of the human visual system to perceive constant colours for a surface despite changes in the spectrum of illumination (lighting change). In computer vision, the main approach consists in estimating the color of the illuminant and then suppressing its impact on the perceived color of objects. We approach the task of color constancy with the use of neural networks by developing a new architecture composed of a subsampling operator inspired by traditional methods. Our experience shows that our method makes it possible to obtain competitive performances with the state of the art. Nevertheless, our architecture requires a large amount of training data. In order to partially correct this problem and improve the training of neural networks, we present several techniques for artificial data augmentation. We are also making two contributions on a high-level issue : semantic segmentation. This task, which consists of assigning a semantic class to each pixel of an image, is a challenge in computer vision because of its complexity. On the one hand, it requires many examples of training that are costly to obtain. On the other hand, it requires the adaptation of traditional convolutional neural networks in order to obtain a so-called dense prediction, i. e., a prediction for each pixel present in the input image. To solve the difficulty of acquiring training data, we propose an approach that uses several databases annotated with different labels at the same time. To do this, we define a selective loss function that has the advantage of allowing the training of a convolutional neural network from data from multiple databases. We also developed self-context approach that captures the correlations between labels in different databases. Finally, we present our third contribution : a new convolutional neural network architecture called GridNet specialized for semantic segmentation. Unlike traditional networks, implemented with a single path from the input (image) to the output (prediction), our architecture is implemented as a 2D grid allowing several interconnected streams to operate at different resolutions. In order to exploit all the paths of the grid, we propose a technique inspired by dropout. In addition, we empirically demonstrate that our architecture generalize many of well-known stateof- the-art networks. 
We conclude with an analysis of the empirical results obtained with our architecture which, although trained from scratch, achieves very good performance, exceeding that of popular approaches that are often pre-trained
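As an illustration of the multi-database idea, here is a minimal sketch of a selective cross-entropy loss in PyTorch: the softmax is restricted to the classes actually annotated in the source database, so the network receives no gradient for labels a given dataset does not provide. The class lists and shapes are hypothetical, not those of the thesis.

```python
# A minimal sketch of a "selective" cross-entropy loss for training one
# segmentation network on several datasets with different label sets.
import torch
import torch.nn.functional as F

def selective_loss(logits, target, dataset_classes):
    """logits: (B, C, H, W) over the union of all classes;
    target: (B, H, W) local indices into `dataset_classes`;
    dataset_classes: global class indices annotated in the source dataset."""
    # Restrict the softmax to the classes the source dataset actually labels,
    # so unlabeled classes receive no (possibly wrong) gradient.
    sub_logits = logits[:, dataset_classes]          # (B, C_sub, H, W)
    return F.cross_entropy(sub_logits, target)

# Toy usage: a batch from a dataset annotating only global classes 0, 3 and 5.
logits = torch.randn(2, 8, 16, 16, requires_grad=True)  # 8 classes in the union
target = torch.randint(0, 3, (2, 16, 16))                # local indices 0..2
loss = selective_loss(logits, target, torch.tensor([0, 3, 5]))
loss.backward()
```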
Firmo, Drumond Thalita. "Apports croisées de l'apprentissage hiérarchique et la modélisation du système visuel : catégorisation d'images sur des petits corpus de données." Thesis, Bordeaux, 2020. https://tel.archives-ouvertes.fr/tel-03129189.
Full textDeep convolutional neural networks (DCNN) have recently driven a revolution in large-scale object recognition. They have changed the usual computer vision practice of hand-engineered features, thanks to their ability to hierarchically learn representative features from data together with a pertinent classifier. Together with hardware advances, they have made it possible to effectively exploit the ever-growing amounts of image data gathered online. However, in specific domains like healthcare and industrial applications, data is much less abundant, and expert labeling costs are higher than those of general-purpose image datasets. This scarcity scenario leads to this thesis' core question: can these limited-data domains profit from the advantages of DCNNs for image classification? This question is addressed throughout this work, based on an extensive study of the literature, divided in two main parts, followed by the proposal of original models and mechanisms. The first part reviews object recognition from an interdisciplinary double viewpoint. First, it seeks to understand the function of vision from a biological stance, comparing and contrasting it with DCNN models in terms of structure, function and capabilities. Second, a state-of-the-art review is established, aiming to identify the main architectural categories and innovations in modern-day DCNNs. This interdisciplinary basis fosters the identification of potential mechanisms, inspired both by biological and artificial structures, that could improve image recognition in difficult situations. Recurrent processing is a clear example: while not completely absent from the "deep vision" literature, it has mostly been applied to videos, due to their inherently sequential nature. From biology, however, it is clear that such processing plays a role in refining our perception of a still scene. This theme is further explored through a dedicated literature review focused on recurrent convolutional architectures used in image classification. The second part carries on in the spirit of improving DCNNs, this time focusing more specifically on our central question: deep learning over small datasets. First, the work proposes a more detailed and precise discussion of the small-sample problem and its relation to learning hierarchical features with deep models. This discussion is followed by a structured view of the field, organizing and discussing the different possible paths towards adapting deep models to limited-data settings. Rather than a raw listing, this review aims to make sense of the myriad of approaches in the field, grouping methods with similar intent or mechanism of action, in order to guide the development of custom solutions for small-data applications. Second, this study is complemented by an experimental analysis, exploring small-data learning with the proposition of original models and mechanisms (previously published as a journal paper). In conclusion, it is possible to apply deep learning to small datasets and obtain good results, if done in a thoughtful fashion. On the data path, one should try to gather more information from additional related data sources if available. On the complexity path, architecture and training methods can be calibrated in order to profit the most from any available domain-specific side-information. Proposals concerning both of these paths are discussed in detail throughout this document. 
Overall, while there are multiple ways of reducing the complexity of deep learning with small data samples, there is no universal solution. Each method has its own drawbacks and practical difficulties and needs to be tailored specifically to the target perceptual task at hand
Zossou, Vincent-Béni Sèna. "Détection du carcinome hépatocellulaire et des métastases hépatiques basée sur les images tomodensitométriques et l'apprentissage automatique." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASR034.
Full textRadiologists use a series of images from abdominal computed tomography (CT) scans to examine the liver and diagnose potential pathologies. However, this process is often lengthy, complex, and prone to human error. Recent studies have shown that artificial intelligence (AI) has opened new horizons in medical imaging, allowing for earlier detection of liver cancers and optimizing the entire diagnostic process. In Africa, particularly in Benin, few studies have been conducted on the use of these techniques, largely due to a lack of equipment and local data. This thesis addresses this gap by proposing AI techniques for automatically detecting and classifying liver lesions from CT scans. Specifically, it presents a tool that includes: (i) a liver and lesion segmentation model based on a neural network, (ii) a radiomic signature to better characterize liver conditions, (iii) a lesion classification model using convolutional neural networks, and (iv) a diagnostic assistance platform to improve patient care. The results demonstrate improvements over existing solutions, paving the way for broader adoption of these technologies, with the aim of improving healthcare quality and reducing medical errors
Beltzung, Benjamin. "Utilisation de réseaux de neurones convolutifs pour mieux comprendre l’évolution et le développement du comportement de dessin chez les Hominidés." Electronic Thesis or Diss., Strasbourg, 2023. http://www.theses.fr/2023STRAJ114.
Full textThe study of drawing behavior can be highly informative, both cognitively and psychologically, in humans and other primates. However, this wealth of information can also be challenging to analyze and interpret, particularly in the absence of explanation or verbalization by the author of the drawing. Indeed, an adult's interpretation of a drawing may not be in line with the artist's original intention. During my thesis, I showed that, although generally regarded as black boxes, convolutional neural networks (CNNs) can provide a better understanding of drawing behavior. First, I used a CNN to classify drawings of a female orangutan according to their season of production, highlighting variation in style and content. In addition, an ontogenetic approach was considered to quantify the similarity between productions from different age groups. In the future, more interpretable models and the application of new interpretability methods could help to better decipher drawing behavior
Suzano, Massa Francisco Vitor. "Mise en relation d'images et de modèles 3D avec des réseaux de neurones convolutifs." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1198/document.
Full textThe recent availability of large catalogs of 3D models enables new possibilities for 3D reasoning on photographs. This thesis investigates the use of convolutional neural networks (CNNs) for relating 3D objects to 2D images. We first introduce two contributions that are used throughout this thesis: an automatic memory reduction library for deep CNNs, and a study of CNN features for cross-domain matching. In the first one, we develop a library built on top of Torch7 which automatically reduces up to 91% of the memory requirements for deploying a deep CNN. In the second, we study the effectiveness of various CNN features extracted from a pre-trained network in the case of images from different modalities (real or synthetic images). We show that despite the large cross-domain difference between rendered views and photographs, it is possible to use some of these features for instance retrieval, with possible applications to image-based rendering. CNNs have recently been used for the task of object viewpoint estimation, sometimes with very different design choices. We present these approaches in a unified framework and analyze the key factors that affect performance. We propose a joint training method that combines both detection and viewpoint estimation, which performs better than considering viewpoint estimation separately. We also study the impact of formulating viewpoint estimation either as a discrete or a continuous task, quantify the benefits of deeper architectures, and demonstrate that using synthetic data is beneficial. With all these elements combined, we improve over previous state-of-the-art results on the Pascal3D+ dataset by approximately 5% in mean average viewpoint precision. In the instance retrieval study, the image of the object is given and the goal is to identify, among a number of 3D models, which object it is. We extend this work to object detection, where instead we are given a 3D model (or a set of 3D models) and are asked to locate and align the model in the image. We show that simply using CNN features is not enough for this task, and we propose to learn a transformation that brings the features from the real images close to the features from the rendered views (a sketch is given below). We evaluate our approach both qualitatively and quantitatively on two standard datasets: the IKEA object dataset, and a subset of the Pascal VOC 2012 dataset for the chair category, and we show state-of-the-art results on both of them
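The following is a minimal sketch, under assumed feature dimensions, of the kind of learned transformation described above: a small MLP trained to bring descriptors of real photographs close to descriptors of rendered views of the same instance. It is an illustration, not the thesis implementation.

```python
# Hedged sketch: learn a mapping from real-image CNN features to the feature
# space of rendered views; `feat_dim` and the architecture are assumptions.
import torch
import torch.nn as nn

feat_dim = 512  # assumed dimensionality of the pre-trained CNN features

adapter = nn.Sequential(      # small MLP applied to real-image features
    nn.Linear(feat_dim, feat_dim), nn.ReLU(),
    nn.Linear(feat_dim, feat_dim),
)
opt = torch.optim.Adam(adapter.parameters(), lr=1e-4)

def adaptation_step(real_feats, rendered_feats):
    """Pairs (real photo, rendered view) of the same object instance."""
    opt.zero_grad()
    loss = ((adapter(real_feats) - rendered_feats) ** 2).mean()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random stand-ins for precomputed descriptors.
print(adaptation_step(torch.randn(32, feat_dim), torch.randn(32, feat_dim)))
```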
Morère, Olivier André Luc. "Deep learning compact and invariant image representations for instance retrieval." Electronic Thesis or Diss., Paris 6, 2016. http://www.theses.fr/2016PA066406.
Full textImage instance retrieval is the problem of finding an object instance present in a query image from a database of images. Also referred to as particular object retrieval, this problem typically entails determining with high precision whether the retrieved image contains the same object as the query image. Scale, rotation and orientation changes between query and database objects, as well as background clutter, pose significant challenges for this problem. State-of-the-art image instance retrieval pipelines consist of two major steps: first, a subset of images similar to the query is retrieved from the database, and second, Geometric Consistency Checks (GCC) are applied to select the relevant images from the subset with high precision. The first step is based on comparison of global image descriptors: high-dimensional vectors with up to tens of thousands of dimensions representing the image data. The second step is computationally highly complex and can only be applied to hundreds or thousands of images in practical applications. More discriminative global descriptors result in relevant images being ranked higher, leaving fewer images that need to be compared pairwise with GCC. As a result, better global descriptors are key to improving retrieval performance and have been the object of much recent interest. Furthermore, fast searches in large databases of millions or even billions of images require the global descriptors to be compressed into compact representations. This thesis focuses on how to achieve extremely compact global descriptor representations for large-scale image instance retrieval. After introducing background concepts about supervised neural networks, Restricted Boltzmann Machines (RBM) and deep learning in Chapter 2, Chapter 3 presents the design principles and recent work on Convolutional Neural Networks (CNN), which recently became the method of choice for large-scale image classification tasks. Next, an original multistage approach for the fusion of the outputs of multiple CNNs is proposed. Submitted as part of the ILSVRC 2014 challenge, results show that this approach can significantly improve classification results. The promising performance of CNNs is largely due to their capability to learn appropriate high-level visual representations from the data. Inspired by a stream of recent works showing that the representations learnt on one particular classification task can transfer well to other classification tasks, subsequent chapters focus on the transferability of representations learnt by CNNs to image instance retrieval…
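As a hedged illustration of compact global descriptors for retrieval, the sketch below extracts a pooled CNN feature with torchvision and compresses it to a 64-bit code by random-projection hashing. The abstract introduces RBMs as background and does not detail the actual compression scheme, so every choice here (backbone, dimensions, hashing) is an assumption.

```python
# Hedged sketch: global CNN descriptor extraction plus a naive binary
# compression, illustrating compact codes for large-scale retrieval.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()         # keep the 512-d pooled feature
backbone.eval()

proj = torch.randn(512, 64)               # random projection to 64 bits

@torch.no_grad()
def global_code(image_tensor):            # image_tensor: (3, H, W), normalized
    feat = backbone(image_tensor.unsqueeze(0))     # (1, 512)
    feat = torch.nn.functional.normalize(feat)     # L2 normalization
    return (feat @ proj > 0).squeeze(0)            # (64,) boolean hash

# Hamming distance between two codes ranks database images at query time.
a = global_code(torch.randn(3, 224, 224))
b = global_code(torch.randn(3, 224, 224))
print((a ^ b).sum().item())
```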
Pham, Huy-Hieu. "Architectures d'apprentissage profond pour la reconnaissance d'actions humaines dans des séquences vidéo RGB-D monoculaires : application à la surveillance dans les transports publics." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30145.
Full textThis thesis deals with automatic recognition of human actions from monocular RGB-D video sequences. Our main goal is to recognize which human actions occur in unknown videos. This problem is challenging due to a number of obstacles caused by the variability of the acquisition conditions, including the lighting, the position, the orientation and the field of view of the camera, as well as the variability of actions, which can be performed differently, notably in terms of speed. To tackle these problems, we first review and evaluate the most prominent state-of-the-art techniques to identify the current state of human action recognition in videos. We then propose a new approach for skeleton-based action recognition using Deep Neural Networks (DNNs). Two key questions are addressed. First, how to efficiently represent the spatio-temporal patterns of skeletal data so as to fully exploit the capacity of Deep Convolutional Neural Networks (D-CNNs) to learn high-level representations. Second, how to design a powerful D-CNN architecture that is able to learn discriminative features from the proposed representation for the classification task. As a result, we introduce two new 3D motion representations called SPMF (Skeleton Posture-Motion Feature) and Enhanced-SPMF, which encode skeleton poses and their motions into color images (a sketch of this idea is given below). For the learning and classification tasks, we design and train different D-CNN architectures based on the Residual Network (ResNet), Inception-ResNet-v2, Densely Connected Convolutional Network (DenseNet) and Efficient Neural Architecture Search (ENAS) to extract robust features from the color-coded images and classify them. Experimental results on various public and challenging human action recognition datasets (MSR Action3D, Kinect Activity Recognition Dataset, SBU Kinect Interaction, and NTU-RGB+D) show that the proposed approach outperforms the current state of the art. We also conducted research on the problem of 3D human pose estimation from monocular RGB video sequences and exploited the estimated 3D poses for the recognition task. Specifically, a deep learning-based model called OpenPose is deployed to detect 2D human poses. A DNN is then proposed and trained to learn a 2D-to-3D mapping in order to map the detected 2D keypoints into 3D poses. Our experiments on the Human3.6M dataset verified the effectiveness of the proposed method. These results open a new research direction for human action recognition from 3D skeletal data when depth cameras fail. In addition, we collect and introduce in this thesis CEMEST, a new RGB-D dataset depicting passengers' behaviors in public transport. It consists of 203 untrimmed real-world surveillance videos of realistic "normal" and "abnormal" events. We achieve promising results on CEMEST with the support of data augmentation and transfer learning techniques. This enables the construction of real-world applications based on deep learning for enhancing public transportation management services
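A minimal sketch of the pose-to-image idea underlying SPMF-style representations: joint coordinates over time are normalized and written into an RGB array, rows indexing time and columns indexing joints. The exact joint ordering, motion encoding and enhancement steps of SPMF are simplified away.

```python
# Hedged sketch: encode a skeleton sequence as a color image for a 2D CNN.
import numpy as np

def skeleton_to_image(seq):
    """seq: (T, J, 3) array of 3D joint coordinates over T frames.
    Returns a (T, J, 3) uint8 'image': rows = time, columns = joints,
    RGB channels = normalized x, y, z coordinates."""
    mins = seq.min(axis=(0, 1), keepdims=True)
    maxs = seq.max(axis=(0, 1), keepdims=True)
    norm = (seq - mins) / (maxs - mins + 1e-8)    # scale coordinates to [0, 1]
    return (255 * norm).astype(np.uint8)          # color-coded pose-motion map

img = skeleton_to_image(np.random.rand(60, 25, 3))  # e.g. 60 frames, 25 joints
print(img.shape)  # (60, 25, 3) -> can be fed to any 2D CNN after resizing
```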
Gonthier, Nicolas. "Transfer learning of convolutional neural networks for texture synthesis and visual recognition in artistic images." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG024.
Full textIn this thesis, we study the transfer of Convolutional Neural Networks (CNN) trained on natural images to related tasks. We follow two axes: texture synthesis and visual recognition in artworks. The first one consists in synthesizing a new image given a reference sample. Most methods are based on matching the Gram matrices of ImageNet-trained CNN features (see the sketch below). We develop a multi-resolution strategy to take into account large-scale structures. This strategy can be coupled with long-range constraints, either through a Fourier frequency constraint or through the use of feature map autocorrelations. This scheme allows excellent high-resolution synthesis, especially for regular textures. We compare our methods to alternative ones with quantitative and perceptual evaluations. In a second axis, we focus on transfer learning of CNNs for artistic image classification. CNNs can be used as off-the-shelf feature extractors or fine-tuned; we illustrate the advantage of the latter solution. We then use feature visualization techniques, CNN similarity indexes and quantitative metrics to highlight some characteristics of the fine-tuning process. Another possibility is to transfer a CNN trained for object detection. We propose a simple multiple-instance method using off-the-shelf deep features and box proposals for weakly supervised object detection. At training time, only image-level annotations are needed. We experimentally show the interest of our models on six non-photorealistic datasets
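For concreteness, here is a minimal PyTorch sketch of the Gram-matrix statistics that such texture synthesis methods match; the multi-resolution strategy and long-range constraints contributed by the thesis are not shown.

```python
# Minimal sketch of the Gram-matrix loss used in CNN texture synthesis.
import torch

def gram_matrix(features):
    """features: (B, C, H, W) activations from an ImageNet-trained CNN layer."""
    b, c, h, w = features.shape
    f = features.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)  # (B, C, C) channel correlations

def texture_loss(feats_synth, feats_ref):
    # Match Gram matrices of the synthesized and reference images.
    return ((gram_matrix(feats_synth) - gram_matrix(feats_ref)) ** 2).sum()

loss = texture_loss(torch.randn(1, 64, 32, 32, requires_grad=True),
                    torch.randn(1, 64, 32, 32))
loss.backward()  # in practice one optimizes the synthesized image pixels
```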
Mallik, Mohammed Tariqul Hassan. "Electromagnetic Field Exposure Reconstruction by Artificial Intelligence." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. https://pepite-depot.univ-lille.fr/ToutIDP/EDENGSYS/2023/2023ULILN052.pdf.
Full textThe topic of exposure to electromagnetic fields has received much attention in light of the current deployment of the fifth generation (5G) cellular network. Despite this, accurately reconstructing the electromagnetic field across a region remains difficult due to a lack of sufficient data. In situ measurements are of great interest, but their viability is limited, making it difficult to fully understand the field dynamics. Despite the great interest in localized measurements, there are still untested regions that prevent them from providing a complete exposure map. The research explored reconstruction strategies from observations at certain localized sites or sensors distributed in space, using techniques based on geostatistics and Gaussian processes. In particular, recent initiatives have focused on the use of machine learning and artificial intelligence for this purpose. To overcome these problems, this work proposes new methodologies to reconstruct EMF exposure maps in a specific urban area in France. The main objective is to reconstruct exposure maps to electromagnetic waves from data collected by sensors distributed in space. We proposed two methodologies based on machine learning to estimate exposure to electromagnetic waves. In the first method, the exposure reconstruction problem is defined as an image-to-image translation task. First, the sensor data is converted into an image and the corresponding reference image is generated using a ray-tracing-based simulator. We proposed an adversarial network (cGAN) conditioned by the environment topology to estimate exposure maps using these images. The model is trained on sensor map images while an environment is given as conditional input to the cGAN model. Furthermore, electromagnetic field mapping based on the Generative Adversarial Network is compared to simple kriging (illustrated below with a Gaussian-process baseline). The results show that the proposed method produces accurate estimates and is a promising solution for exposure map reconstruction. However, producing reference data is a complex task, as it involves taking into account the number of active base stations of different technologies and operators, whose network configuration is unknown, e.g. the powers and beams used by base stations. Additionally, evaluating these maps requires time and expertise. To answer these questions, we defined the problem as a missing data imputation task. The method we propose relies on training an infinite neural network to estimate exposure to electromagnetic fields. This is a promising solution for exposure map reconstruction, which does not require large training sets. The proposed method is compared with other machine learning approaches based on UNet networks and conditional generative adversarial networks, with competitive results
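As a point of reference for the kriging comparison mentioned above, here is a minimal Gaussian-process interpolation of scattered sensor readings with scikit-learn; the coordinates, field values and kernel settings are synthetic stand-ins, not data from the thesis.

```python
# Hedged sketch: a kriging-like Gaussian-process baseline that interpolates
# an exposure map from scattered sensor readings.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
sensors = rng.uniform(0, 1000, size=(50, 2))   # 50 sensor positions (meters)
field = np.sin(sensors[:, 0] / 200) + 0.1 * rng.standard_normal(50)  # e.g. V/m

gp = GaussianProcessRegressor(kernel=RBF(200.0) + WhiteKernel(0.01))
gp.fit(sensors, field)

# Predict exposure on a dense grid to reconstruct the full map.
xs, ys = np.meshgrid(np.linspace(0, 1000, 100), np.linspace(0, 1000, 100))
grid = np.column_stack([xs.ravel(), ys.ravel()])
emf_map = gp.predict(grid).reshape(100, 100)
print(emf_map.shape)
```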
Pirovano, Antoine. "Computer-aided diagnosis methods for cervical cancer screening on liquid-based Pap smears using convolutional neural networks : design, optimization and interpretability." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT011.
Full textCervical cancer is the second most important cancer for women after breast cancer. In 2012, the number of cases exceeded 500,000 worldwide, about half of which were deadly. Until today, primary cervical cancer screening has been performed by regular visual analysis of cells, sampled by Pap smear, by cytopathologists under brightfield microscopy in pathology laboratories. In France, about 5 million cervical screenings are performed each year and about 90% lead to a negative diagnosis (i.e. no pre-cancerous changes detected). Yet, these analyses under the microscope are extremely tedious and time-consuming for cytotechnicians and can require the joint opinion of several experts. This process limits the capacity to handle this huge number of cases and to avoid the false negatives that are the main cause of treatment delay. The lack of automation and traceability of screening is thus becoming more critical as the number of cytopathologists decreases. In that respect, the integration of digital tools in pathology laboratories is becoming a real public health stake for patients and the privileged path for the improvement of these laboratories. Since 2012, deep learning methods have revolutionized the computer vision field, in particular thanks to convolutional neural networks, which have been applied successfully to a wide range of applications including biomedical imaging. Along with this, the whole-slide imaging digitization process has opened the opportunity for new efficient computer-aided diagnosis methods and tools. In this thesis, after motivating the medical needs and introducing the state-of-the-art deep learning methods for image processing and understanding, we present our contribution to the field of computer vision for cervical cancer screening in the context of liquid-based cytology. Our first contribution consists in proposing a simple regularization constraint for classification model training in the context of ordinal regression tasks (i.e. ordered classes); a sketch of this idea is given below. We demonstrate the advantage of our method on cervical cell classification using the Herlev dataset. Furthermore, we propose to rely on gradient-based explanations to perform weakly-supervised localization and detection of abnormalities. Finally, we show how we integrate these methods into a computer-aided tool that could be used to reduce the workload of cytopathologists. The second contribution focuses on whole-slide classification and the interpretability of these pipelines. We present in detail the most popular approaches for whole-slide classification relying on multiple instance learning, and improve interpretability in a context of weakly-supervised learning through tile-level feature visualizations and a novel manner of computing explanations of heat-maps. Finally, we apply these methods to cervical cancer screening by using a weakly trained "abnormality" detector for region-of-interest sampling that guides the training
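Below is a hedged sketch of an ordinal-regression-aware training loss: cross-entropy plus a regularizer that penalizes probability mass placed far from the true grade. It illustrates the general idea of a regularization constraint for ordered classes; it is not claimed to be the exact constraint proposed in the thesis.

```python
# Hedged sketch: an illustrative ordinal regularizer for ordered classes.
import torch
import torch.nn.functional as F

def ordinal_loss(logits, target, lam=1.0):
    """logits: (B, K) over K ordered classes (e.g. cell abnormality grades);
    target: (B,) integer grades."""
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    grades = torch.arange(logits.size(1), dtype=probs.dtype)
    # Penalize probability mass placed on grades far from the true grade.
    dist = (grades.unsqueeze(0) - target.unsqueeze(1).to(probs.dtype)).abs()
    reg = (probs * dist).sum(dim=1).mean()
    return ce + lam * reg

loss = ordinal_loss(torch.randn(8, 4, requires_grad=True),
                    torch.randint(0, 4, (8,)))
loss.backward()
```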
Lecomte-Denis, François. "Amélioration des procédures guidées par fluoroscopie à l'aide d'un réseau de neurones pour le recalage déformable des organes." Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAD062.
Full textIn fluoroscopy-guided interventions, the lack of contrast prevents direct visualization of essential anatomical structures. Existing solutions have significant drawbacks: the use of CBCT increases radiation exposure, while contrast agents present toxicity risks for patients. Fluoroscopy to CT registration has the potential to alleviate these issues, but existing literature has primarily focused on respiratory motion compensation. Yet, during interventions, clinicians' actions on organs are an additional source of deformation, rendering these registration approaches ineffective. To address these challenges, we present a real-time 2D-3D deformable registration method tailored to fluoroscopy-guided interventions. Our proposed deep learning approach seamlessly integrates into existing clinical workflows, with minimal training time after preoperative CT scan acquisition. Thanks to our novel domain-agnostic data generation framework, the trained neural network can recover arbitrary deformations, leveraging pose information through its 2D-3D feature backprojection module. Experiments on simulated fluoroscopic images demonstrated our method's ability to provide real-time vessel visualization without contrast agents. On real fluoroscopic images, our method compensates for respiratory motion with a median accuracy of 2.4 mm. These results demonstrate the potential of the proposed method, establishing a foundation for future developments while motivating more comprehensive clinical validation
Abidi, Azza. "Investigating Deep Learning and Image-Encoded Time Series Approaches for Multi-Scale Remote Sensing Analysis in the context of Land Use/Land Cover Mapping." Electronic Thesis or Diss., Université de Montpellier (2022-....), 2024. http://www.theses.fr/2024UMONS007.
Full textIn this thesis, the potential of machine learning (ML) for enhancing the mapping of complex Land Use and Land Cover (LULC) patterns using Earth Observation data is explored. Traditionally, mapping methods relied on manual, time-consuming classification and interpretation of satellite images, which are susceptible to human error. However, the application of ML, particularly through neural networks, has automated and improved the classification process, resulting in more objective and accurate results. Additionally, the integration of Satellite Image Time Series (SITS) data adds a temporal dimension to spatial information, offering a dynamic view of the Earth's surface over time. This temporal information is crucial for accurate classification and informed decision-making in various applications. The precise and current LULC information derived from SITS data is essential for guiding sustainable development initiatives, resource management, and mitigating environmental risks. The LULC mapping process using ML involves data collection, preprocessing, feature extraction, and classification using various ML algorithms. Two main classification strategies for SITS data have been proposed: pixel-level and object-based approaches. While both approaches have shown effectiveness, they also pose challenges, such as the inability to capture contextual information in pixel-based approaches and the complexity of segmentation in object-based approaches. To address these challenges, this thesis implements a method based on multi-scale information to perform LULC classification, coupling spectral and temporal information through a combined pixel-object methodology, and applies a methodological approach to efficiently represent multivariate SITS data (for example as images, as sketched below) with the aim of reusing the large body of research advances proposed in the field of computer vision
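One common way to encode a pixel's time series as an image, shown here purely as an illustration (the thesis' own encoding may differ), is the Gramian Angular Summation Field:

```python
# Illustrative sketch: Gramian Angular Summation Field (GASF) encoding of a
# time series; one of several possible image encodings, not necessarily the
# one adopted in the thesis.
import numpy as np

def gasf(series):
    """series: (T,) pixel time series (e.g. a vegetation index over a season)."""
    x = series.astype(float)
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))                       # polar encoding
    return np.cos(phi[:, None] + phi[None, :])               # (T, T) image

img = gasf(np.random.rand(24))  # e.g. 24 acquisition dates
print(img.shape)                # (24, 24), usable as CNN input
```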
Etienne, Caroline. "Apprentissage profond appliqué à la reconnaissance des émotions dans la voix." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS517.
Full textThis thesis deals with the application of artificial intelligence to the automatic classification of audio sequences according to the emotional state of the customer during a commercial phone call. The goal is to improve on existing data preprocessing and machine learning models, and to suggest a model that is as efficient as possible on the reference IEMOCAP audio dataset. We draw from previous work on deep neural networks for automatic speech recognition and extend it to the speech emotion recognition task. We are therefore interested in end-to-end neural architectures that perform the classification task, including an autonomous extraction of acoustic features from the audio signal. Traditionally, the audio signal is preprocessed using paralinguistic features, as part of an expert approach. We choose a naive approach for data preprocessing that does not rely on specialized paralinguistic knowledge, and compare it with the expert approach. In this approach, the raw audio signal is transformed into a time-frequency spectrogram using a short-term Fourier transform (sketched below). In order to apply a neural network to a prediction task, a number of aspects need to be considered. On the one hand, the best possible hyperparameters must be identified. On the other hand, biases present in the database should be minimized (non-discrimination), for example by adding data and taking into account the characteristics of the chosen dataset. We study these aspects in order to develop an end-to-end neural architecture that combines convolutional layers, specialized in modeling visual information, with recurrent layers, specialized in modeling temporal information. We propose a deep supervised learning model, competitive with the current state of the art when trained on the IEMOCAP dataset, justifying its use for the rest of the experiments. This classification model consists of a four-layer convolutional neural network and a bidirectional long short-term memory recurrent neural network (BLSTM). Our model is evaluated on two English audio databases proposed by the scientific community: IEMOCAP and MSP-IMPROV. A first contribution is to show that, with a deep neural network, we obtain high performance on IEMOCAP, and that the results are promising on MSP-IMPROV. Another contribution of this thesis is a comparative study of the output values of the layers of the convolutional module and the recurrent module according to the data preprocessing method used: spectrograms (naive approach) or paralinguistic indices (expert approach). We analyze the data according to their emotion class using the Euclidean distance, a deterministic proximity measure. We try to understand the characteristics of the emotional information extracted autonomously by the network. The idea is to contribute to research focused on the understanding of deep neural networks used in speech emotion recognition and to bring more transparency and explainability to these systems, whose decision-making mechanism is still largely misunderstood
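A minimal sketch of the naive preprocessing path described above, turning a raw waveform into a log-magnitude spectrogram with a short-term Fourier transform; the window parameters are illustrative, not those of the thesis.

```python
# Hedged sketch: raw waveform -> time-frequency spectrogram via STFT.
import torch

waveform = torch.randn(16000)   # 1 s of audio at 16 kHz (stand-in signal)
spec = torch.stft(waveform, n_fft=512, hop_length=128,
                  window=torch.hann_window(512), return_complex=True)
log_mag = torch.log1p(spec.abs())   # log-magnitude spectrogram
print(log_mag.shape)                # (257, frames) -> input to the CNN layers
```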
Boukhtache, Seyfeddine. "Système de traitement d’images temps réel dédié à la mesure de champs denses de déplacements et de déformations." Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAC054.
Full textThis PhD thesis was carried out in a multidisciplinary context. It deals with the challenge of combining real-time and metrological performance in digital image processing, which is particularly relevant in photomechanics: a recent field of activity that consists in developing and using systems for measuring whole fields of small displacements and small deformations of solids subjected to thermomechanical loading. The technique targeted in this PhD thesis is Digital Image Correlation (DIC), which is the most popular measuring technique in this community. However, it has some limitations, the main ones being computational cost and metrological performance, which should be improved to reach that of classic pointwise measuring sensors such as strain gauges. In order to address this challenge, this work relies on two main studies. The first one consists in optimizing the interpolation process, because this is the most expensive step in DIC. Acceleration is achieved through a parallel hardware implementation on FPGA, taking into consideration the consumption of hardware resources as well as accuracy. The main conclusion of this study is that a single FPGA (with current technology) is not sufficient to implement the entire DIC algorithm. Thus, a second study was proposed. It is based on the use of convolutional neural networks (CNNs) in an attempt to achieve both better metrological performance than DIC and real-time processing. This second study shows the relevance of using CNNs for measuring displacement and deformation fields. It opens new perspectives in terms of metrological performance and speed of full-field measuring systems
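For context, the core matching criterion of DIC is typically a zero-normalized cross-correlation between a reference subset and a deformed subset; a minimal NumPy sketch follows (the FPGA and CNN pipelines studied in the thesis are of course not shown).

```python
# Hedged sketch: zero-normalized cross-correlation (ZNCC), the usual DIC
# similarity measure between two image subsets.
import numpy as np

def zncc(ref, cur):
    """ref, cur: equal-size image subsets (2D arrays)."""
    r = ref - ref.mean()
    c = cur - cur.mean()
    return (r * c).sum() / (np.sqrt((r * r).sum() * (c * c).sum()) + 1e-12)

ref = np.random.rand(21, 21)   # a 21x21 reference subset
print(zncc(ref, ref))          # 1.0: perfect match with itself
```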
Oyallon, Edouard. "Analyzing and introducing structures in deep convolutional neural networks." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE060.
Full textThis thesis studies empirical properties of deep convolutional neural networks, and in particular the Scattering Transform. Indeed, the theoretical analysis of the latter is hard and until now remains a challenge: successive layers of neurons have the ability to produce complex computations, whose nature is still unknown, thanks to learning algorithms whose convergence guarantees are not well understood. However, those neural networks are outstanding tools to tackle a wide variety of difficult tasks, like image classification or, more formally, statistical prediction. The Scattering Transform is a non-linear mathematical operator whose properties are inspired by convolutional networks. In this work, we apply it to natural images and obtain competitive accuracies with unsupervised architectures. Cascading a supervised neural network after the Scattering Transform makes it possible to compete on ImageNet2012, which is the largest dataset of labeled images available. An efficient GPU implementation is provided. Then, this thesis focuses on the properties of layers of neurons at various depths. We show that a progressive dimensionality reduction occurs, and we study the numerical properties of the supervised classification when we vary the hyperparameters of the network. Finally, we introduce a new class of convolutional networks, whose linear operators are structured by the symmetry groups of the classification task
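A minimal sketch of computing scattering coefficients with the kymatio package, a public implementation of the 2D Scattering Transform (used here for illustration; it is not claimed to be the thesis code):

```python
# Hedged sketch: 2D scattering coefficients via the kymatio package.
import torch
from kymatio.torch import Scattering2D

scattering = Scattering2D(J=2, shape=(32, 32))  # 2 scales on 32x32 images
x = torch.randn(1, 3, 32, 32)                   # a batch of RGB images
Sx = scattering(x)                              # (batch, channels, coeffs, 8, 8)
print(Sx.shape)  # these features can feed a linear or convolutional classifier
```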
Caye, Daudt Rodrigo. "Convolutional neural networks for change analysis in earth observation images with noisy labels and domain shifts." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT033.
Full textThe analysis of satellite and aerial Earth observation images allows us to obtain precise information over large areas. A multitemporal analysis of such images is necessary to understand the evolution of such areas. In this thesis, convolutional neural networks are used to detect and understand changes using remote sensing images from various sources in supervised and weakly supervised settings. Siamese architectures are used to compare coregistered image pairs and to identify changed pixels. The proposed method is then extended into a multitask network architecture that is used to detect changes and perform land cover mapping simultaneously, which permits a semantic understanding of the detected changes. Then, classification filtering and a novel guided anisotropic diffusion algorithm are used to reduce the effect of biased label noise, which is a concern for automatically generated large-scale datasets. Weakly supervised learning is also achieved to perform pixel-level change detection using only image-level supervision through the usage of class activation maps and a novel spatial attention layer. Finally, a domain adaptation method based on adversarial training is proposed, which succeeds in projecting images from different domains into a common latent space where a given task can be performed. This method is tested not only for domain adaptation for change detection, but also for image classification and semantic segmentation, which proves its versatility
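As an illustration of the Siamese comparison described above, here is a minimal sketch of a weight-shared fully convolutional comparator producing a per-pixel change map; it is far smaller than the architectures studied in the thesis.

```python
# Hedged sketch: a tiny Siamese fully convolutional network for change
# detection between two coregistered images.
import torch
import torch.nn as nn

class SiameseChange(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared-weights branch
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 1)               # change/no-change logit

    def forward(self, img_t1, img_t2):
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        return self.head(torch.cat([f1, f2], dim=1))  # per-pixel change map

model = SiameseChange()
change_logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
print(change_logits.shape)  # (1, 1, 64, 64)
```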
Chen, Yifu. "Deep learning for visual semantic segmentation." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS200.
Full textIn this thesis, we are interested in visual semantic segmentation, one of the high-level tasks that paves the way towards complete scene understanding. Specifically, it requires a semantic understanding at the pixel level. With the success of deep learning in recent years, semantic segmentation problems are being tackled using deep architectures. In the first part, we focus on the construction of a more appropriate loss function for semantic segmentation. More precisely, we define a novel loss function that employs a semantic edge detection network. This loss constrains pixel-level predictions to be consistent with the ground-truth semantic edge information, and thus leads to better-shaped segmentation results (an illustrative edge-consistency penalty is sketched below). In the second part, we address another important issue, namely, alleviating the need for training segmentation models with large amounts of fully annotated data. We propose a novel attribution method that identifies the most significant regions in an image considered by classification networks. We then integrate our attribution method into a weakly supervised segmentation framework. The semantic segmentation models can thus be trained with only image-level labeled data, which can be easily collected in large quantities. All models proposed in this thesis are thoroughly evaluated experimentally on multiple datasets, and the results are competitive with the literature
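To make the edge-consistency idea concrete, the hedged sketch below penalizes disagreement between spatial gradients of the predicted mask and ground-truth edges using a fixed Sobel filter; the thesis instead employs a learned semantic edge detection network.

```python
# Hedged sketch: an edge-aware segmentation penalty with a fixed Sobel filter.
import torch
import torch.nn.functional as F

def edge_consistency(pred_probs, gt_edges):
    """pred_probs: (B, 1, H, W) foreground probabilities; gt_edges: same shape."""
    kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
    gx = F.conv2d(pred_probs, kx, padding=1)                  # horizontal grad
    gy = F.conv2d(pred_probs, kx.transpose(2, 3), padding=1)  # vertical grad
    pred_edges = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)
    return F.l1_loss(pred_edges / (pred_edges.max() + 1e-8), gt_edges)

loss = edge_consistency(torch.rand(1, 1, 32, 32, requires_grad=True),
                        torch.rand(1, 1, 32, 32))
loss.backward()
```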
Mabon, Jules. "Apprentissage de modèles de géométrie stochastique et réseaux de neurones convolutifs. Application à la détection d'objets multiples dans des jeux de données aérospatiales." Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4116.
Full textUnmanned aerial vehicles and low-orbit satellites, including CubeSats, are increasingly used for wide-area surveillance, generating substantial data for processing. Satellite imagery acquisition is susceptible to atmospheric disruptions, occlusions, and limited resolution, resulting in limited visual data for small object detection. However, the objects of interest (e.g., small vehicles) are unevenly distributed in the image: there are some priors on the structure of the configurations. In recent years, convolutional neural network (CNN) models have excelled at extracting information from images, especially texture details. Yet, modeling object interactions requires a significant increase in model complexity and parameters, and CNN models generally treat interactions as a post-processing step. In contrast, point processes aim to simultaneously model each point's likelihood in relation to the image (data term) and the points' interactions (prior term). Most point process models rely on contrast measures (foreground vs. background) for their data terms, which work well with clearly contrasted objects and minimal background clutter. However, small vehicles in satellite images exhibit varying contrast levels and a diverse range of background and false alarm objects. In this PhD thesis, we propose harnessing the information-extraction abilities of CNN models in combination with point process interaction models, using CNN outputs as data terms. Additionally, we introduce a unified method for estimating point process model parameters. Our model demonstrates excellent performance on multiple remote sensing datasets, providing geometric regularization and enhanced noise robustness, all with a minimal parameter footprint
Chen, Dexiong. "Modélisation de données structurées avec des machines profondes à noyaux et des applications en biologie computationnelle." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM070.
Full textDeveloping efficient algorithms to learn appropriate representations of structured data, including sequences or graphs, is a major and central challenge in machine learning. To this end, deep learning has become popular for structured data modeling. Deep neural networks have drawn particular attention in various scientific fields such as computer vision, natural language understanding or biology. For instance, they provide computational tools for biologists to understand and uncover biological properties or relationships among macromolecules within living organisms. However, most of the success of deep learning methods in these fields essentially relies on the guidance of empirical insights as well as huge amounts of annotated data. Exploiting more data-efficient models is necessary as labeled data is often scarce. Another line of research is kernel methods, which provide a systematic and principled approach for learning non-linear models from data of arbitrary structure. In addition to their simplicity, they exhibit a natural way to control regularization and thus to avoid overfitting. However, the data representations provided by traditional kernel methods are only defined by simple hand-crafted features, which makes them perform worse than neural networks when enough labeled data is available. More complex kernels, inspired by prior knowledge used in neural networks, have thus been developed to build richer representations and bridge this gap. Yet, they are less scalable. By contrast, neural networks are able to learn a compact representation for a specific learning task, which allows them to retain the expressivity of the representation while scaling to large sample sizes. Incorporating complementary views of kernel methods and deep neural networks to build new frameworks is therefore useful to benefit from both worlds. In this thesis, we build a general kernel-based framework for modeling structured data by leveraging prior knowledge from classical kernel methods and deep networks. Our framework provides efficient algorithmic tools for learning representations without annotations as well as for learning more compact representations in a task-driven way. It can be used to efficiently model sequences and graphs with simple interpretation of predictions. It also offers new insights about designing more expressive kernels and neural networks for sequences and graphs
Haj, Hassan Hawraa. "Détection et classification temps réel de biocellules anormales par technique de segmentation d’images." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0043.
Full textThe development of methods to assist diagnosis through real-time detection of abnormal cells (which can be considered cancer cells) in bio-images is one of the most important research directions in information science and technology. Our work is concerned with developing automatic reading procedures for normal and abnormal tissue bio-images. The first step of our work is therefore to detect a certain type of abnormal bio-image, associated with several types of cancer evolution, within a microscopic multispectral image, i.e., an image acquired at many wavelengths. We use a new segmentation method that adapts itself iteratively to localize and cover the real cell contour; it is based on color intensity and can be applied to sequences of objects in the image. This work presents a classification of abnormal tissues using a convolutional neural network (CNN), applied to the microscopic images segmented with the snake method, which gives high-performance results compared to the other segmentation methods. This classification method reaches high performance values: 100% accuracy for training and 99.168% for testing. It was compared with approaches from different papers using different feature extraction techniques, and proved its high performance with respect to these methods. As future work, we aim to validate our approach on larger datasets, to explore different CNN architectures, and to optimize the hyperparameters in order to increase performance; the approach will also be applied to relevant medical imaging tasks, including computer-aided diagnosis
Khlif, Wafa. "Multi-lingual scene text detection based on convolutional neural networks." Thesis, La Rochelle, 2022. http://www.theses.fr/2022LAROS022.
Full textThis dissertation explores text detection approaches based on deep learning techniques, towards the goal of mining and retrieving weakly structured content in scene images. First, it presents a method for detecting text in scene images based on multi-level connected component (CC) analysis and on learning text component features via convolutional neural networks (CNN), followed by a graph-based grouping of overlapping text boxes. The features of the resulting raw text/non-text components of different granularity levels are learned via a CNN. The second contribution is inspired by the YOLO real-time object detection system. Both methods perform text detection and script identification simultaneously. The system presents a joint text detection and script identification approach based on casting the multi-script text detection task as an object detection problem, where the object is the script of the text. The joint text detection and script identification strategy is realized in a holistic approach using a single convolutional neural network, where the input data is the full image and the outputs are the text bounding boxes and their script. Textual feature extraction and script classification are performed jointly via a CNN. The experimental evaluation of these methods is performed on the Multi-Lingual Text (MLT) dataset. We contributed to building this new dataset. It consists of natural scene images with embedded text, such as street signs, advertisement boards, passing vehicles, and user photos from microblogs. Images with embedded text are among the most frequently encountered image types on the internet, particularly in social media
Tong, Zheng. "Evidential deep neural network in the framework of Dempster-Shafer theory." Thesis, Compiègne, 2022. http://www.theses.fr/2022COMP2661.
Full textDeep neural networks (DNNs) have achieved remarkable success in many real-world applications (e.g., pattern recognition and semantic segmentation) but still face the problem of managing uncertainty. Dempster-Shafer theory (DST) provides a well-founded and elegant framework to represent and reason with uncertain information. In this thesis, we propose a new framework using DST and DNNs to address problems of uncertainty. In the proposed framework, we first hybridize DST and DNNs by plugging a DST-based neural-network layer, followed by a utility layer, at the output of a convolutional neural network for set-valued classification. We also extend the idea to semantic segmentation by combining fully convolutional networks and DST. The proposed approach enhances the performance of DNN models by assigning ambiguous patterns with high uncertainty, as well as outliers, to multi-class sets. The learning strategy using soft labels further improves the performance of the DNNs by converting imprecise and unreliable label data into belief functions. We have also proposed a modular fusion strategy using this framework, in which a fusion module aggregates the belief-function outputs of evidential DNNs by Dempster's rule (sketched below). We use this strategy to combine DNNs trained on heterogeneous datasets with different sets of classes while keeping performance at least as good as that of the individual networks on their respective datasets. Further, we apply the strategy to combine several shallow networks and achieve performance similar to that of an advanced DNN for a complicated task
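For reference, here is a minimal implementation of Dempster's rule of combination for two mass functions, the kind of aggregation performed by the fusion module described above; the frame and mass values are toy examples.

```python
# Minimal sketch: Dempster's rule of combination for two mass functions,
# with focal elements given as frozensets of class labels.
import itertools

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset focal elements to masses summing to 1."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in itertools.product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass assigned to the empty set
    # Normalize by the non-conflicting mass (Dempster's normalization).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

frame = frozenset({"cat", "dog"})
m1 = {frozenset({"cat"}): 0.6, frame: 0.4}   # evidence from network 1
m2 = {frozenset({"dog"}): 0.3, frame: 0.7}   # evidence from network 2
print(dempster_combine(m1, m2))
```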
Christoffel, Quentin. "Apprentissage de représentation différenciées dans des modèles d’apprentissage profond : détection de classes inconnues et interprétabilité." Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAD027.
Full textDeep learning, and particularly convolutional neural networks, has revolutionized numerous fields such as computer vision. However, these models remain limited when encountering data from unknown classes (never seen during training) and often suffer from a lack of interpretability. We proposed a method aimed at directly optimizing the representation space learned by the model. Each dimension of the representation is associated with a known class. A dimension is activated with a specific value when the model faces the associated class, meaning that certain features have been detected in the image. This allows the model to detect unknown data through their representation, which is distinct from that of known data, as they should not share the same features. Our approach also promotes semantic relationships within the representation space by allocating a subspace to each known class. Moreover, a degree of interpretability is achieved by analyzing the activated dimensions for a given image, enabling an understanding of which features of which class are detected. This thesis details the development and evaluation of our method across multiple iterations, each aimed at improving performance and addressing limitations identified through interpretability, such as the correlation of extracted features. The results obtained on an unknown-class detection benchmark show a notable improvement in performance across our versions, although they remain below the state of the art
Mlynarski, Pawel. "Apprentissage profond pour la segmentation des tumeurs cérébrales et des organes à risque en radiothérapie." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4084.
Full textMedical images play an important role in cancer diagnosis and treatment. Oncologists analyze images to determine the different characteristics of the cancer, to plan the therapy and to observe the evolution of the disease. The objective of this thesis is to propose efficient methods for automatic segmentation of brain tumors and organs at risk in the context of radiotherapy planning, using Magnetic Resonance (MR) images. First, we focus on segmentation of brain tumors using Convolutional Neural Networks (CNN) trained on MRIs manually segmented by experts. We propose a segmentation model having a large 3D receptive field while being efficient in terms of computational complexity, based on a combination of 2D and 3D CNNs. We also address problems related to the joint use of several MRI sequences (T1, T2, FLAIR). Second, we introduce a segmentation model which is trained using weakly-annotated images in addition to fully-annotated images (with voxelwise labels), which are usually available in very limited quantities due to their cost. We show that this mixed level of supervision considerably improves the segmentation accuracy when the number of fully-annotated images is limited. Finally, we propose a methodology for an anatomy-consistent segmentation of organs at risk in the context of radiotherapy of brain tumors. The segmentations produced by our system on a set of MRIs acquired in the Centre Antoine Lacassagne (Nice, France) are evaluated by an experienced radiotherapist
Nguyen, Thanh Hai. "Some contributions to deep learning for metagenomics." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS102.
Full textMetagenomic data from the human microbiome is a novel source of data for improving diagnosis and prognosis in human diseases. However, making predictions based on individual bacterial abundances is a challenge, since the number of features is much larger than the number of samples. Hence, we face difficulties related to high-dimensional data processing, as well as to the high complexity of heterogeneous data. Machine learning has obtained great achievements on important metagenomics problems linked to OTU clustering, binning, taxonomic assignment, etc. The contribution of this PhD thesis is two-fold: 1) a feature selection framework for efficient heterogeneous biomedical signature extraction, and 2) a novel deep learning approach for predicting diseases using artificial image representations. The first contribution is an efficient feature selection approach based on the visualization capabilities of Self-Organizing Maps for heterogeneous data fusion. The framework is efficient on a real and heterogeneous dataset containing metadata, genes of adipose tissue, and gut flora metagenomic data, with a reasonable classification accuracy compared to the state-of-the-art methods. The second contribution is a method to visualize metagenomic data using a simple fill-up method (sketched below), as well as various state-of-the-art dimensionality reduction approaches. The new metagenomic data representations can be considered as synthetic images and used as a novel dataset for an efficient deep learning method such as Convolutional Neural Networks. The results show that the proposed methods either achieve state-of-the-art predictive performance or outperform it on rich public metagenomic benchmarks
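A minimal sketch of a fill-up style encoding, under the assumption that it simply lays the abundance vector out row by row into a square image (padding the remainder with zeros):

```python
# Hedged sketch: turn a species-abundance vector into a square "image"
# so a 2D CNN can process it.
import numpy as np

def fill_up(abundances):
    """abundances: (N,) vector of per-taxon abundances."""
    side = int(np.ceil(np.sqrt(len(abundances))))
    img = np.zeros(side * side)
    img[:len(abundances)] = abundances   # remaining cells stay at zero
    return img.reshape(side, side)

img = fill_up(np.random.rand(568))       # e.g. 568 bacterial taxa
print(img.shape)                         # (24, 24) synthetic image
```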