Academic literature on the topic 'Réseaux neuronaux convolutifs (CNN)'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Réseaux neuronaux convolutifs (CNN).'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Réseaux neuronaux convolutifs (CNN)"
Benyamna, Y., E. Ouiame, C. Zineb, and S. Gallouj. "Performance des réseaux neuronaux convolutifs d’apprentissage profond dans la différenciation entre nævus et mélanome cutané." Annales de Dermatologie et de Vénéréologie - FMC 3, no. 8 (December 2023): A263–A264. http://dx.doi.org/10.1016/j.fander.2023.09.480.
Jovanović, S., and S. Weber. "Modélisation et accélération de réseaux de neurones profonds (CNN) en Python/VHDL/C++ et leur vérification et test à l’aide de l’environnement Pynq sur les FPGA Xilinx." J3eA 21 (2022): 1028. http://dx.doi.org/10.1051/j3ea/20220028.
Dissertations / Theses on the topic "Réseaux neuronaux convolutifs (CNN)"
Fernandez Brillet, Lucas. "Réseaux de neurones CNN pour la vision embarquée." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM043.
Recently, Convolutional Neural Networks have become the state-of-the-art (SOA) solution to most computer vision problems. In order to achieve high accuracy rates, CNNs require a high parameter count as well as a high number of operations. This greatly complicates the deployment of such solutions in embedded systems, which strive to reduce memory size. Indeed, while most embedded systems typically offer a few KBytes of memory, CNN models from the SOA usually account for multiple MBytes, or even GBytes, in model size. Throughout this thesis, multiple novel ideas to ease this issue are proposed. This requires jointly designing the solution across three main axes: application, algorithm, and hardware. In this manuscript, the main levers for tailoring the computational complexity of a generic CNN-based object detector are identified and studied. Since object detection requires scanning every possible location and scale across an image through a fixed-input CNN classifier, the number of operations quickly grows for high-resolution images. To perform object detection efficiently, the detection process is divided into two stages. The first stage involves a region proposal network, which allows trading off recall against both the number of operations required to perform the search and the number of regions passed on to the next stage. Techniques such as bounding box regression also greatly help reduce the dimension of the search space. This in turn simplifies the second stage, since it reduces the task's complexity to the set of possible proposals; parameter counts can therefore be greatly reduced. Furthermore, CNNs exhibit properties that confirm their over-dimensioning. This over-dimensioning is one of the key success factors of CNNs in practice, since it eases the optimization process by allowing a large set of equivalent solutions. However, it also greatly increases computational complexity, and therefore complicates deploying the inference stage of these algorithms on embedded systems. To ease this problem, we propose a CNN compression method based on Principal Component Analysis (PCA). PCA allows finding, for each layer of the network independently, a new representation of the set of learned filters by expressing them in a more appropriate PCA basis. This basis is hierarchical, meaning that its terms are ordered by importance, and by removing the least important terms it is possible to optimally trade off approximation error against parameter count. Through this method it is possible to compress, for example, a ResNet-32 network by a factor of ×2 in both parameters and operations with an accuracy loss below 2%. The proposed method is also shown to be compatible with other SOA methods that exploit other CNN properties to reduce computational complexity, mainly pruning, Winograd convolutions, and quantization. Through this method, we were able to reduce the size of a ResNet-110 from 6.88 MBytes to 370 kBytes, i.e., a ×19 memory gain, with a 3.9% accuracy loss. All this knowledge is applied to build an efficient CNN-based solution for a consumer face detection scenario. The proposed solution has a model size of just 29.3 kBytes, ×65 smaller than other SOA CNN face detectors, while providing equal detection performance and a lower number of operations.
Our face detector is also compared to a more traditional Viola-Jones face detector, exhibiting approximately an order of magnitude faster computation, as well as the ability to scale to higher detection rates by slightly increasing computational complexity. Both networks are finally implemented in a custom embedded multiprocessor, verifying that theoretical and measured gains from PCA are consistent. Furthermore, parallelizing the PCA-compressed network over 8 PEs achieves a ×11.68 speed-up with respect to the original network running on a single PE.
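To make the per-layer PCA compression idea concrete, here is a minimal NumPy sketch of the approach the abstract describes: each layer's filters are flattened, projected onto their top principal components, and stored as coefficients plus a truncated basis. The layer shape, the rank r, and the storage accounting are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def pca_compress_filters(W, r):
    """Approximate a conv layer's filters with the top-r PCA basis terms.

    W: array of shape (out_channels, in_channels, k, k) -- the learned filters.
    r: number of principal components to keep.
    Returns the rank-r reconstruction and the compressed parameter count.
    """
    out_c = W.shape[0]
    F = W.reshape(out_c, -1)              # one flattened filter per row
    mean = F.mean(axis=0)
    Fc = F - mean
    # SVD yields the PCA basis, ordered by decreasing importance
    U, S, Vt = np.linalg.svd(Fc, full_matrices=False)
    coeffs = U[:, :r] * S[:r]             # coordinates of each filter in the basis
    basis = Vt[:r]                        # top-r basis filters
    W_approx = (coeffs @ basis + mean).reshape(W.shape)
    # storage: r coefficients per filter + r basis vectors + the mean
    n_params = coeffs.size + basis.size + mean.size
    return W_approx, n_params

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32, 3, 3))   # toy layer
W_hat, n = pca_compress_filters(W, r=16)
print(f"original {W.size} params, compressed {n}, "
      f"rel. error {np.linalg.norm(W - W_hat) / np.linalg.norm(W):.3f}")
```

Dropping the least important basis terms (lowering r) trades approximation error for parameter count, which is the hierarchical property the abstract refers to.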
Deramgozin, Mohammadmahdi. "Développement de modèles de reconnaissance des expressions faciales à base d’apprentissage profond pour les applications embarquées." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0286.
The field of Facial Emotion Recognition (FER) is pivotal in advancing human-machine interactions and finds essential applications in healthcare for conditions like depression and anxiety. Leveraging Convolutional Neural Networks (CNNs), this thesis presents a progression of models aimed at optimizing emotion detection and interpretation. The initial model is resource-frugal but competes favorably with state-of-the-art solutions, making it a strong candidate for embedded systems with constrained computational and memory resources. To capture the complexity and ambiguity of human emotions, the research work presented in this thesis enhances this CNN-based foundational model by incorporating facial Action Units (AUs). This approach not only refines emotion detection but also provides interpretability by identifying the specific AUs tied to each emotion. Further sophistication is achieved by introducing neural attention mechanisms, both spatial and channel-based, improving the model's focus on salient facial features. This makes the CNN-based model well adapted to real-world scenarios, such as partially obscured or subtle facial expressions. Building on these results, this thesis finally proposes an optimized, yet computationally efficient, CNN model that is ideal for resource-limited environments like embedded systems. While it provides a robust solution for FER, this research also identifies perspectives for future work, such as real-time applications and advanced techniques for model interpretability.
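As an illustration of the channel attention mentioned above, here is a generic squeeze-and-excitation style block in PyTorch. It is a sketch of the general technique only; the channel count and reduction ratio are placeholder assumptions, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: the network learns
    to reweight feature channels so salient facial cues dominate."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))               # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                         # excite: rescale each channel

feat = torch.randn(4, 64, 12, 12)            # toy CNN feature map
print(ChannelAttention(64)(feat).shape)      # torch.Size([4, 64, 12, 12])
```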
Abidi, Azza. "Investigating Deep Learning and Image-Encoded Time Series Approaches for Multi-Scale Remote Sensing Analysis in the context of Land Use/Land Cover Mapping." Electronic Thesis or Diss., Université de Montpellier (2022-....), 2024. http://www.theses.fr/2024UMONS007.
In this thesis, the potential of machine learning (ML) for enhancing the mapping of complex Land Use and Land Cover (LULC) patterns using Earth Observation data is explored. Traditionally, mapping methods relied on the manual, time-consuming classification and interpretation of satellite images, which is susceptible to human error. However, the application of ML, particularly through neural networks, has automated and improved the classification process, resulting in more objective and accurate results. Additionally, the integration of Satellite Image Time Series (SITS) data adds a temporal dimension to spatial information, offering a dynamic view of the Earth's surface over time. This temporal information is crucial for accurate classification and informed decision-making in various applications. The precise and current LULC information derived from SITS data is essential for guiding sustainable development initiatives, resource management, and the mitigation of environmental risks. The LULC mapping process using ML involves data collection, preprocessing, feature extraction, and classification using various ML algorithms. Two main classification strategies for SITS data have been proposed: pixel-level and object-based approaches. While both approaches have shown effectiveness, they also pose challenges, such as the inability to capture contextual information in pixel-based approaches and the complexity of segmentation in object-based approaches. To address these challenges, this thesis implements a multi-scale method for LULC classification that couples spectral and temporal information through a combined pixel-object methodology, and applies a methodological approach to efficiently represent multivariate SITS data so that the large body of research advances from computer vision can be reused.
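Image-encoded time series, as in the title above, generally means transforming each pixel's temporal profile into a 2-D matrix so that standard image CNNs can consume it. A common choice is the Gramian Angular Summation Field; the NumPy sketch below shows that encoding, with a toy one-year vegetation-index profile as an illustrative assumption (the thesis's exact encoding is not specified here).

```python
import numpy as np

def gramian_angular_field(series):
    """Encode a 1-D time series as a 2-D image (Gramian Angular Summation
    Field), so standard 2-D CNNs can process temporal data."""
    x = np.asarray(series, dtype=float)
    # rescale to [-1, 1] so arccos is defined
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1, 1))       # angular representation
    return np.cos(phi[:, None] + phi[None, :])   # GASF: cos(phi_i + phi_j)

ndvi = np.sin(np.linspace(0, 2 * np.pi, 24))     # toy one-year pixel profile
img = gramian_angular_field(ndvi)
print(img.shape)                                  # (24, 24)
```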
Antipov, Grigory. "Apprentissage profond pour la description sémantique des traits visuels humains." Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0071.
The recent progress in artificial neural networks (rebranded as deep learning) has significantly boosted the state of the art in numerous domains of computer vision. In this PhD study, we explore how deep learning techniques can help in the analysis of gender and age from a human face. In particular, two complementary problem settings are considered: (1) gender/age prediction from given face images, and (2) synthesis and editing of human faces with the required gender/age attributes. Firstly, we conduct a comprehensive study which results in an empirical formulation of a set of principles for the optimal design and training of gender recognition and age estimation Convolutional Neural Networks (CNNs). As a result, we obtain state-of-the-art CNNs for gender/age prediction according to the three most popular benchmarks, and win an international competition on apparent age estimation. On a very challenging internal dataset, our best models reach 98.7% gender classification accuracy and an average age estimation error of 4.26 years. To address the problem of synthesis and editing of human faces, we design and train GA-cGAN, the first Generative Adversarial Network (GAN) which can generate synthetic faces of high visual fidelity within required gender and age categories. Moreover, we propose a novel method which allows employing GA-cGAN for gender swapping and aging/rejuvenation without losing the original identity in synthetic faces. Finally, in order to show the practical interest of the designed face editing method, we apply it to improve the accuracy of off-the-shelf face verification software in a cross-age evaluation scenario.
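The conditioning mechanism behind a network like GA-cGAN can be sketched with the standard cGAN recipe: a learned embedding of the target gender/age class is concatenated with the noise vector before generation. The PyTorch snippet below shows only this generic mechanism; the dimensions, class count, and architecture are illustrative assumptions, not GA-cGAN itself.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Generator conditioned on discrete age/gender labels by concatenating
    a learned label embedding to the noise vector (standard cGAN recipe)."""
    def __init__(self, z_dim=100, n_classes=12, emb_dim=16, out_dim=64 * 64 * 3):
        super().__init__()
        self.embed = nn.Embedding(n_classes, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, labels):
        h = torch.cat([z, self.embed(labels)], dim=1)  # condition on the class
        return self.net(h).view(-1, 3, 64, 64)

g = ConditionalGenerator()
z = torch.randn(2, 100)
y = torch.tensor([3, 7])          # e.g., age-bin/gender class indices
print(g(z, y).shape)              # torch.Size([2, 3, 64, 64])
```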
Garbay, Thomas. "Zip-CNN." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS210.pdf.
Digital systems used for the Internet of Things (IoT) and embedded systems have seen increasing use in recent decades. Embedded systems based on a Microcontroller Unit (MCU) solve various problems by collecting large amounts of data. Today, about 250 billion MCUs are in use, and projections for the coming years point to very strong growth. Artificial intelligence has seen a resurgence of interest since 2012. The use of Convolutional Neural Networks (CNNs) has helped to solve many problems in computer vision and natural language processing. Implementing CNNs within embedded systems would greatly improve the exploitation of the collected data. However, the inference cost of a CNN makes its implementation within embedded systems challenging. This thesis focuses on exploring the solution space in order to assist the implementation of CNNs within microcontroller-based embedded systems. For this purpose, the Zip-CNN methodology is defined. It takes into account both the embedded system and the CNN to be implemented, and provides the embedded designer with information regarding the impact of CNN inference on the system. A designer can thus explore the impact of design choices with the objective of respecting the constraints of the targeted application. A model is defined to quantitatively estimate the latency, energy consumption, and memory space required to infer a CNN on an embedded target, whatever the topology of the CNN. This model takes into account algorithmic reductions such as knowledge distillation, pruning, and quantization. Implementing state-of-the-art CNNs on MCUs verified the accuracy of the different estimations experimentally. This thesis democratizes the implementation of CNNs within MCUs, assisting the designers of embedded systems. Moreover, the results open the way to applying the developed models to other target hardware, such as multi-core architectures or FPGAs. The estimation results are also exploitable in Neural Architecture Search (NAS).
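A first-order version of such an estimation model can be sketched from multiply-accumulate (MAC) counts alone, as below. The cycle and energy constants are placeholder assumptions; in a Zip-CNN-like methodology they would be calibrated on the target MCU, and the real model also accounts for reductions such as pruning or quantization.

```python
def conv_layer_costs(c_in, c_out, k, h_out, w_out,
                     cycles_per_mac=1.0, nj_per_mac=0.5):
    """First-order estimate of one conv layer's inference cost on an MCU.

    cycles_per_mac and nj_per_mac are hypothetical constants that a
    Zip-CNN-like methodology would calibrate on the target hardware.
    """
    macs = c_out * h_out * w_out * c_in * k * k   # multiply-accumulates
    params = c_out * (c_in * k * k + 1)           # weights + biases
    return {
        "macs": macs,
        "latency_cycles": macs * cycles_per_mac,
        "energy_nj": macs * nj_per_mac,
        "weight_bytes": params * 4,               # float32 storage
    }

# toy 3x3 conv: 16 -> 32 channels on a 32x32 output feature map
print(conv_layer_costs(16, 32, 3, 32, 32))
```

Summing such per-layer estimates over a network's topology gives the kind of whole-model latency/energy/memory figures the abstract describes.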
Fourure, Damien. "Réseaux de neurones convolutifs pour la segmentation sémantique et l'apprentissage d'invariants de couleur." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSES056/document.
Computer vision is an interdisciplinary field that investigates how computers can gain a high-level understanding from digital images or videos. In artificial intelligence, and more precisely in machine learning, the field in which this thesis is positioned, computer vision involves extracting characteristics from images and then generalizing concepts related to these characteristics. This field of research has become very popular in recent years, particularly thanks to the results of the convolutional neural networks that form the basis of so-called deep learning methods. Today, neural networks make it possible, among other things, to recognize the different objects present in an image, to generate very realistic images, or even to beat champions at the game of Go. Their performance is not limited to the image domain, since they are also used in other fields such as natural language processing (e.g., machine translation) or sound recognition. In this thesis, we study convolutional neural networks in order to develop specialized architectures and loss functions for low-level tasks (color constancy) as well as high-level tasks (semantic segmentation). Color constancy is the ability of the human visual system to perceive constant colors for a surface despite changes in the spectrum of the illumination (lighting changes). In computer vision, the main approach consists in estimating the color of the illuminant and then suppressing its impact on the perceived color of objects. We approach the task of color constancy with neural networks by developing a new architecture composed of a subsampling operator inspired by traditional methods. Our experiments show that our method obtains performance competitive with the state of the art. Nevertheless, our architecture requires a large amount of training data. In order to partially correct this problem and improve the training of neural networks, we present several techniques for artificial data augmentation. We also make two contributions on a high-level task: semantic segmentation. This task, which consists of assigning a semantic class to each pixel of an image, is a challenge in computer vision because of its complexity. On the one hand, it requires many training examples, which are costly to obtain. On the other hand, it requires adapting traditional convolutional neural networks in order to obtain a so-called dense prediction, i.e., a prediction for each pixel present in the input image. To solve the difficulty of acquiring training data, we propose an approach that uses several databases annotated with different labels at the same time. To do this, we define a selective loss function that has the advantage of allowing the training of a convolutional neural network from data from multiple databases. We also developed a self-context approach that captures the correlations between labels in different databases. Finally, we present our third contribution: a new convolutional neural network architecture called GridNet, specialized for semantic segmentation. Unlike traditional networks, implemented with a single path from the input (image) to the output (prediction), our architecture is implemented as a 2D grid allowing several interconnected streams to operate at different resolutions. In order to exploit all the paths of the grid, we propose a technique inspired by dropout. In addition, we empirically demonstrate that our architecture generalizes many well-known state-of-the-art networks.
We conclude with an analysis of the empirical results obtained with our architecture which, although trained from scratch, achieves very good performance, exceeding popular approaches that are often pre-trained.
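One plausible reading of the selective loss described above is a cross-entropy restricted, per sample, to the label set of that sample's dataset of origin: logits of classes absent from that dataset are masked out, so a single network can train on several differently-annotated databases. The PyTorch sketch below implements that reading; the union label space and the per-dataset masks are illustrative assumptions, not the thesis's exact formulation.

```python
import torch
import torch.nn.functional as F

def selective_loss(logits, targets, label_mask):
    """Cross-entropy over only the classes that exist in each sample's
    source dataset.

    logits:     (N, C) scores over the union of all label sets.
    targets:    (N,) ground-truth indices in the union label space.
    label_mask: (N, C) boolean, True where the class is annotated in the
                sample's dataset of origin.
    """
    masked = logits.masked_fill(~label_mask, float("-inf"))  # drop foreign classes
    return F.cross_entropy(masked, targets)

logits = torch.randn(2, 5, requires_grad=True)
targets = torch.tensor([1, 3])
mask = torch.tensor([[1, 1, 1, 0, 0],        # sample from dataset A (classes 0-2)
                     [0, 0, 1, 1, 1]]).bool()  # sample from dataset B (classes 2-4)
selective_loss(logits, targets, mask).backward()
```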
Suzano, Massa Francisco Vitor. "Mise en relation d'images et de modèles 3D avec des réseaux de neurones convolutifs." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1198/document.
The recent availability of large catalogs of 3D models enables new possibilities for 3D reasoning on photographs. This thesis investigates the use of convolutional neural networks (CNNs) for relating 3D objects to 2D images. We first introduce two contributions that are used throughout this thesis: an automatic memory reduction library for deep CNNs, and a study of CNN features for cross-domain matching. In the first one, we develop a library built on top of Torch7 which automatically reduces up to 91% of the memory requirements for deploying a deep CNN. As a second point, we study the effectiveness of various CNN features extracted from a pre-trained network in the case of images from different modalities (real or synthetic images). We show that despite the large cross-domain difference between rendered views and photographs, it is possible to use some of these features for instance retrieval, with possible applications to image-based rendering. CNNs have recently been used for the task of object viewpoint estimation, sometimes with very different design choices. We present these approaches in a unified framework and analyse the key factors that affect performance. We propose a joint training method that combines both detection and viewpoint estimation, which performs better than considering viewpoint estimation separately. We also study the impact of formulating viewpoint estimation either as a discrete or a continuous task, quantify the benefits of deeper architectures, and demonstrate that using synthetic data is beneficial. With all these elements combined, we improve over previous state-of-the-art results on the Pascal3D+ dataset by approximately 5% in mean average viewpoint precision. In the instance retrieval study, the image of the object is given and the goal is to identify, among a number of 3D models, which object it is. We extend this work to object detection, where instead we are given a 3D model (or a set of 3D models) and asked to locate and align the model in the image. We show that simply using CNN features is not enough for this task, and we propose to learn a transformation that brings the features of real images close to the features of rendered views. We evaluate our approach both qualitatively and quantitatively on two standard datasets: the IKEA object dataset, and the chair-category subset of the Pascal VOC 2012 dataset, and we show state-of-the-art results on both of them.
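The last idea, learning a transformation that pulls real-image features toward rendered-view features, can be sketched as a small regression network trained on paired features. The adapter architecture, feature dimension, and L2 objective below are illustrative assumptions rather than the thesis's exact setup.

```python
import torch
import torch.nn as nn

# A small adapter that maps CNN features of real photographs toward the
# feature space of rendered CAD views, trained on paired examples with an
# L2 objective. Architecture and dimensions are hypothetical.
adapter = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)

real_feats = torch.randn(256, 512)    # stand-ins for pre-trained CNN features
render_feats = torch.randn(256, 512)  # features of the aligned rendered views

for _ in range(100):
    opt.zero_grad()
    loss = ((adapter(real_feats) - render_feats) ** 2).mean()
    loss.backward()
    opt.step()
```

At retrieval time, one would adapt the query photograph's features with this network before matching them against the rendered-view index.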
Groueix, Thibault. "Learning 3D Generation and Matching." Thesis, Paris Est, 2020. http://www.theses.fr/2020PESC1024.
The goal of this thesis is to develop deep learning approaches to model and analyse 3D shapes. Progress in this field could democratize the artistic creation of 3D assets, which currently requires time and expert skills with technical software. We focus on the design of deep learning solutions for two particular tasks that are key to many 3D modeling applications: single-view reconstruction and shape matching. A single-view reconstruction (SVR) method takes as input a single image and predicts the physical world which produced that image. SVR dates back to the early days of computer vision. In particular, in the 1960s, Lawrence G. Roberts proposed to align simple 3D primitives to the input image under the assumption that the physical world is made of cuboids. Another approach, proposed by Berthold Horn in the 1970s, is to decompose the input image into intrinsic images and use those to predict the depth of every input pixel. Since several configurations of shapes, texture, and illumination can explain the same image, both approaches need to form assumptions on the distribution of images and 3D shapes to resolve the ambiguity. In this thesis, we learn these assumptions from large-scale datasets instead of manually designing them. Learning allows us to perform complete object reconstruction, including parts which are not visible in the input image. Shape matching aims at finding correspondences between 3D objects. Solving this task requires both a local and a global understanding of 3D shapes, which is hard to achieve explicitly. Instead, we train neural networks on large-scale datasets to solve this task and capture this knowledge implicitly through their internal parameters. Shape matching supports many 3D modeling applications such as attribute transfer, automatic rigging for animation, or mesh editing. The first technical contribution of this thesis is a new parametric representation of 3D surfaces modeled by neural networks. The choice of data representation is a critical aspect of any 3D reconstruction algorithm. Until recently, most approaches in deep 3D model generation predicted volumetric voxel grids or point clouds, which are discrete representations. Instead, we present an alternative approach that predicts a parametric surface deformation, i.e., a mapping from a template to a target geometry. To demonstrate the benefits of such a representation, we train a deep encoder-decoder for single-view reconstruction using our new representation. Our approach, dubbed AtlasNet, is the first deep single-view reconstruction approach able to reconstruct meshes from images without relying on independent post-processing, and can do so at arbitrary resolution without memory issues. A more detailed analysis of AtlasNet reveals that it also generalizes better to categories it has not been trained on than other deep 3D generation approaches. Our second main contribution is a novel shape matching approach purely based on reconstruction via deformations. We show that the quality of the shape reconstructions is critical to obtaining good correspondences, and therefore introduce a test-time optimization scheme to refine the learned deformations. For humans and other deformable shape categories deviating by a near-isometry, our approach can leverage a shape template and isometric regularization of the surface deformations.
As categories exhibiting non-isometric variations, such as chairs, do not have a clear template, we learn how to deform any shape into any other and leverage cycle-consistency constraints to learn meaningful correspondences. Our reconstruction-for-matching strategy operates directly on point clouds, is robust to many types of perturbations, and outperforms the state of the art by 15% on dense matching of real human scans.
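The parametric surface representation behind AtlasNet can be illustrated with a small decoder: an MLP maps 2-D points sampled on a template patch, concatenated with a shape latent code, to 3-D surface points, so sampling more template points yields an arbitrarily dense surface. The layer sizes and latent dimension in this PyTorch sketch are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class PatchDecoder(nn.Module):
    """AtlasNet-style idea: map (2-D template point, shape code) -> 3-D point.
    The surface is the image of the template under this learned deformation."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3), nn.Tanh(),
        )

    def forward(self, uv, z):                 # uv: (N, P, 2), z: (N, latent_dim)
        z = z.unsqueeze(1).expand(-1, uv.size(1), -1)  # share the code per point
        return self.mlp(torch.cat([uv, z], dim=-1))

decoder = PatchDecoder()
uv = torch.rand(1, 2048, 2)     # points sampled on the unit-square template
z = torch.randn(1, 128)         # latent code, e.g., from an image encoder
print(decoder(uv, z).shape)     # torch.Size([1, 2048, 3])
```

Because uv can be sampled at any density at test time, the output resolution is decoupled from memory, which is the property the abstract highlights.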
Beltzung, Benjamin. "Utilisation de réseaux de neurones convolutifs pour mieux comprendre l’évolution et le développement du comportement de dessin chez les Hominidés." Electronic Thesis or Diss., Strasbourg, 2023. http://www.theses.fr/2023STRAJ114.
The study of drawing behavior can be highly informative, both cognitively and psychologically, in humans and other primates. However, this wealth of information can also be a challenge to analyse and interpret, particularly in the absence of explanation or verbalization by the author of the drawing. Indeed, an adult's interpretation of a drawing may not be in line with the artist's original intention. During my thesis, I showed that, although generally regarded as black boxes, convolutional neural networks (CNNs) can provide a better understanding of drawing behavior. First, I used a CNN to classify the drawings of a female orangutan according to their season of production, highlighting variation in style and content. In addition, an ontogenetic approach was considered to quantify the similarity between productions from different age groups. In the future, more interpretable models and the application of new interpretability methods could help to better decipher drawing behavior.
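One standard way to peek inside such a black-box classifier is occlusion sensitivity: slide a neutral patch over the drawing and measure how much the class score drops, revealing which strokes drive the decision. The PyTorch sketch below shows this generic probe with a toy model; the patch size and model are assumptions, and this is not necessarily among the interpretability methods used in the thesis.

```python
import torch

def occlusion_map(model, image, target_class, patch=16, stride=16):
    """Coarse interpretability probe: grey out each region of the drawing
    in turn and record the drop in the target class score."""
    model.eval()
    with torch.no_grad():
        base = model(image.unsqueeze(0))[0, target_class].item()
        _, h, w = image.shape
        heat = torch.zeros(h // stride, w // stride)
        for i in range(0, h - patch + 1, stride):
            for j in range(0, w - patch + 1, stride):
                occluded = image.clone()
                occluded[:, i:i + patch, j:j + patch] = 0.5  # neutral grey patch
                score = model(occluded.unsqueeze(0))[0, target_class].item()
                heat[i // stride, j // stride] = base - score  # importance of region
    return heat

# toy stand-in for a trained drawing classifier (4 seasons as classes)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 4))
print(occlusion_map(model, torch.rand(3, 64, 64), target_class=0).shape)  # (4, 4)
```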
Book chapters on the topic "Réseaux neuronaux convolutifs (CNN)"
BYTYN, Andreas, René AHLSDORF, and Gerd ASCHEID. "Systèmes multiprocesseurs basés sur un ASIP pour l’efficacité des CNN." In Systèmes multiprocesseurs sur puce 1, 93–111. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9021.ch4.