Academic literature on the topic 'Réseaux de neurones profonds parcimonieux'
Journal articles on the topic "Réseaux de neurones profonds parcimonieux":
Ruan, S., P. Vera, P. Decazes, and R. Modzelewski. "RADIOGAN : réseaux de neurones profonds génératifs conditionnels pour la synthétisation d’images TEP au FDG." Médecine Nucléaire 44, no. 2 (March 2020): 105–6. http://dx.doi.org/10.1016/j.mednuc.2020.01.128.
Jovanović, S., and S. Weber. "Modélisation et accélération de réseaux de neurones profonds (CNN) en Python/VHDL/C++ et leur vérification et test à l’aide de l’environnement Pynq sur les FPGA Xilinx." J3eA 21 (2022): 1028. http://dx.doi.org/10.1051/j3ea/20220028.
Ruan, S., P. Decazes, and R. Modzelewski. "Contribution des cartes d’activation de classe des réseaux de neurones profonds pour la classification des tumeurs primaires en TEP-FDG." Médecine Nucléaire 44, no. 2 (March 2020): 133. http://dx.doi.org/10.1016/j.mednuc.2020.01.080.
Etiemble, Daniel. "Supports matériels pour les réseaux de neurones profonds." Technologies logicielles Architectures des systèmes, August 2021. http://dx.doi.org/10.51257/a-v1-h1098.
Philizot, Vivien. "Les mots, les choses et les images. Apprendre à voir à une machine." Radar, no. 4 (January 1, 2019). http://dx.doi.org/10.57086/radar.212.
Dissertations / Theses on the topic "Réseaux de neurones profonds parcimonieux":
Le, Quoc Tung. "Algorithmic and theoretical aspects of sparse deep neural networks." Electronic Thesis or Diss., Lyon, École normale supérieure, 2023. http://www.theses.fr/2023ENSL0105.
Sparse deep neural networks offer a compelling practical opportunity to reduce the cost of training, inference and storage, which are growing exponentially in the state of the art of deep learning. In this thesis, we introduce an approach to study sparse deep neural networks through the lens of a related problem: sparse matrix factorization, i.e., approximating a (dense) matrix by a product of (multiple) sparse factors. In particular, we identify and investigate in detail theoretical and algorithmic aspects of a variant of sparse matrix factorization named fixed support matrix factorization (FSMF), in which the set of non-zero entries of the sparse factors is known. Several fundamental questions about sparse deep neural networks, such as the existence of optimal solutions of the training problem or topological properties of its function space, can be addressed using the results on FSMF. In addition, applying these results, we also study the butterfly parametrization, an approach that replaces (large) weight matrices with products of extremely sparse and structured ones in sparse deep neural networks.
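As a rough illustration of the FSMF problem described in the abstract above, the sketch below approximates a matrix by a product of two factors whose supports are fixed in advance, using projected gradient descent; the target matrix, supports, step size, and iteration count are toy assumptions, not taken from the thesis.

```python
import numpy as np

def fsmf(A, S1, S2, steps=5000, lr=0.01, seed=0):
    """Fixed-support matrix factorization sketch: approximate A by X @ Y
    with X, Y constrained to the boolean support masks S1, S2, using
    projected gradient descent on the squared Frobenius error."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(S1.shape) * S1
    Y = rng.standard_normal(S2.shape) * S2
    for _ in range(steps):
        R = X @ Y - A                # residual
        gX = (R @ Y.T) * S1          # gradients projected onto the supports
        gY = (X.T @ R) * S2
        X -= lr * gX
        Y -= lr * gY
    return X, Y

# Toy instance: a rank-1 target and a support with one forbidden entry.
A = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
S1 = np.ones((3, 2), dtype=bool)
S2 = np.array([[1, 1, 0], [1, 1, 1]], dtype=bool)
X, Y = fsmf(A, S1, S2)
err = np.linalg.norm(X @ Y - A) / np.linalg.norm(A)
```

The projection step is trivial here precisely because the support is fixed: masking the gradient keeps the forbidden entries at zero throughout.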
Nono, Wouafo Hugues Gérald. "Architectures matérielles numériques intégrées et réseaux de neurones à codage parcimonieux." Thesis, Lorient, 2016. http://www.theses.fr/2016LORIS394/document.
Nowadays, artificial neural networks are widely used in many applications such as image and signal processing. Recently, a new neural network model, the GBNN (Gripon-Berrou Neural Network), was proposed for designing associative memories. This model offers a storage capacity exceeding that of Hopfield networks when the information to be stored has a uniform distribution. Methods improving performance for non-uniform distributions, as well as hardware architectures implementing GBNN networks, have been proposed. However, these solutions are very expensive in terms of hardware resources, and the proposed architectures can only implement fixed-size networks and are not scalable. The objectives of this thesis are: (1) to design GBNN-inspired models outperforming the state of the art, (2) to propose architectures cheaper than existing solutions, and (3) to design a generic architecture implementing the proposed models and able to handle various network sizes. The results of this work are presented in several parts. First, the concept of clone-based neural networks and its variants are introduced; these networks offer better performance than the state of the art for the same memory cost when the information to be stored follows a non-uniform distribution. Hardware architecture optimizations are then introduced to significantly reduce the resource cost. Finally, a generic, scalable architecture able to handle various network sizes is proposed.
Chabot, Florian. "Analyse fine 2D/3D de véhicules par réseaux de neurones profonds." Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC018/document.
In this thesis, we are interested in fine-grained analysis of vehicles from an image. We define fine-grained analysis as the following tasks: vehicle detection in the image, vehicle viewpoint (or orientation) estimation, vehicle visibility characterization, vehicle 3D localization, and make and model recognition. The design of reliable solutions for fine-grained vehicle analysis opens the door to multiple applications, in particular intelligent transport systems as well as video surveillance systems. In this work, we propose several contributions addressing this issue partially or wholly. The proposed approaches are based jointly on deep learning techniques and 3D models. In a first section, we deal with make and model classification, keeping in mind the difficulty of creating training data. In a second section, we investigate a novel method for both vehicle detection and fine-grained viewpoint estimation based on local appearance features and geometric spatial coherence; it uses models learned only on synthetic data. Finally, in a third section, a complete system for fine-grained analysis, based on the multi-task concept, is proposed. Throughout this report, we provide quantitative and qualitative results. On several aspects related to fine-grained vehicle analysis, this work outperforms state-of-the-art methods.
Simonnet, Edwin. "Réseaux de neurones profonds appliqués à la compréhension de la parole." Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1006/document.
This thesis is set in the context of the emergence of deep learning and focuses on spoken language understanding, i.e., the automatic extraction and representation of the meaning carried by the words in a spoken utterance. We study a semantic concept tagging task used in a spoken dialogue system and evaluated on the French corpus MEDIA. Over the past decade, neural models have emerged in many natural language processing tasks thanks to algorithmic advances and powerful computing tools such as graphics processors. Many obstacles make the understanding task complex, such as the difficult interpretation of automatic speech transcriptions: many errors are introduced by the automatic recognition process upstream of the comprehension module. We present a state-of-the-art review of spoken language understanding and of the supervised machine learning methods used to solve it, starting with classical systems and finishing with deep learning techniques. The contributions are then presented along three axes. First, we develop an efficient neural architecture consisting of a bidirectional recurrent encoder-decoder with an attention mechanism. Then we study the management of automatic recognition errors and solutions to limit their impact on our performance. Finally, we consider a disambiguation of the comprehension task that makes the systems more effective.
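The attention mechanism mentioned in the abstract above can be illustrated generically; the sketch below implements plain dot-product attention over a set of encoder states (not the thesis's exact architecture, and all shapes are illustrative).

```python
import numpy as np

def attention(H, s):
    """Dot-product attention sketch: weight encoder states H (T, d) by
    their similarity to the decoder state s (d,), then return the
    context vector together with the attention distribution."""
    scores = H @ s                    # one similarity score per time step
    a = np.exp(scores - scores.max()) # numerically stable softmax
    a /= a.sum()
    return a @ H, a

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4))       # 6 encoder time steps, dimension 4
s = rng.standard_normal(4)            # current decoder state
context, weights = attention(H, s)
```

The context vector is a convex combination of the encoder states, so the decoder can focus on the most relevant parts of the input utterance at each step.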
Metz, Clément. "Codages optimisés pour la conception d'accélérateurs matériels de réseaux de neurones profonds." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST190.
Neural networks are an important component of machine learning tools because of their wide range of applications (health, energy, defence, finance, autonomous navigation, etc.). The performance of neural networks is greatly influenced by the complexity of their architecture in terms of the number of layers, neurons and connections. But the training and inference of ever-larger networks translate into greater demands on hardware resources and longer computing times. Conversely, their portability is limited on embedded systems with low memory and/or computing capacity. The aim of this thesis is to study and design methods for reducing the hardware footprint of neural networks while preserving their performance as much as possible. We restrict ourselves to convolutional networks dedicated to computer vision and study the possibilities offered by quantization. Quantization aims to reduce the hardware footprint, in terms of memory, bandwidth and computation operators, by reducing the number of bits used for the network's parameters and activations. The contributions of this thesis consist of a new post-training quantization method based on exploiting spatial correlations of network parameters, an approach facilitating the training of very highly quantized networks, and a method combining mixed-precision quantization with lossless entropy coding. The contents of this thesis are essentially limited to algorithmic aspects, but the research orientations were strongly influenced by the requirement that our solutions be feasible in hardware.
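As a generic illustration of the quantization idea discussed above (not the thesis's specific methods), here is a minimal sketch of plain post-training uniform symmetric quantization of a weight tensor.

```python
import numpy as np

def quantize_symmetric(w, n_bits=8):
    """Plain post-training uniform symmetric quantization: map float
    weights to signed n_bits integers plus one float scale per tensor."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    # int8 storage is enough for n_bits <= 8
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

w = np.random.default_rng(0).standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_symmetric(w)
w_hat = q.astype(np.float32) * scale    # dequantized weights
mse = float(np.mean((w - w_hat) ** 2))  # reconstruction error
```

Storing `q` and one scale per tensor cuts memory by roughly 4x versus float32, at the cost of a small, bounded rounding error per weight.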
Huet, Romain. "Codage neural parcimonieux pour un système de vision." Thesis, Lorient, 2017. http://www.theses.fr/2017LORIS439/document.
Neural networks have gained renewed interest through the deep learning paradigm. While so-called optimized neural nets, which optimize the parameters needed for learning, require massive computational resources, we focus here on neural nets designed as content-addressable memories, or neural associative memories. The challenge consists in realising operations traditionally obtained through computation exclusively with neural memory, in order to limit the need for computational resources. In this thesis, we study an associative memory based on cliques, whose sparse neural coding optimises the diversity of the data encoded in the network. This large diversity makes the clique-based network more efficient at retrieving messages from its memory than other neural associative memories. Associative memories are known for their inability to identify without ambiguity the messages stored in a saturated memory: depending on the information present in the network and its encoding, a memory can fail to retrieve a desired result. We tackle this issue and propose several contributions to reduce ambiguities in the clique-based neural network. Moreover, these clique-based nets are unable to retrieve information from their memories when the message is unknown. We propose a solution to this problem through a new associative memory based on cliques which preserves the initial network's corrective ability while being able to hierarchise the information. The hierarchy relies on a surjective and bidirectional transition to generalise an unknown input with an approximation of learnt information. The experimental validation of associative memories is usually based on low-dimensional artificial datasets; in the computer vision context, we report here results obtained with real datasets used in the state of the art, such as MNIST, Yale and CIFAR.
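In the spirit of the clique-based associative memories discussed above, the sketch below stores messages as cliques over clusters of neurons and recovers an erased symbol by winner-take-all scoring; the cluster and neuron counts are illustrative, and the retrieval rule is a simplification of the actual GBNN dynamics.

```python
import numpy as np

C, L = 4, 8                        # clusters, neurons per cluster
W = np.zeros((C, L, C, L), bool)   # binary synapses between clusters

def store(msg):
    """Store a message (one symbol per cluster) as a clique of edges."""
    for i in range(C):
        for j in range(C):
            if i != j:
                W[i, msg[i], j, msg[j]] = True

def retrieve(partial):
    """Recover erased symbols (None) by winner-take-all scoring: each
    candidate neuron is scored by its connections to the known symbols."""
    known = [(i, s) for i, s in enumerate(partial) if s is not None]
    out = list(partial)
    for i, s in enumerate(partial):
        if s is None:
            scores = [sum(W[j, sj, i, v] for j, sj in known)
                      for v in range(L)]
            out[i] = int(np.argmax(scores))
    return out

store([1, 3, 5, 7])
store([2, 4, 6, 0])
result = retrieve([1, 3, None, 7])   # erase the third symbol
```

With few stored messages the known symbols point unambiguously to the missing clique member; ambiguity only appears as the memory saturates, which is exactly the regime the thesis addresses.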
Chollet, Paul. "Traitement parcimonieux de signaux biologiques." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0024/document.
Body area sensor networks have gained great attention through the promise of better quality and cheaper medical care. They are used to detect anomalies and treat them as soon as they arise. Sensors are under heavy constraints such as reliability, sturdiness, size and power consumption. This thesis analyzes the operations performed by a body area sensor network. The different energy requirements are evaluated in order to choose where to focus the research for improving the battery life of the sensors. A sensor for arrhythmia detection is proposed. It includes signal processing through a clique-based neural network. System simulations allow classification between three types of arrhythmia with 95% accuracy. The prototype, based on a 65 nm CMOS mixed-signal circuit, requires only 1.4 μJ. To further reduce energy consumption, a new sensing method is used: a converter architecture is proposed for heart beat acquisition. Simulations and estimates show a 1.18 nJ energy requirement for parameter acquisition while offering 98% classification accuracy. This work leads the way to the development of low-energy sensors whose battery lasts a lifetime.
Ducoffe, Mélanie. "Active learning et visualisation des données d'apprentissage pour les réseaux de neurones profonds." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4115/document.
Our work is presented in three separate parts which can be read independently. First, we propose three active learning heuristics that scale to deep neural networks. We scale query-by-committee, an ensemble active learning method, and reduce its computation time by sampling a committee of deep networks through dropout applied to the trained model. Another direction is margin-based active learning: we propose to use an adversarial perturbation to measure the distance to the margin, and we establish theoretical bounds on the convergence of our adversarial active learning strategy for linear classifiers. Some inherent properties of adversarial examples open up promising opportunities to transfer active learning data from one network to another. We also derive an active learning heuristic that scales to both CNNs and RNNs by selecting the unlabeled data that minimize the variational free energy. Secondly, we focus on how to speed up the computation of Wasserstein distances. We propose to approximate Wasserstein distances using a Siamese architecture. From another point of view, we demonstrate the submodular properties of Wasserstein medoids and how to apply them in active learning. Finally, we provide new visualization tools for explaining the predictions of a CNN on a text. First, we hijack an active learning strategy to confront the relevance of the sentences selected with active learning against state-of-the-art phraseology techniques; this work helps to understand the hierarchy of the linguistic knowledge acquired during the training of CNNs on NLP tasks. Secondly, we take advantage of deconvolution networks for image analysis to offer the linguistic community a new perspective on text analysis, which we call Text Deconvolution Saliency.
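The margin-based selection idea above can be sketched for a linear classifier, where the distance to the decision boundary coincides with the minimal adversarial perturbation norm; the pool, classifier weights, and query batch size below are illustrative assumptions.

```python
import numpy as np

# Synthetic unlabeled pool and a fixed linear classifier standing in
# for the trained model.
rng = np.random.default_rng(1)
w, b = np.array([1.0, -1.0]), 0.0
pool = rng.uniform(-3, 3, (100, 2))

# Distance of each point to the decision boundary |w.x + b| / ||w||;
# for a linear model this is exactly the smallest perturbation that
# flips the prediction.
dist = np.abs(pool @ w + b) / np.linalg.norm(w)

# Query the 5 most ambiguous points for labeling.
query = pool[np.argsort(dist)[:5]]
```

For deep networks the boundary distance has no closed form, which is why the thesis approximates it with an adversarial attack instead.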
Mathieu, Félix. "Traitement de la phase des signaux audio dans les réseaux de neurones profonds." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT046.
The task of separating sound sources in an audio recording requires particular attention. The advent of deep neural networks has improved performance on this task at the expense of increased computational complexity and algorithmic opacity. Interferences induced by these algorithms, whether parasitic or structured, can disrupt the understanding of the signal, especially in the context of voice reproduction. These issues become particularly pronounced in real-time discussions, which necessitates performance metrics to evaluate source separation models. Criteria include the quality of reconstruction of individual tracks, intelligibility of vocal signals, resilience to interference, and other aspects such as reducing computational costs and improving the interpretability of the processing. This thesis aims to enhance the interpretability of these models while mitigating their computational costs, with a specific focus on modeling the phase of signals. The current challenge lies in finding an appropriate model for this crucial component, essential for understanding audio signals. We explore strategies such as using complex-valued models, phase-invariant representations, and models allowing abstraction from the phase component. The ultimate goal is to achieve significant advances in modeling signal phase within deep neural networks, while preserving or reducing computational costs and enhancing the interpretability of existing algorithmic decisions.
Sarr, Jean Michel Amath. "Étude de l’augmentation de données pour la robustesse des réseaux de neurones profonds." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS072.
In this thesis, we considered the problem of the robustness of neural networks, that is, the case where the training set and the deployment set are not drawn independently and identically from the same source (a violation of the so-called i.i.d. hypothesis). Our main research axis has been data augmentation: an extensive literature review and preliminary experiments showed us its regularization potential. Thus, as a first step, we sought to use data augmentation to make neural networks more robust to various synthetic and natural dataset shifts, a dataset shift being simply a violation of the i.i.d. assumption. However, the results of this approach were mixed. We observed that in some cases augmented data could lead to performance jumps on the deployment set, but this did not occur every time; in some cases, augmented data could even reduce performance on the deployment set. In our conclusion, we offer a granular explanation for this phenomenon. A better use of data augmentation for neural network robustness is to generate stress tests and observe a model's behavior when various shifts occur, and then to use that information to estimate the error on the deployment set of interest even without labels; we call this deployment error estimation. Furthermore, we show that the use of independent data augmentation can improve deployment error estimation. We believe this use of data augmentation will allow better quantification of the reliability of neural networks when deployed on new, unknown datasets.
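The stress-test idea above can be illustrated with a toy stand-in classifier evaluated under increasing additive noise, one kind of synthetic dataset shift; the data, the nearest-centroid classifier, and the noise levels are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes; a nearest-centroid classifier
# stands in for a trained network.
X = np.vstack([rng.normal(-2, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
y = np.array([0] * 200 + [1] * 200)
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(Z, t):
    """Accuracy of nearest-centroid prediction on inputs Z, labels t."""
    d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.argmin(axis=1) == t).mean())

# Stress tests: apply increasing additive noise (a synthetic dataset
# shift) and record how accuracy degrades with the shift intensity.
profile = {s: accuracy(X + rng.normal(0, s, X.shape), y)
           for s in (0.0, 1.0, 2.0, 4.0)}
```

The resulting accuracy-versus-shift profile is the kind of information the thesis proposes to exploit for estimating deployment error without labels.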
Book chapters on the topic "Réseaux de neurones profonds parcimonieux":
Cogranne, Rémi, Marc Chaumont, and Patrick Bas. "Stéganalyse : détection d’information cachée dans des contenus multimédias." In Sécurité multimédia 1, 261–303. ISTE Group, 2021. http://dx.doi.org/10.51926/iste.9026.ch8.
Zhang, Hanwei, Teddy Furon, Laurent Amsaleg, and Yannis Avrithis. "Attaques et défenses de réseaux de neurones profonds : le cas de la classification d’images." In Sécurité multimédia 1, 51–85. ISTE Group, 2021. http://dx.doi.org/10.51926/iste.9026.ch2.
Conference papers on the topic "Réseaux de neurones profonds parcimonieux":
Quintas, Sebastião, Alberto Abad, Julie Mauclair, Virginie Woisard, and Julien Pinquier. "Utilisation de réseaux de neurones profonds avec attention pour la prédiction de l’intelligibilité de la parole de patients atteints de cancers ORL." In XXXIVe Journées d'Études sur la Parole -- JEP 2022. ISCA: ISCA, 2022. http://dx.doi.org/10.21437/jep.2022-7.