Academic literature on the topic 'Réseaux neuronaux à convolution'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Réseaux neuronaux à convolution.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Réseaux neuronaux à convolution"
Benyamna, Y., E. Ouiame, C. Zineb, and S. Gallouj. "Performance des réseaux neuronaux convolutifs d’apprentissage profond dans la différenciation entre nævus et mélanome cutané." Annales de Dermatologie et de Vénéréologie - FMC 3, no. 8 (December 2023): A263–A264. http://dx.doi.org/10.1016/j.fander.2023.09.480.
Wendling, Fabrice. "Modélisation des réseaux neuronaux épileptogènes." Neurophysiologie Clinique 48, no. 4 (September 2018): 248. http://dx.doi.org/10.1016/j.neucli.2018.06.074.
Meunier, Claude. "La physique des réseaux neuronaux." Intellectica. Revue de l'Association pour la Recherche Cognitive 9, no. 1 (1990): 313–21. http://dx.doi.org/10.3406/intel.1990.890.
Venance, Laurent. "Dynamique et physiopathologie des réseaux neuronaux." L’annuaire du Collège de France, no. 112 (April 1, 2013): 884–86. http://dx.doi.org/10.4000/annuaire-cdf.1083.
Venance, Laurent. "Dynamique et physiopathologie des réseaux neuronaux." L’annuaire du Collège de France, no. 114 (July 1, 2015): 1030–32. http://dx.doi.org/10.4000/annuaire-cdf.12073.
Venance, Laurent. "Dynamique et physiopathologie des réseaux neuronaux." L’annuaire du Collège de France, no. 115 (November 1, 2016): 913–16. http://dx.doi.org/10.4000/annuaire-cdf.12639.
Venance, Laurent. "Dynamique et physiopathologie des réseaux neuronaux." L’annuaire du Collège de France, no. 111 (April 1, 2012): 909–11. http://dx.doi.org/10.4000/annuaire-cdf.1706.
Deniau, Jean-Michel. "Dynamique et physiopathologie des réseaux neuronaux." L’annuaire du Collège de France, no. 108 (December 1, 2008): 964–69. http://dx.doi.org/10.4000/annuaire-cdf.254.
Venance, Laurent. "Dynamique et physiopathologie des réseaux neuronaux." L’annuaire du Collège de France, no. 113 (April 1, 2014): 947–49. http://dx.doi.org/10.4000/annuaire-cdf.2708.
Deniau, Jean-Michel, and Laurent Venance. "Dynamique et physiopathologie des réseaux neuronaux." L’annuaire du Collège de France, no. 109 (March 1, 2010): 1082–86. http://dx.doi.org/10.4000/annuaire-cdf.456.
Dissertations / Theses on the topic "Réseaux neuronaux à convolution"
Khalfaoui, Hassani Ismail. "Convolution dilatée avec espacements apprenables." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES017.
In this thesis, we develop and study the Dilated Convolution with Learnable Spacings (DCLS) method. The DCLS method can be considered an extension of the standard dilated convolution method, but in which the positions of the weights of a neural network are learned during training by the gradient backpropagation algorithm, thanks to an interpolation technique. We empirically demonstrate the effectiveness of the DCLS method by providing concrete evidence from numerous supervised learning experiments. These experiments are drawn from the fields of computer vision, audio, and speech processing, and all show that the DCLS method has a competitive advantage over standard convolution techniques, as well as over several advanced convolution methods. Our approach is structured in several steps, starting with an analysis of the literature and existing convolution techniques that preceded the development of the DCLS method. We were particularly interested in the methods that are closely related to our own and that remain essential to capture the nuances and uniqueness of our approach. The cornerstone of our study is the introduction and application of the DCLS method to convolutional neural networks (CNNs), as well as to hybrid architectures that rely on both convolutional and visual attention approaches. The DCLS method is particularly noteworthy for its capabilities in supervised computer vision tasks such as classification, semantic segmentation, and object detection, all of which are essential tasks in the field. Having originally developed the DCLS method with bilinear interpolation, we explored other interpolation methods that could replace the bilinear interpolation conventionally used in DCLS, and which aim to make the position parameters of the weights in the convolution kernel differentiable. Gaussian interpolation proved to be slightly better in terms of performance. Our research then led us to apply the DCLS method in the field of spiking neural networks (SNNs) to enable synaptic delay learning within a neural network that could eventually be transferred to so-called neuromorphic chips. The results show that the DCLS method stands out as a new state-of-the-art technique in SNN audio classification for certain benchmark tasks in this field. These tasks involve datasets with a high temporal component. In addition, we show that DCLS can significantly improve the accuracy of artificial neural networks for the multi-label audio classification task, a key achievement in one of the most important audio classification benchmarks. We conclude with a discussion of the chosen experimental setup, its limitations, the limitations of our method, and our results.
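To make the mechanism described in the abstract above concrete, here is a minimal, hypothetical PyTorch sketch of a depthwise 1D dilated convolution with learnable spacings: weights sit at continuous, trainable positions inside a larger dilated support, and linear interpolation over the two nearest taps keeps those positions differentiable. It is only one reading of the abstract, not the thesis code; the class name, shapes and initialisation are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DCLS1dSketch(nn.Module):
    """Illustrative depthwise 1D convolution whose weight positions are learned.
    Each of the `kernel_count` weights sits at a real-valued position inside a
    dilated kernel of length `dilated_size`; linear interpolation spreads it over
    the two nearest integer taps so the positions stay differentiable."""

    def __init__(self, channels, kernel_count=3, dilated_size=9):
        super().__init__()
        self.channels = channels
        self.dilated_size = dilated_size
        self.weight = nn.Parameter(0.1 * torch.randn(channels, 1, kernel_count))
        # continuous positions in [0, dilated_size - 1], trained by backprop
        init = torch.linspace(0.0, dilated_size - 1.0, kernel_count)
        self.pos = nn.Parameter(init.repeat(channels, 1, 1))

    def build_kernel(self):
        p = self.pos.clamp(0.0, self.dilated_size - 1.001)
        lo = p.floor()
        frac = p - lo                                   # interpolation fraction
        idx = lo.long()
        kernel = torch.zeros(self.channels, 1, self.dilated_size,
                             dtype=p.dtype, device=p.device)
        kernel = kernel.scatter_add(2, idx, self.weight * (1.0 - frac))
        kernel = kernel.scatter_add(2, idx + 1, self.weight * frac)
        return kernel

    def forward(self, x):                               # x: (batch, channels, length)
        return F.conv1d(x, self.build_kernel(),
                        padding=self.dilated_size // 2, groups=self.channels)
```

The Gaussian-interpolation variant mentioned in the abstract would replace the two-tap linear spread with a normalised Gaussian over all taps of the dilated kernel.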
Elbayad, Maha. "Une alternative aux modèles neuronaux séquence-à-séquence pour la traduction automatique." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM012.
In recent years, deep learning has enabled impressive achievements in Machine Translation. Neural Machine Translation (NMT) relies on training deep neural networks with large numbers of parameters on vast amounts of parallel data to learn how to translate from one language to another. One crucial factor in the success of NMT is the design of new powerful and efficient architectures. State-of-the-art systems are encoder-decoder models that first encode a source sequence into a set of feature vectors and then decode the target sequence conditioning on the source features. In this thesis we question the encoder-decoder paradigm and advocate for an intertwined encoding of the source and target so that the two sequences interact at increasing levels of abstraction. For this purpose, we introduce Pervasive Attention, a model based on two-dimensional convolutions that jointly encode the source and target sequences with interactions that are pervasive throughout the network. To improve the efficiency of NMT systems, we explore online machine translation where the source is read incrementally and the decoder is fed partial contexts so that the model can alternate between reading and writing. We investigate deterministic agents that guide the read/write alternation through a rigid decoding path, and introduce new dynamic agents to estimate a decoding path for each sample. We also address the resource-efficiency of encoder-decoder models and posit that going deeper in a neural network is not required for all instances. We design depth-adaptive Transformer decoders that allow for anytime prediction and sample-adaptive halting mechanisms to favor low-cost predictions for low-complexity instances and save deeper predictions for complex scenarios.
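As a rough illustration of the joint source-target encoding mentioned above, the sketch below (our own PyTorch simplification, not the published Pervasive Attention architecture) builds a 2D grid with one axis per sequence and applies a 2D convolution masked so that a target position only sees earlier target tokens; layer sizes and the padding scheme are assumptions.

```python
import torch
import torch.nn as nn

class PervasiveAttentionBlockSketch(nn.Module):
    """Toy joint source-target encoder: source and target embeddings are combined
    into a (target x source) grid and processed with a 2D convolution, padded
    asymmetrically along the target axis so the block stays causal."""

    def __init__(self, dim, kernel_size=3):
        super().__init__()
        # (left, right, top, bottom): symmetric over source, causal over target
        self.pad = nn.ConstantPad2d((kernel_size // 2, kernel_size // 2,
                                     kernel_size - 1, 0), 0.0)
        self.conv = nn.Conv2d(2 * dim, dim, kernel_size)

    def forward(self, src_emb, tgt_emb):
        # src_emb: (N, S, D), tgt_emb: (N, T, D)
        n, s, d = src_emb.shape
        t = tgt_emb.shape[1]
        grid_src = src_emb.unsqueeze(1).expand(n, t, s, d)   # broadcast source
        grid_tgt = tgt_emb.unsqueeze(2).expand(n, t, s, d)   # broadcast target
        grid = torch.cat([grid_src, grid_tgt], dim=-1)       # (N, T, S, 2D)
        grid = grid.permute(0, 3, 1, 2)                      # (N, 2D, T, S)
        return self.conv(self.pad(grid)).permute(0, 2, 3, 1)  # (N, T, S, D)
```

A full model would stack several such blocks and aggregate over the source axis before predicting each target token.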
Pradels, Léo. "Efficient CNN inference acceleration on FPGAs : a pattern pruning-driven approach." Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS087.
CNN-based deep learning models provide state-of-the-art performance in image and video processing tasks, particularly for image enhancement or classification. However, these models are computationally and memory intensive, making them unsuitable for real-time constraints on embedded FPGA systems. As a result, compressing these CNNs and designing accelerator architectures for inference that integrate compression in a hardware-software co-design approach is essential. While software optimizations like pruning have been proposed, they often lack the structured approach needed for effective accelerator integration. To address these limitations, this thesis focuses on accelerating CNNs on FPGAs while complying with real-time constraints on embedded systems. This is achieved through several key contributions. First, it introduces pattern pruning, which imposes structure on network sparsity, enabling efficient hardware acceleration with minimal accuracy loss due to compression. Second, a scalable accelerator for CNN inference is presented, which adapts its architecture based on input performance criteria, FPGA specifications, and the target CNN model architecture. An efficient method for integrating pattern pruning within the accelerator and a complete flow for CNN acceleration are proposed. Finally, improvements in network compression are explored through Shift&Add quantization, which modifies FPGA computation methods while maintaining baseline network accuracy.
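The pattern pruning mentioned here, imposing structure on sparsity, can be illustrated with a toy PyTorch sketch: each 3×3 kernel is restricted to whichever mask from a small dictionary keeps most of its energy, so the accelerator only has to support a handful of tap layouts. The dictionary, the selection rule and all names are our assumptions, not the thesis implementation.

```python
import torch

def pattern_prune(weights, patterns):
    """Restrict every 3x3 kernel to the binary pattern that preserves the
    largest share of its squared magnitude.

    weights:  (out_ch, in_ch, 3, 3) convolution weights
    patterns: (P, 3, 3) binary masks, e.g. 4 non-zero taps each
    """
    out_ch, in_ch, kh, kw = weights.shape
    flat = weights.reshape(-1, kh * kw)                      # one row per kernel
    masks = patterns.reshape(-1, kh * kw).to(weights.dtype)  # (P, 9)
    kept = (flat ** 2) @ masks.t()                           # energy kept per pattern
    best = kept.argmax(dim=1)                                # pattern index per kernel
    pruned = flat * masks[best]                              # apply the chosen masks
    return pruned.reshape(out_ch, in_ch, kh, kw), best.reshape(out_ch, in_ch)

# toy dictionary: four 4-tap patterns around the kernel centre (assumed shapes)
patterns = torch.tensor([
    [[1, 1, 0], [1, 1, 0], [0, 0, 0]],
    [[0, 1, 1], [0, 1, 1], [0, 0, 0]],
    [[0, 0, 0], [1, 1, 0], [1, 1, 0]],
    [[0, 0, 0], [0, 1, 1], [0, 1, 1]],
], dtype=torch.float32)
w = torch.randn(8, 4, 3, 3)
w_pruned, chosen = pattern_prune(w, patterns)
```

After such a step the network would typically be fine-tuned to recover the small accuracy loss, and the chosen pattern indices become metadata a hardware datapath can exploit.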
Gariépy, Alexandre. "Robust parallel-gripper grasp detection using convolutional neural networks." Master's thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/37993.
Grasping is a fundamental robotic task needed for the deployment of household robots or furthering warehouse automation. However, few approaches are able to perform grasp detection in real time (frame rate). To this effect, we present the Grasp Quality Spatial Transformer Network (GQ-STN), a one-shot grasp detection network. Being based on the Spatial Transformer Network (STN), it produces not only a grasp configuration, but also directly outputs a depth image centered at this configuration. By connecting our architecture to an externally trained grasp robustness evaluation network, we can train efficiently to satisfy a robustness metric via the backpropagation of the gradient emanating from the evaluation network. This removes the difficulty of training detection networks on sparsely annotated databases, a common issue in grasping. We further propose to use this robustness classifier to compare approaches, being more reliable than the traditional rectangle metric. Our GQ-STN is able to detect robust grasps on the depth images of the Dex-Net 2.0 dataset with 92.4% accuracy in a single pass of the network. We finally demonstrate in a physical benchmark that our method can propose robust grasps more often than previous sampling-based methods, while being more than 60 times faster.
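For readers who want to picture the training scheme described above, here is a loose, hypothetical PyTorch sketch, not the authors' GQ-STN: the class name, layer sizes and pose parametrisation are all assumptions, and `robustness_net` stands for the externally trained evaluation network mentioned in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GQSTNSketch(nn.Module):
    """A small CNN regresses a grasp pose (x, y, angle), an STN-style sampler
    crops a rotated patch centred on that pose, and a frozen robustness
    classifier scores the patch so its gradient can train the detector."""

    def __init__(self, robustness_net, patch=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 3),
        )
        self.patch = patch
        self.robustness_net = robustness_net.eval()
        for p in self.robustness_net.parameters():    # evaluator stays fixed
            p.requires_grad_(False)

    def forward(self, depth):                          # depth: (N, 1, H, W)
        raw = self.backbone(depth)
        x, y = torch.tanh(raw[:, 0]), torch.tanh(raw[:, 1])  # centre in [-1, 1]
        theta = raw[:, 2]                              # grasp angle (radians)
        scale = self.patch / depth.shape[-1]           # zoom factor of the crop
        cos, sin = torch.cos(theta), torch.sin(theta)
        mat = torch.stack([                            # rotate + zoom + translate
            torch.stack([cos * scale, -sin * scale, x], dim=1),
            torch.stack([sin * scale,  cos * scale, y], dim=1),
        ], dim=1)                                      # (N, 2, 3)
        grid = F.affine_grid(mat, (depth.shape[0], 1, self.patch, self.patch),
                             align_corners=False)
        crop = F.grid_sample(depth, grid, align_corners=False)
        score = self.robustness_net(crop)              # robustness of the proposed grasp
        return raw, score                              # maximise the score during training
```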
Groueix, Thibault. "Learning 3D Generation and Matching." Thesis, Paris Est, 2020. http://www.theses.fr/2020PESC1024.
The goal of this thesis is to develop deep learning approaches to model and analyse 3D shapes. Progress in this field could democratize artistic creation of 3D assets, which currently requires time and expert skills with technical software. We focus on the design of deep learning solutions for two particular tasks, key to many 3D modeling applications: single-view reconstruction and shape matching. A single-view reconstruction (SVR) method takes as input a single image and predicts the physical world which produced that image. SVR dates back to the early days of computer vision. In particular, in the 1960s, Lawrence G. Roberts proposed to align simple 3D primitives to the input image under the assumption that the physical world is made of cuboids. Another approach proposed by Berthold Horn in the 1970s is to decompose the input image into intrinsic images and use those to predict the depth of every input pixel. Since several configurations of shapes, texture and illumination can explain the same image, both approaches need to form assumptions on the distribution of images and 3D shapes to resolve the ambiguity. In this thesis, we learn these assumptions from large-scale datasets instead of manually designing them. Learning allows us to perform complete object reconstruction, including parts which are not visible in the input image. Shape matching aims at finding correspondences between 3D objects. Solving this task requires both a local and global understanding of 3D shapes which is hard to achieve explicitly. Instead we train neural networks on large-scale datasets to solve this task and capture this knowledge implicitly through their internal parameters. Shape matching supports many 3D modeling applications such as attribute transfer, automatic rigging for animation, or mesh editing. The first technical contribution of this thesis is a new parametric representation of 3D surfaces modeled by neural networks. The choice of data representation is a critical aspect of any 3D reconstruction algorithm. Until recently, most of the approaches in deep 3D model generation were predicting volumetric voxel grids or point clouds, which are discrete representations. Instead, we present an alternative approach that predicts a parametric surface deformation, i.e. a mapping from a template to a target geometry. To demonstrate the benefits of such a representation, we train a deep encoder-decoder for single-view reconstruction using our new representation. Our approach, dubbed AtlasNet, is the first deep single-view reconstruction approach able to reconstruct meshes from images without relying on an independent post-processing step, and can do so at arbitrary resolution without memory issues. A more detailed analysis of AtlasNet reveals it also generalizes better to categories it has not been trained on than other deep 3D generation approaches. Our second main contribution is a novel shape matching approach purely based on reconstruction via deformations. We show that the quality of the shape reconstructions is critical to obtain good correspondences, and therefore introduce a test-time optimization scheme to refine the learned deformations. For humans and other deformable shape categories deviating by a near-isometry, our approach can leverage a shape template and isometric regularization of the surface deformations. As categories exhibiting non-isometric variations, such as chairs, do not have a clear template, we learn how to deform any shape into any other and leverage cycle-consistency constraints to learn meaningful correspondences. Our reconstruction-for-matching strategy operates directly on point clouds, is robust to many types of perturbations, and outperforms the state of the art by 15% on dense matching of real human scans.
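The parametric surface deformation mentioned in the abstract above can be pictured with the minimal sketch below, written in PyTorch in the spirit of AtlasNet rather than taken from the released code; the layer sizes, the `code_dim` parameter and the random template sampling are assumptions.

```python
import torch
import torch.nn as nn

class AtlasPatchDecoderSketch(nn.Module):
    """An MLP maps points sampled on a 2D template, concatenated with a latent
    shape code, to 3D points, so the surface can be sampled at any resolution."""

    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Tanh(),
        )

    def forward(self, code, n_points=2500):
        # code: (N, code_dim) latent vector from an image or point-cloud encoder
        n = code.shape[0]
        uv = torch.rand(n, n_points, 2, device=code.device)           # template samples
        tiled = code.unsqueeze(1).expand(n, n_points, code.shape[1])  # repeat the code
        return self.mlp(torch.cat([uv, tiled], dim=-1))               # (N, P, 3) points
```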
Saidane, Zohra. "Reconnaissance de texte dans les images et les vidéos en utilisant les réseaux de neurones à convolutions." Phd thesis, Télécom ParisTech, 2008. http://pastel.archives-ouvertes.fr/pastel-00004685.
Vialatte, Jean-Charles. "Convolution et apprentissage profond sur graphes." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0118/document.
Convolutional neural networks have proven to be the deep learning model that performs best on regularly structured datasets like images or sounds. However, they cannot be applied to datasets with an irregular structure (e.g. sensor networks, citation networks, MRIs). In this thesis, we develop an algebraic theory of convolutions on irregular domains. We construct a family of convolutions based on group actions (or, more generally, groupoid actions) that act on the vertex domain and whose properties depend on the edges. With the help of these convolutions, we propose extensions of convolutional neural networks to graph domains. Our research leads us to propose a generic formulation of the propagation between layers, which we call the neural contraction. From this formulation, we derive many novel neural network models that can be applied to irregular domains. Through benchmarks and experiments, we show that they attain state-of-the-art performance, and surpass it in some cases.
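As a very simplified illustration of edge-dependent weight sharing on an irregular domain (a toy analogue only, not the algebraic group-action construction developed in the thesis), the sketch below gives every edge label its own weight matrix, the way translations share weights on an image grid; the class name and normalisation are assumptions.

```python
import torch
import torch.nn as nn

class EdgeLabelGraphConvSketch(nn.Module):
    """Toy convolution on a graph: all edges with the same label share one
    weight matrix, and each labelled adjacency propagates features to
    neighbours; a self-loop transform handles the node's own features."""

    def __init__(self, n_labels, in_dim, out_dim):
        super().__init__()
        self.weights = nn.Parameter(0.1 * torch.randn(n_labels, in_dim, out_dim))
        self.self_loop = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_per_label):
        # x: (n_nodes, in_dim); adj_per_label: (n_labels, n_nodes, n_nodes) binary
        out = self.self_loop(x)
        for S, W in zip(adj_per_label, self.weights):
            deg = S.sum(dim=1, keepdim=True).clamp(min=1)   # simple degree normalisation
            out = out + (S @ x @ W) / deg
        return torch.relu(out)
```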
Mamalet, Franck. "Adéquation algorithme-architecture pour les réseaux de neurones à convolution : application à l'analyse de visages embarquée." Thesis, Lyon, INSA, 2011. http://www.theses.fr/2011ISAL0068.
The proliferation of image sensors in many electronic devices, and the increasing processing capabilities of such sensors, open a field of exploration for the implementation and optimization of complex image processing algorithms in order to provide embedded vision systems. This work is a contribution to the research domain of algorithm-architecture matching. It focuses on a class of algorithms called convolutional neural networks (ConvNets) and their applications in embedded facial analysis. The facial analysis framework, introduced by Garcia et al., was chosen for its state-of-the-art performance in detection/recognition, and also for its homogeneity, being based on ConvNets. The first contribution of this work is an adequacy study of this facial analysis framework with embedded processors. We propose several algorithmic adaptations of ConvNets, and show that they can lead to significant speedup factors (up to 700) on an embedded mobile-phone processor, without performance degradation. We then present a study of ConvNet parallelization capabilities, through N. Farrugia's PhD work. A coarse-grain parallelism exploration of ConvNets, followed by a study of the internal scheduling of elementary processors, led to a parameterized parallel architecture on FPGA, able to detect faces at more than 10 VGA frames per second. Finally, we propose an extension of these studies to the learning phase of neural networks. We analyze several hypothesis space restrictions for ConvNets and show, on a case study, that classification performance is almost unchanged while training time is reduced by a factor of up to five.
Plouet, Erwan. "Convolutional and dynamical spintronic neural networks." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP120.
This thesis addresses the development of spintronic components for neuromorphic computing, a novel approach aimed at reducing the significant energy consumption of AI applications. The widespread adoption of AI, including very large-scale language models like ChatGPT, has led to increased energy demands, with data centers consuming about 1-2% of global power, a share projected to double by 2030. Traditional hardware architectures, which separate memory and processing units, are not well suited to AI tasks, as neural networks require frequent access to large sets of in-memory parameters, resulting in excessive energy dissipation. Neuromorphic computing, inspired by the human brain, merges memory and processing capabilities in the same device, potentially reducing energy use. Spintronics, which manipulates electron spin rather than charge, offers components that can operate at lower power and provide efficient processing solutions. The thesis is divided into two main parts. The first part focuses on the experimental implementation of a hybrid hardware-software convolutional neural network (CNN) using spintronic components. Spintronic synapses, which operate with radio-frequency signals, enable frequency multiplexing to reduce the need for numerous physical connections in neural networks. This research work explores various designs of AMR spin-diode-based synapses, each with different specificities, and demonstrates the integration of these synapses into a hardware CNN. A significant achievement was the implementation of a spintronic convolutional layer within a CNN that, when combined with a software fully-connected layer, successfully classified images from the FashionMNIST dataset with an accuracy of 88%, closely matching the performance of the pure-software equivalent network. Key findings include the development and precise control of spintronic synapses, the fabrication of synaptic chains for weighted summation in neural networks, and the successful implementation of a hybrid CNN with experimental spintronic components on a complex task. The second part of the thesis explores the use of spintronic nano-oscillators (STNOs) for processing time-dependent signals through their transient dynamics. STNOs exhibit nonlinear behaviors that can be utilized for complex tasks like time-series classification. A network of simulated STNOs was trained to discriminate between different types of time series, demonstrating superior performance compared to standard reservoir computing methods. We also proposed and evaluated a multilayer network architecture of STNOs for more complex tasks, such as classifying handwritten digits presented pixel by pixel. This architecture achieved an average accuracy of 89.83%, similar to an equivalent standard continuous-time recurrent neural network (CTRNN), indicating the potential of these networks to adapt to various dynamic tasks. Additionally, guidelines were established for matching device dynamics with input timescales, crucial for optimizing performance in networks of dynamic neurons. We demonstrated that multilayer networks of coupled STNOs can be effectively trained via backpropagation through time, highlighting the efficiency and scalability of spintronic neuromorphic computing. This research demonstrated that spintronic networks can be used to implement specific architectures and solve complex tasks. This paves the way for the creation of compact, low-power spintronic neural networks that could be an alternative to conventional AI hardware, offering a sustainable solution to the growing energy demands of AI technologies.
Achvar, Didier. "Séparation de sources : généralisation à un modèle convolutif." Montpellier 2, 1993. http://www.theses.fr/1993MON20222.
Books on the topic "Réseaux neuronaux à convolution"
Kamp, Yves. Réseaux de neurones récursifs pour mémoires associatives. Lausanne: Presses polytechniques et universitaires romandes, 1990.
Maren, Alianna. Handbook of Neural Computing Applications. San Diego: Academic Press, 1991.
Artificial Neural Networks in Engineering Conference (1991 St. Louis, Mo.). Intelligent engineering systems through artificial neural networks: Proceedings of the Artificial Neural Networks in Engineering (ANNIE '91) Conference, held November 10-13, 1991, in St. Louis, Missouri, U.S.A. Edited by Cihan H. Dagli, Soundar T. Kumara, and Yung C. Shin. New York: ASME Press, 1991.
Amat, Jean-Louis. Techniques avancées pour le traitement de l'information: Réseaux de neurones, logique floue, algorithmes génétiques. 2nd ed. Toulouse: Cépaduès-Ed., 2002.
Neural Information Processing Systems Conference. Proceedings of the 2003 conference. Cambridge, MA: MIT, 2004.
Heaton, Jeff. Introduction to neural networks for C#. 2nd ed. St. Louis: Heaton Research Inc., 2008.
International Conference on Neural Information Processing (3rd 1996 Hong Kong). Progress in neural information processing: ICONIP'96: proceedings of the International Conference on Neural Information Processing, Hong Kong, 24-27 September 1996. New York: Springer, 1996.
Taylor, John Gerald, ed. Neural networks and their applications. Chichester: UNICOM, 1996.
Rojas, Raúl. Neural networks: A systematic introduction. Berlin: Springer-Verlag, 1996.
Book chapters on the topic "Réseaux neuronaux à convolution"
ZHANG, Hanwei, Teddy FURON, Laurent AMSALEG, and Yannis AVRITHIS. "Attaques et défenses de réseaux de neurones profonds : le cas de la classification d’images." In Sécurité multimédia 1, 51–85. ISTE Group, 2021. http://dx.doi.org/10.51926/iste.9026.ch2.
Lévy, Jean-Claude S. "4 - Complexité et désordre des structures magnétiques, application aux réseaux neuronaux." In Complexité et désordre, 45–62. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-1961-4-005.
Lévy, Jean-Claude S. "4 - Complexité et désordre des structures magnétiques, application aux réseaux neuronaux." In Complexité et désordre, 45–62. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-1961-4.c005.
Lévy, Jean-Claude S. "4 - Complexité et désordre des structures magnétiques, application aux réseaux neuronaux." In Complexité et désordre, 45–62. EDP Sciences, 2020. https://doi.org/10.1051/978-2-7598-1777-1.c005.
BELMONTE, Romain, Pierre TIRILLY, Ioan Marius BILASCO, Nacim IHADDADENE, and Chaabane DJERABA. "Détection de points de repères faciaux par modélisation spatio-temporelle." In Analyse faciale en conditions non contrôlées, 105–49. ISTE Group, 2024. http://dx.doi.org/10.51926/iste.9111.ch3.
ATTO, Abdourrahmane M., Fatima KARBOU, Sophie GIFFARD-ROISIN, and Lionel BOMBRUN. "Clustering fonctionnel de séries d’images par entropies relatives." In Détection de changements et analyse des séries temporelles d’images 1, 121–38. ISTE Group, 2022. http://dx.doi.org/10.51926/iste.9056.ch4.
Conference papers on the topic "Réseaux neuronaux à convolution"
Mdhaffar, Salima, Antoine Laurent, and Yannick Estève. "Etude de performance des réseaux neuronaux récurrents dans le cadre de la campagne d'évaluation Multi-Genre Broadcast challenge 3 (MGB3)." In XXXIIe Journées d’Études sur la Parole. ISCA: ISCA, 2018. http://dx.doi.org/10.21437/jep.2018-20.
Reports on the topic "Réseaux neuronaux à convolution"
Djamai, N., R. A. Fernandes, L. Sun, F. Canisius, and G. Hong. Python version of Simplified Level 2 Prototype Processor for retrieving canopy biophysical variables from Sentinel-2 multispectral data. Natural Resources Canada/CMSS/Information Management, 2024. http://dx.doi.org/10.4095/p8stuehwyc.