Academic literature on the topic "Recherche Automatique d'Architecture Neuronale"
Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Recherche Automatique d'Architecture Neuronale".
Journal articles on the topic "Recherche Automatique d'Architecture Neuronale"
Hansen, Damien, Emmanuelle Esperança-Rodier, Hervé Blanchon, and Valérie Bada. "La traduction littéraire automatique : Adapter la machine à la traduction humaine individualisée". Journal of Data Mining & Digital Humanities, Towards robotic translation?, V. The contribution of... (December 9, 2022). http://dx.doi.org/10.46298/jdmdh.9114.
Theses on the topic "Recherche Automatique d'Architecture Neuronale"
Pouy, Léo. "OpenNas : un cadre adaptable de recherche automatique d'architecture neuronale". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG089.
When creating a neural network, the "fine-tuning" stage is essential. During this fine-tuning, the neural network developer must adjust the hyperparameters and the architecture of the network so that it meets the targets. This is a time-consuming and tedious phase that requires experience on the part of the developer. To make it easier to create neural networks, a discipline called Automatic Machine Learning (Auto-ML) seeks to automate the creation of Machine Learning models. This thesis is part of this Auto-ML approach and proposes a method for creating and optimizing neural network architectures (Neural Architecture Search, NAS). To this end, a new search space based on nested blocks has been formalized. This space makes it possible to create a neural network from elementary blocks connected in series or in parallel to form compound blocks, which can themselves be connected to form an even more complex network. The advantage of this search space is that it can easily be customized to bias the NAS towards specific architectures (VGG, Inception, ResNet, etc.) and to control the optimization time. Moreover, it is not tied to any particular optimization algorithm. In this thesis, the formalization of the search space is first described, along with encoding techniques that represent a network from the search space as a natural number (or a list of natural numbers). Optimization strategies applicable to this search space are then proposed. Finally, neural architecture search experiments on different datasets and with different objectives, carried out using the developed tool (named OpenNas), are presented.
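To make the nested-block idea in this abstract concrete, here is a minimal sketch of how elementary and compound blocks could be nested and encoded as a list of natural numbers. It is only an illustration under assumed conventions (the primitive operations, the leaf/compound markers, and the pre-order encoding are all hypothetical), not the actual OpenNas implementation.

```python
# Hypothetical sketch of a nested-block search space encoded as integers.
from dataclasses import dataclass
from typing import List, Union

PRIMITIVES = ["conv3x3", "conv5x5", "maxpool", "identity"]  # assumed operation set

@dataclass
class Leaf:
    op: int  # index into PRIMITIVES (an elementary block)

@dataclass
class Block:
    mode: int                  # 0 = serial chain, 1 = parallel branches
    children: List["Node"]     # elementary or compound sub-blocks

Node = Union[Leaf, Block]

def encode(node: Node) -> List[int]:
    """Flatten a nested block into a list of natural numbers (pre-order)."""
    if isinstance(node, Leaf):
        return [0, node.op]                      # 0 marks an elementary block
    code = [1, node.mode, len(node.children)]    # 1 marks a compound block
    for child in node.children:
        code += encode(child)
    return code

# A small ResNet-like motif: two serial 3x3 convs in parallel with a skip path.
net = Block(mode=1, children=[
    Block(mode=0, children=[Leaf(0), Leaf(0)]),
    Leaf(3),
])
print(encode(net))  # [1, 1, 2, 1, 0, 2, 0, 0, 0, 0, 0, 3]
```

Decoding would simply walk the list in the same pre-order fashion, which is what makes such integer encodings convenient for black-box optimizers that manipulate vectors of natural numbers.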
Heuillet, Alexandre. "Exploring deep neural network differentiable architecture design". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG069.
Artificial Intelligence (AI) has gained significant popularity in recent years, primarily due to its successful applications in various domains, including textual data analysis, computer vision, and audio processing. The resurgence of deep learning techniques has played a central role in this success. The groundbreaking paper by Krizhevsky et al., AlexNet, narrowed the gap between human and machine performance in image classification tasks. Subsequent papers such as Xception and ResNet have further solidified deep learning as a leading technique, opening new horizons for the AI community. The success of deep learning lies in its architecture, which is manually designed with expert knowledge and empirical validation. However, these architectures lack the certainty of an optimal solution. To address this issue, recent papers introduced the concept of Neural Architecture Search (NAS), enabling the learning of deep architectures. However, most initial approaches focused on large architectures with specific targets (e.g., supervised learning) and relied on computationally expensive optimization techniques such as reinforcement learning and evolutionary algorithms. In this thesis, we further investigate this idea by exploring automatic deep architecture design, with a particular emphasis on differentiable NAS (DNAS), which represents the current trend in NAS due to its computational efficiency. While our primary focus is on Convolutional Neural Networks (CNNs), we also explore Vision Transformers (ViTs) with the goal of designing cost-effective architectures suitable for real-time applications.
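As a rough illustration of the differentiable NAS (DNAS) principle this abstract builds on, the DARTS-style sketch below relaxes a discrete choice of operation into a softmax-weighted mixture, so that architecture parameters can be learned by gradient descent alongside the network weights. The operation set, channel count, and class name are illustrative assumptions, not code from the thesis.

```python
# Minimal DARTS-style mixed operation: a softmax over architecture weights
# makes the choice of operation differentiable. Purely illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        # One architecture parameter per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        # Weighted sum of all candidate operations (continuous relaxation).
        return sum(w * op(x) for w, op in zip(weights, self.ops))

x = torch.randn(1, 16, 32, 32)
out = MixedOp(16)(x)   # after search, one would keep the op with the largest alpha
print(out.shape)       # torch.Size([1, 16, 32, 32])
```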
Veniat, Tom. "Neural Architecture Search under Budget Constraints". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS443.
The recent increase in computation power and the ever-growing amount of data available ignited the rise in popularity of deep learning. However, the expertise, the amount of data, and the computing power necessary to build such algorithms, as well as the memory footprint and the inference latency of the resulting system, are all obstacles preventing the widespread use of these methods. In this thesis, we propose several methods that take a step towards a more efficient and automated procedure for building deep learning models. First, we focus on learning an efficient architecture for image processing problems. We propose a new model in which the architecture learning procedure can be guided by specifying a fixed budget and cost function. Then, we consider the problem of sequence classification, where a model can be even more efficient by dynamically adapting its size to the complexity of the incoming signal. We show that both approaches result in significant budget savings. Finally, we tackle the efficiency problem through the lens of transfer learning, arguing that a learning procedure can be made even more efficient if, instead of starting tabula rasa, it builds on knowledge acquired during previous experiences. We explore modular architectures in the continual learning scenario and present a new benchmark allowing a fine-grained evaluation of different kinds of transfer.
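One simple way to picture "guiding the architecture learning procedure by specifying a fixed budget and cost function", as the abstract puts it, is a penalized objective that adds a cost term only when the candidate architecture exceeds the budget. The hinge-style penalty and the numbers below are assumptions for illustration, not the thesis's actual formulation.

```python
# Illustrative budget-penalized objective: task loss plus a hinge penalty that
# activates only when the architecture's expected cost exceeds the budget.
def budgeted_loss(task_loss: float, expected_cost: float,
                  budget: float, lam: float = 1.0) -> float:
    over_budget = max(0.0, expected_cost - budget)   # how far past the budget
    return task_loss + lam * over_budget

# Example: a network expected to use 1.5 GFLOPs against a 1.0 GFLOP budget.
print(budgeted_loss(task_loss=0.75, expected_cost=1.5, budget=1.0, lam=0.5))
# 0.75 + 0.5 * 0.5 = 1.0
```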
Pham, Huy-Hieu. "Architectures d'apprentissage profond pour la reconnaissance d'actions humaines dans des séquences vidéo RGB-D monoculaires : application à la surveillance dans les transports publics". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30145.
This thesis deals with automatic recognition of human actions from monocular RGB-D video sequences. Our main goal is to recognize which human actions occur in unknown videos. This is a challenging task due to a number of obstacles caused by the variability of the acquisition conditions, including the lighting, the position, the orientation, and the field of view of the camera, as well as the variability of actions, which can be performed differently, notably in terms of speed. To tackle these problems, we first review and evaluate the most prominent state-of-the-art techniques to identify the current state of human action recognition in videos. We then propose a new approach for skeleton-based action recognition using Deep Neural Networks (DNNs). Two key questions are addressed. First, how to efficiently represent the spatio-temporal patterns of skeletal data so as to fully exploit the capacity of Deep Convolutional Neural Networks (D-CNNs) to learn high-level representations. Second, how to design a powerful D-CNN architecture that is able to learn discriminative features from the proposed representation for the classification task. As a result, we introduce two new 3D motion representations, SPMF (Skeleton Posture-Motion Feature) and Enhanced-SPMF, that encode skeleton poses and their motions into color images. For learning and classification, we design and train different D-CNN architectures based on the Residual Network (ResNet), Inception-ResNet-v2, the Densely Connected Convolutional Network (DenseNet), and Efficient Neural Architecture Search (ENAS) to extract robust features from the color-coded images and classify them. Experimental results on various public and challenging human action recognition datasets (MSR Action3D, Kinect Activity Recognition Dataset, SBU Kinect Interaction, and NTU-RGB+D) show that the proposed approach outperforms the current state-of-the-art. We also investigated the problem of 3D human pose estimation from monocular RGB video sequences and exploited the estimated 3D poses for the recognition task. Specifically, a deep learning-based model called OpenPose is deployed to detect 2D human poses, and a DNN is then proposed and trained to learn a 2D-to-3D mapping of the detected 2D keypoints into 3D poses. Our experiments on the Human3.6M dataset verified the effectiveness of the proposed method. These results open a new research direction for human action recognition from 3D skeletal data when depth cameras are unavailable. In addition, we collect and introduce in this thesis the CEMEST database, a new RGB-D dataset depicting passengers' behaviors in public transport. It consists of 203 untrimmed real-world surveillance videos of realistic "normal" and "abnormal" events. We achieve promising results on CEMEST with the support of data augmentation and transfer learning techniques, which enables the construction of real-world deep-learning-based applications for enhancing public transportation management services.
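To give a sense of the skeleton-to-image encodings (SPMF, Enhanced-SPMF) mentioned in this abstract, the sketch below shows the generic principle of mapping normalized 3D joint coordinates onto the color channels of an image, one row per frame and one column per joint. The layout and normalization are assumptions illustrating the idea, not the exact SPMF encoding.

```python
# Rough sketch of encoding a skeleton sequence as a color image:
# x, y, z coordinates are scaled to [0, 255] and stored as R, G, B channels.
import numpy as np

def skeleton_to_image(seq: np.ndarray) -> np.ndarray:
    """seq: (frames, joints, 3) array of 3D joint coordinates."""
    lo = seq.min(axis=(0, 1), keepdims=True)
    hi = seq.max(axis=(0, 1), keepdims=True)
    norm = (seq - lo) / (hi - lo + 1e-8)      # scale each coordinate channel to [0, 1]
    return (norm * 255).astype(np.uint8)      # (frames, joints, 3) uint8 RGB image

demo = np.random.rand(60, 25, 3)              # 60 frames, 25 joints (NTU-style skeleton)
img = skeleton_to_image(demo)
print(img.shape, img.dtype)                   # (60, 25, 3) uint8
```

Such an image can then be fed to any standard image classifier (ResNet, DenseNet, or an ENAS-found architecture, as the abstract lists) without architectural changes.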
Book chapters on the topic "Recherche Automatique d'Architecture Neuronale"
Cárdenas, Janina Di Pierro, and Renata De Rugeriis Juárez. "Inteligencia artificial y SoftPower de la traducción asistida y automática: perspectivas en el proceso de enseñanza-aprendizaje de idiomas". In Traduction automatique et usages sociaux des langues. Quelle conséquences pour la diversité linguistique ?, 83–99. Observatoire européen du plurilinguisme, 2021. http://dx.doi.org/10.3917/oep.beacc.2021.01.0083.
Ballier, Nicolas, and Maria Zimina-Poirot. "Littératie de la traduction automatique (TA) neuronale et traduction spécialisée : s’approprier les outils de la TA au travers de projets de recherche interdisciplinaires". In Human Translation and Natural Language Processing Towards a New Consensus? Venice: Fondazione Università Ca’ Foscari, 2023. http://dx.doi.org/10.30687/978-88-6969-762-3/011.
Larsonneur, Claire. "Alexa, Siri : la diversité linguistique au prisme des agents conversationnels". In Traduction automatique et usages sociaux des langues. Quelle conséquences pour la diversité linguistique ?, 179–97. Observatoire européen du plurilinguisme, 2021. http://dx.doi.org/10.3917/oep.beacc.2021.01.0179.