Ready-made bibliography on the topic "L'apprentissage profond"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "L'apprentissage profond".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in ".pdf" format and read the work's abstract online, if the corresponding parameters are available in the metadata.
Journal articles on the topic "L'apprentissage profond"
Blondin, Michel. "Vie urbaine et animation sociale". Recherches sociographiques 9, no. 1-2 (April 12, 2005): 111–19. http://dx.doi.org/10.7202/055396ar.
Fleck, Stéphanie, and Luc Massou. "Le numérique pour l’apprentissage collaboratif : nouvelles interfaces, nouvelles interactions". Médiations et médiatisations, no. 5 (January 29, 2021): 3–10. http://dx.doi.org/10.52358/mm.vi5.191.
ADAMCZYK, Willian, Philipp EHRL and Leonardo MONASTERIO. "Compétences et transitions professionnelles au Brésil". Revue internationale du Travail, May 10, 2024. http://dx.doi.org/10.1111/ilrf.12309.
Nunes, Clarice. "“SOUVENIR DE CLASSE”: MEMÓRIAS E NARRATIVAS A PARTIR DO SENSÍVEL". RevistAleph, no. 20 (December 1, 2013). http://dx.doi.org/10.22409/revistaleph.v0i20.24930.
Doctoral dissertations on the topic "L'apprentissage profond"
Millan, Mégane. "L'apprentissage profond pour l'évaluation et le retour d'information lors de l'apprentissage de gestes". Thesis, Sorbonne université, 2020. http://www.theses.fr/2020SORUS057.
Learning a new sport or manual trade is complex: many gestures have to be assimilated in order to reach a good level of skill. Learning these gestures cannot be done alone, however, since the gesture's execution must be observed by an expert eye that can indicate corrections for improvement. Experts, whether in sports or in manual trades, are not always available to analyze and evaluate a novice's gesture. To help experts in this task of analysis, it is possible to develop virtual coaches. Depending on the field, the virtual coach will have more or fewer skills, but an evaluation according to precise criteria is always mandatory. Providing feedback on mistakes is also essential for a novice's learning. In this thesis, different solutions for developing the most effective virtual coaches are proposed. First of all, it is necessary to evaluate the gestures. From this point of view, a first part consisted in understanding the stakes of automatic gesture analysis, in order to develop an automatic evaluation algorithm that is as efficient as possible. Two algorithms for automatic quality evaluation are then proposed. These two algorithms, based on deep learning, were tested on two different gesture databases in order to evaluate their genericity. Once the evaluation has been carried out, relevant feedback on the learner's errors must be provided. To maintain continuity in the work carried out, this feedback is also based on neural networks and deep learning. A method was developed based on neural-network explainability techniques: it traces back to the moments of the gesture when, according to the evaluation model, errors were made. Finally, coupled with semantic segmentation, this method makes it possible to indicate to learners which part of the gesture was badly performed, and to provide them with statistics and a learning curve.
Martinez, Coralie. "Classification précoce de séquences temporelles par de l'apprentissage par renforcement profond". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT123.
Early classification (EC) of time series is a recent research topic in the field of sequential data analysis. It consists in assigning a label to data that is collected sequentially, with new data points arriving over time, where the prediction has to be made using as few data points as possible. The EC problem is of paramount importance for supporting decision makers in many real-world applications, ranging from process control to fraud detection. It is particularly interesting for applications concerned with the costs induced by the acquisition of data points, or for applications that seek rapid label prediction in order to take early actions. This is for example the case in the field of health, where a medical diagnosis must be provided as soon as possible from the sequence of medical observations collected over time. Another example is predictive maintenance, with the objective of anticipating the breakdown of a machine from its sensor signals. In this doctoral work, we developed a new approach to this problem based on the formulation of a sequential decision-making problem: the EC model has to decide between classifying an incomplete sequence or delaying the prediction to collect additional data points. Specifically, we described this problem as a Partially Observable Markov Decision Process, noted EC-POMDP. The approach consists in training an EC agent with Deep Reinforcement Learning (DRL) in an environment characterized by the EC-POMDP. The main motivation for this approach was to offer an end-to-end model for EC able to simultaneously learn optimal patterns in the sequences for classification and optimal strategic decisions for the time of prediction. The method also makes it possible to set the importance of time against classification accuracy in the definition of rewards, according to the application and its willingness to make this compromise.
In order to solve the EC-POMDP and model the policy of the EC agent, we applied an existing DRL algorithm, Double Deep Q-Network, whose general principle is to update the agent's policy during training episodes using a replay memory of past experiences. We showed that applying the original algorithm to the EC problem leads to imbalanced-memory issues which can weaken the training of the agent. Consequently, to cope with those issues and offer a more robust training, we adapted the algorithm to the specificities of the EC-POMDP and introduced strategies for memory management and episode management. In experiments, we showed that these contributions improve the performance of the agent over the original algorithm, and that we were able to train an EC agent that trades off speed against accuracy on each sequence individually. We were also able to train EC agents on public datasets for which we have no expertise, showing that the method is applicable to various domains. Finally, we proposed strategies to interpret the decisions of the agent and to validate or reject them. In experiments, we showed how these solutions can help gain insight into the agent's choice of action.
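The decision loop at the heart of the early-classification setting described above — emit a label now or wait for one more data point — can be sketched in a few lines. This is a minimal illustration under hypothetical names and numbers, not the thesis's EC-POMDP formulation or its Double Deep Q-Network training:

```python
# Minimal sketch of the early-classification decision loop: at each step
# the agent either emits a label or waits for the next data point.
# The toy classifier, confidence rule, and reward values are illustrative
# stand-ins, not the thesis's EC-POMDP formulation or DRL training.

def run_episode(sequence, true_label, classify_fn,
                confidence_threshold=0.6, delay_penalty=0.05):
    for t in range(1, len(sequence) + 1):
        label, confidence = classify_fn(sequence[:t])
        if confidence >= confidence_threshold or t == len(sequence):
            # The reward trades off correctness against how long we waited.
            reward = (1.0 if label == true_label else -1.0) - delay_penalty * t
            return label, t, reward

def toy_classifier(prefix):
    # Predict 1 if the running mean exceeds 0.5; confidence grows
    # with the number of observed points.
    mean = sum(prefix) / len(prefix)
    return (1 if mean > 0.5 else 0), min(1.0, 0.3 + 0.1 * len(prefix))

label, t, reward = run_episode([0.9, 0.8, 1.0, 0.7, 0.9], true_label=1,
                               classify_fn=toy_classifier)
print(label, t)  # the label is emitted before the full sequence is consumed
```

The `delay_penalty` term plays the role of the time-versus-accuracy compromise set in the reward definition; a DRL agent would learn when to stop rather than follow a fixed confidence threshold.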
Lelong, Thibault. "Reconnaissance des documents avec de l'apprentissage profond pour la réalité augmentée". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS017.
This doctoral project focuses on the identification of images and documents in augmented reality applications using markers, particularly when using cameras. The research is set in a technological context where interaction through augmented reality is essential in several domains, including industry, which require reliable identification methodologies. In an initial phase, the project assesses various identification and image-processing methodologies using a database specially designed to reflect the challenges of the industrial context. This allows an in-depth analysis of existing methodologies, revealing their potential and limitations in various application scenarios. Subsequently, the project proposes a document detection system aimed at enhancing existing solutions, optimized for environments such as web browsers. An image retrieval methodology is then introduced, relying on an analysis of the image in sub-parts to increase the accuracy of identification and avoid confusion between images. This approach allows for more precise and adaptive identification, particularly with respect to variations in the layout of the target image. Finally, in the context of collaborative work with the ARGO company, a real-time image tracking engine was developed, optimized for low-power devices and web environments. This ensures the deployment of augmented reality web applications and their operation on a wide range of devices, including those with limited processing capabilities. The works resulting from this doctoral project have been concretely applied by ARGO for commercial purposes, confirming the relevance and viability of the developed methodologies and solutions, and attesting to their significant contribution to the technological and industrial field of augmented reality.
Moreau, Thomas. "Représentations Convolutives Parcimonieuses -- application aux signaux physiologiques et interprétabilité de l'apprentissage profond". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLN054/document.
Convolutional representations extract recurrent patterns which lead to the discovery of local structures in a set of signals. They are well suited to analyzing physiological signals, which require interpretable representations in order to understand the relevant information. Moreover, these representations can be linked to deep learning models, as a way to bring interpretability to their internal representations. In this dissertation, we describe recent advances on both computational and theoretical aspects of these models. First, we show that Singular Spectrum Analysis can be used to compute convolutional representations. This representation is dense, and we describe an automated procedure to improve its interpretability. We also propose an asynchronous algorithm, called DICOD, based on greedy coordinate descent, to solve convolutional sparse coding for long signals. Our algorithm has super-linear acceleration. In a second part, we focus on the link between representations and neural networks. An extra training step for deep learning, called post-training, is introduced to boost the performance of the trained network by making sure the last layer is optimal. We then study the mechanisms that make it possible to accelerate sparse coding algorithms with neural networks, and show that they are linked to a factorization of the Gram matrix of the dictionary. Finally, we illustrate the relevance of convolutional representations for physiological signals. Convolutional dictionary learning is used to summarize human walk signals, and Singular Spectrum Analysis is used to remove gaze movement from young infants' oculometric recordings.
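The sparse coding iterations whose acceleration is studied above are ISTA-style updates: a linear gradient step followed by soft-thresholding, which, unrolled layer by layer, gives a network-like computation. A pure-Python sketch on a tiny orthonormal dictionary — illustrative only, not the DICOD algorithm or its convolutional setting:

```python
# The linear-step-plus-soft-thresholding iterations (ISTA) discussed above,
# in pure Python on a tiny orthonormal dictionary. Illustrative sketch only:
# this is plain ISTA, not the DICOD algorithm or its unrolled network form.

def soft_threshold(v, t):
    # Proximal operator of the l1 norm, applied coordinate-wise.
    return [max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

def ista(x, D, lam=0.1, step=0.5, n_iter=100):
    """Minimize 0.5*||x - D z||^2 + lam*||z||_1 by iterative thresholding."""
    m, n = len(D), len(D[0])
    z = [0.0] * n
    for _ in range(n_iter):
        Dz = [sum(D[i][j] * z[j] for j in range(n)) for i in range(m)]
        grad = [sum(D[i][j] * (Dz[i] - x[i]) for i in range(m)) for j in range(n)]
        z = soft_threshold([z[j] - step * grad[j] for j in range(n)], step * lam)
    return z

D = [[1.0, 0.0], [0.0, 1.0]]  # identity dictionary keeps the example transparent
z = ista([1.0, 0.05], D)
print(z)  # the large coefficient is shrunk toward 0.9, the small one zeroed
```

The gradient step involves D^T D, the Gram matrix of the dictionary; factorizing it is what the thesis links to the acceleration obtained with neural networks.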
Phan, Thi Hai Hong. "Reconnaissance d'actions humaines dans des vidéos avec l'apprentissage automatique". Thesis, Cergy-Pontoise, 2019. http://www.theses.fr/2019CERG1038.
In recent years, human action recognition (HAR) has attracted research attention thanks to its various applications such as intelligent surveillance systems, video indexing, human activity analysis, and human-computer interaction. The typical issues researchers face include the complexity of human motions, spatial and temporal variations, clutter, occlusion, and changes in lighting conditions. This thesis focuses on automatically recognizing ongoing human actions in a given video, addressed with both shallow learning and deep learning approaches.
First, we began with traditional shallow learning approaches based on hand-crafted features by introducing a novel feature, the Motion of Oriented Magnitudes Patterns (MOMP) descriptor. We then incorporated this discriminative descriptor into simple yet powerful representation techniques such as Bag of Visual Words, Vector of Locally Aggregated Descriptors (VLAD), and Fisher Vector to better represent actions. PCA (Principal Component Analysis) and feature selection (statistical dependency, mutual information) are also applied to find the best subset of features, improving performance and decreasing the computational expense. The proposed method obtained state-of-the-art results on several common benchmarks.
Recent deep learning approaches require intensive computation and large memory usage, and are therefore difficult to use and deploy on systems with limited resources. In the second part of this thesis, we present a novel, efficient algorithm to compress Convolutional Neural Network models in order to decrease both the computational cost and the run-time memory footprint. We measure the redundancy of parameters based on their relationships using information-theoretic criteria, and then prune the less important ones. The proposed method significantly reduces the model sizes of different networks such as AlexNet and ResNet by up to 70% without performance loss on the large-scale image classification task.
The traditional approach with the proposed descriptor achieved strong performance for human action recognition, but only on small datasets. To improve performance on large-scale datasets, the last part of this thesis therefore exploits deep learning techniques to classify actions. We introduce the concept of the MOMP image as an input layer of CNNs and incorporate it into deep neural networks. We then apply our network compression algorithm to accelerate and improve the performance of the system. The proposed method reduces the model size, decreases over-fitting, and thus increases the overall performance of the CNN on large-scale action datasets. Throughout the thesis, we have shown that our algorithms obtain good performance in comparison to the state of the art on challenging action datasets (Weizmann, KTH, UCF Sports, UCF-101 and HMDB51) with low resource requirements.
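Pruning of the kind described above scores each filter and keeps only the strongest fraction. In the sketch below, a simple L1-norm importance score stands in for the thesis's information-theoretic redundancy criteria; all names and numbers are illustrative:

```python
# Sketch of filter pruning as described above: score each filter, keep the
# strongest fraction. A simple L1-norm score stands in for the thesis's
# information-theoretic redundancy criteria; all numbers are illustrative.

def prune_filters(filters, keep_ratio=0.5):
    """filters: list of weight lists; returns sorted indices of kept filters."""
    scores = [sum(abs(w) for w in f) for f in filters]  # L1 norm per filter
    order = sorted(range(len(filters)), key=lambda i: scores[i], reverse=True)
    n_keep = max(1, int(len(filters) * keep_ratio))
    return sorted(order[:n_keep])

filters = [[0.1, -0.05], [1.2, -0.9], [0.02, 0.01], [0.7, 0.6]]
print(prune_filters(filters, keep_ratio=0.5))  # -> [1, 3]
```

In a real network, the kept indices would then be used to rebuild smaller convolutional layers before fine-tuning.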
Poirier, Jasmine. "Segmentation de neurones pour imagerie calcique du poisson zèbre : des méthodes classiques à l'apprentissage profond". Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/36452.
The experimental study of the resilience of a complex network relies on our capacity to reproduce its structural and functional organization. Having chosen the neuronal network of the larval zebrafish as our animal model for its transparency, we can use techniques such as light-sheet microscopy combined with calcium imaging to image its whole brain more than twice every second, with cellular spatial resolution. With both these spatial and temporal resolutions, we have to process and segment a great quantity of data, which cannot be done manually. We thus resort to numerical techniques to segment the neurons and extract their activity. Three segmentation techniques have been compared: adaptive threshold (AT), random decision forests (ML), and a pretrained deep convolutional neural network. While the adaptive threshold technique allows rapid and nearly error-free identification of the most active neurons, it generates many more false negatives than the two other methods. Conversely, the deep convolutional neural network identifies more neurons but generates more false positives, which can be filtered later in the process. Using the F1 score as our comparison metric, the neural network (F1 = 59.2%) outperforms the adaptive threshold (F1 = 25.4%) and random decision forests (F1 = 48.8%). Even though these performances seem low compared to results generally reported for deep neural networks, we are competitive with the best technique known to date for neuron segmentation, 3dCNN (F1 = 65.9%), an algorithm presented in the Neurofinder challenge.
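The F1 scores quoted above are the harmonic mean of precision and recall over detected neurons. A quick sketch of the computation, with hypothetical true-positive, false-positive, and false-negative counts chosen only to make it concrete:

```python
# F1 score as used in the comparison above: the harmonic mean of precision
# and recall over detected neurons. The counts below are hypothetical,
# chosen only to make the computation concrete.

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A method that detects most neurons but with some false positives:
print(round(f1_score(tp=80, fp=30, fn=20), 3))  # -> 0.762
```

This explains the ordering reported above: a method with many false negatives (the adaptive threshold) is penalized even when its detections are almost all correct.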
Droniou, Alain. "Apprentissage de représentations et robotique développementale : quelques apports de l'apprentissage profond pour la robotique autonome". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066056/document.
This thesis studies the use of deep neural networks to learn high-level representations from raw inputs on robots, based on the "manifold hypothesis".
Harbaoui, Nesrine. "Diagnostic adaptatif à l'environnement de navigation : apport de l'apprentissage profond pour une localisation sûre et précise". Electronic Thesis or Diss., Université de Lille (2022-....), 2022. http://www.theses.fr/2022ULILB041.
For an autonomous terrestrial transport system, the ability to determine its position is essential so that other functions, such as control or perception, can operate safely. The criticality of these functions thus generates important requirements in terms of safety (integrity), availability, accuracy and precision. For land vehicles, meeting these requirements depends on various parameters such as vehicle dynamics, weather conditions, or the navigation context, which includes both the operational environment and the behavior of the host vehicle or user. All of these circumstances can hinder the reception of Global Navigation Satellite System (GNSS) signals, since the environment determines the type and quality of electromagnetic signals available for positioning. Although many navigation and positioning techniques have been developed, none is capable of providing a reliable and accurate position in all contexts. Therefore, in order to deploy a localization function capable of operating in different contexts, based on low-cost sensors, mainly GNSS and an inertial measurement unit (IMU), it is necessary, from the design phase, to develop strategies that resolve both the antagonism of certain requirements and the adaptation to a changing environment and dynamics. In this context, this thesis proposes a diagnostic layer that adapts, through deep learning methods, to changes in context and adjusts the trade-off between functional requirements. This layer is integrated into a fault-tolerant data fusion framework through an informational divergence, the α-Rényi divergence, known for generalizing other divergences such as the Kullback-Leibler divergence and the Bhattacharyya distance.
In order to detect and isolate faults based on the generation of residuals, we propose selecting the appropriate residual for each situation by fixing the value of the parameter α using artificial intelligence techniques, so as to increase the detectability of faults. To increase the availability of the system while maintaining an acceptable level of operational safety, a context-sensitive threshold that adjusts the trade-off between the probability of false alarm and the probability of missed detection is proposed. To test and validate the proposed approaches, two types of data were used: real data from the PRETIL platform of the CRIStAL laboratory, and simulated data from the Stella NGC simulator as part of the ANR LOCSP project.
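The α-Rényi divergence mentioned above is a one-parameter family that recovers several classical divergences at particular values of α. A small numerical sketch on toy discrete distributions (the thesis applies it to fusion residuals, not to these hand-picked vectors):

```python
import math

# α-Rényi divergence between discrete distributions, the family used above
# to unify several residuals:  D_α(P‖Q) = ln(Σ_i p_i^α q_i^(1−α)) / (α − 1).
# Toy distributions only; the thesis applies this to data-fusion residuals.

def renyi_divergence(p, q, alpha):
    if alpha == 1.0:  # limit case: the Kullback-Leibler divergence
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q))
    return math.log(s) / (alpha - 1)

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]

# α = 0.5 yields twice the Bhattacharyya distance; α → 1 recovers KL.
bhattacharyya = -math.log(sum(math.sqrt(pi * qi) for pi, qi in zip(p, q)))
print(renyi_divergence(p, q, 0.5), 2 * bhattacharyya)  # equal
print(renyi_divergence(p, q, 1.0))                     # KL divergence
```

Tuning α, as the thesis proposes, thus amounts to moving continuously between residuals with different sensitivities to the tails of the distributions.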
Bourgeais, Victoria. "Interprétation de l'apprentissage profond pour la prédiction de phénotypes à partir de données d'expression de gènes". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG069.
Deep learning has been a significant advance in artificial intelligence in recent years. Its main domains of application are image analysis and natural language processing. One of the major future challenges of this approach is its application to precision medicine, a new form of medicine that will make it possible to personalize each stage of a patient's care pathway according to his or her characteristics, in particular molecular characteristics such as gene expression data, which inform about the cellular state of a patient. However, deep learning models are considered black boxes, as their predictions are not accompanied by an explanation, limiting their use in the clinic. The General Data Protection Regulation (GDPR), recently adopted by the European Union, requires that machine learning algorithms be able to explain their decisions to users. There is thus a real need to make neural networks more interpretable, particularly in the medical field, for several reasons. Understanding why a phenotype has been predicted is necessary to ensure that the prediction is based on reliable representations of the patients rather than on irrelevant artifacts present in the training data. Regardless of the model's effectiveness, this affects any end user's decisions and confidence in the model. Finally, a neural network performing well for the prediction of a certain phenotype may have identified a signature in the data that could open up new research avenues.
In the current state of the art, two general approaches exist for interpreting these black boxes: creating inherently interpretable models, or using a third-party method dedicated to interpreting the trained neural network. Whichever approach is chosen, the explanation provided generally consists of identifying the input variables and neurons important for the prediction. However, in the context of phenotype prediction from gene expression, these approaches generally do not provide an understandable explanation, as such data are not directly comprehensible by humans. We therefore propose novel deep learning methods that are interpretable by design. The architecture of these methods is defined from one or several knowledge databases: a neuron represents a biological object, and the connections between neurons correspond to the relations between biological objects. Three methods have been developed, listed below in chronological order. Deep GONet is based on a multilayer perceptron constrained by a biological knowledge database, the Gene Ontology (GO), through an adapted regularization term; the explanations of its predictions are provided by an a posteriori interpretation method. GraphGONet takes advantage of both a multilayer perceptron and a graph neural network to deal with the semantic richness of GO knowledge, and can generate explanations automatically. BioHAN is built solely on a graph neural network and can easily integrate different knowledge databases and their semantics; interpretation is facilitated by an attention mechanism that enables the model to focus on the most informative neurons. These methods have been evaluated on diagnostic tasks using real gene expression datasets and are competitive with state-of-the-art machine learning methods. Our models provide intelligible explanations composed of the most contributive neurons and their associated biological concepts, allowing experts to use our tools in a medical setting.
Books on the topic "L'apprentissage profond"
The Effective Teacher's Guide to Moderate, Severe, and Profound Learning Difficulties: Practical Strategies. London: Routledge, 2006.
Russell, Andreas. Intelligence Artificielle et l'Apprentissage Profond. Independently Published, 2018.
Deep Learning: Une Introduction Aux Principes Fondamentaux de l'Apprentissage Profond à l'Aide de Python. Independently Published, 2018.
Children with Profound/Complex Physical and Learning Difficulties. NASEN, 1993.
Book chapters on the topic "L'apprentissage profond"
ATIEH, Mirna, Omar MOHAMMAD, Ali SABRA and Nehme RMAYTI. "IdO, apprentissage profond et cybersécurité dans la maison connectée : une étude". In Cybersécurité des maisons intelligentes, 215–56. ISTE Group, 2024. http://dx.doi.org/10.51926/iste.9086.ch6.
Alegria, J., J. Marin, S. Carrillo and Ph. Mousty. "Les premiers pas dans l’acquisition de l’orthographe en fonction du caractère profond ou superficiel du système alphabétique : comparaison entre le français et l’espagnol". In L'apprentissage de la lecture, 51–67. Presses universitaires de Rennes, 2003. http://dx.doi.org/10.4000/books.pur.48418.
MOLINIER, Matthieu, Jukka MIETTINEN, Dino IENCO, Shi QIU and Zhe ZHU. "Analyse de séries chronologiques d’images satellitaires optiques pour des applications environnementales". In Détection de changements et analyse des séries temporelles d’images 2, 125–74. ISTE Group, 2024. http://dx.doi.org/10.51926/iste.9057.ch4.