Academic literature on the topic 'Apprentissage profond par renforcement'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Apprentissage profond par renforcement.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Apprentissage profond par renforcement"
Griffon, L., M. Chennaoui, D. Leger, and M. Strauss. "Apprentissage par renforcement dans la narcolepsie de type 1." Médecine du Sommeil 15, no. 1 (March 2018): 60. http://dx.doi.org/10.1016/j.msom.2018.01.164.
Fillières-Riveau, Gauthier, Jean-Marie Favreau, Vincent Barra, and Guillaume Touya. "Génération de cartes tactiles photoréalistes pour personnes déficientes visuelles par apprentissage profond." Revue Internationale de Géomatique 30, no. 1-2 (January 2020): 105–26. http://dx.doi.org/10.3166/rig.2020.00104.
Garcia, Pascal. "Exploration guidée en apprentissage par renforcement. Connaissances a priori et relaxation de contraintes." Revue d'intelligence artificielle 20, no. 2-3 (June 1, 2006): 235–75. http://dx.doi.org/10.3166/ria.20.235-275.
Degris, Thomas, Olivier Sigaud, and Pierre-Henri Wuillemin. "Apprentissage par renforcement factorisé pour le comportement de personnages non joueurs." Revue d'intelligence artificielle 23, no. 2-3 (May 13, 2009): 221–51. http://dx.doi.org/10.3166/ria.23.221-251.
Host, Shirley, and Nicolas Sabouret. "Apprentissage par renforcement d'actes de communication dans un système multi-agent." Revue d'intelligence artificielle 24, no. 2 (April 17, 2010): 159–88. http://dx.doi.org/10.3166/ria.24.159-188.
Pouliquen, Geoffroy, and Catherine Oppenheim. "Débruitage par apprentissage profond: impact sur les biomarqueurs quantitatifs des tumeurs cérébrales." Journal of Neuroradiology 49, no. 2 (March 2022): 136. http://dx.doi.org/10.1016/j.neurad.2022.01.040.
Caccamo, Emmanuelle, and Fabien Richert. "Les procédés algorithmiques au prisme des approches sémiotiques." Cygne noir, no. 7 (June 1, 2022): 1–16. http://dx.doi.org/10.7202/1089327ar.
Choplin, Arnaud, and Julie Laporte. "Comparaison de deux stratégies pédagogiques dans l’apprentissage du toucher thérapeutique." Revue des sciences de l’éducation 42, no. 3 (June 7, 2017): 187–210. http://dx.doi.org/10.7202/1040089ar.
Altintas, Gulsun, and Isabelle Royer. "Renforcement de la résilience par un apprentissage post-crise : une étude longitudinale sur deux périodes de turbulence." M@n@gement 12, no. 4 (2009): 266. http://dx.doi.org/10.3917/mana.124.0266.
Dutech, Alain, and Manuel Samuelides. "Apprentissage par renforcement pour les processus décisionnels de Markov partiellement observés : Apprendre une extension sélective du passé." Revue d'intelligence artificielle 17, no. 4 (August 1, 2003): 559–89. http://dx.doi.org/10.3166/ria.17.559-589.
Full textDissertations / Theses on the topic "Apprentissage profond par renforcement"
Zimmer, Matthieu. "Apprentissage par renforcement développemental." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0008/document.
Reinforcement learning allows an agent to learn a behavior that has never been previously defined by humans. The agent discovers the environment and the consequences of its actions through interaction: it learns from its own experience, without pre-established knowledge of the goals or effects of its actions. This thesis tackles how deep learning can help reinforcement learning handle continuous spaces and environments with many degrees of freedom in order to solve problems closer to reality. Indeed, neural networks scale well and have good representational power: they make it possible to approximate functions on continuous spaces and allow a developmental approach, because they require little a priori knowledge of the domain. We seek to reduce the amount of interaction the agent needs to achieve acceptable behavior. To do so, we proposed the Neural Fitted Actor-Critic framework, which defines several data-efficient actor-critic algorithms. We examine how the agent can fully exploit the transitions generated by previous behaviors by integrating off-policy data into the proposed framework. Finally, we study how the agent can learn faster by taking advantage of the development of its body, in particular by proceeding with a gradual increase in the dimensionality of its sensorimotor space.
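As a rough illustration of the actor-critic family this abstract refers to, the sketch below shows one deterministic actor-critic update on a batch of transitions. It is a generic, assumption-level example in PyTorch (network sizes, dimensions and learning rates are invented) and not code from the thesis.

import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2                                  # hypothetical dimensions
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                      nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(),
                       nn.Linear(64, 1))
opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

def update(s, a, r, s_next, done):
    """One actor-critic step on a batch of transitions (s, a, r, s', done)."""
    with torch.no_grad():                                # bootstrapped target value
        q_next = critic(torch.cat([s_next, actor(s_next)], dim=-1)).squeeze(-1)
        target = r + gamma * (1.0 - done) * q_next
    q = critic(torch.cat([s, a], dim=-1)).squeeze(-1)
    critic_loss = (q - target).pow(2).mean()             # temporal-difference error
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()
    actor_loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()   # push actions toward higher value
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()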
Martinez, Coralie. "Classification précoce de séquences temporelles par de l'apprentissage par renforcement profond." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT123.
Early classification (EC) of time series is a recent research topic in the field of sequential data analysis. It consists in assigning a label to data that are collected sequentially, with new data points arriving over time, and the label has to be predicted using as few data points as possible in the sequence. The EC problem is of paramount importance for supporting decision makers in many real-world applications, ranging from process control to fraud detection. It is particularly interesting for applications concerned with the costs induced by the acquisition of data points, or for applications which seek rapid label prediction in order to take early actions. This is for example the case in the field of health, where it is necessary to provide a medical diagnosis as soon as possible from the sequence of medical observations collected over time. Another example is predictive maintenance, with the objective of anticipating the breakdown of a machine from its sensor signals. In this doctoral work, we developed a new approach to this problem, based on the formulation of a sequential decision-making problem: the EC model has to decide between classifying an incomplete sequence or delaying the prediction to collect additional data points. Specifically, we described this problem as a Partially Observable Markov Decision Process, denoted EC-POMDP. The approach consists in training an EC agent with Deep Reinforcement Learning (DRL) in an environment characterized by the EC-POMDP. The main motivation for this approach was to offer an end-to-end model for EC which is able to simultaneously learn optimal patterns in the sequences for classification and optimal strategic decisions for the time of prediction. The method also allows setting the importance of time against classification accuracy in the definition of rewards, according to the application and its willingness to make this compromise. In order to solve the EC-POMDP and model the policy of the EC agent, we applied an existing DRL algorithm, the Double Deep Q-Network algorithm, whose general principle is to update the policy of the agent during training episodes, using a replay memory of past experiences. We showed that applying the original algorithm to the EC problem led to imbalanced memory issues which can weaken the training of the agent. Consequently, to cope with those issues and offer more robust training of the agent, we adapted the algorithm to the specificities of the EC-POMDP and introduced memory-management and episode-management strategies. In experiments, we showed that these contributions improved the performance of the agent over the original algorithm, and that we were able to train an EC agent which compromises between speed and accuracy on each sequence individually. We were also able to train EC agents on public datasets for which we have no expertise, showing that the method is applicable to various domains. Finally, we proposed strategies to interpret the decisions of the agent and to validate or reject them. In experiments, we showed how these solutions can help gain insight into the choices of action made by the agent.
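As a toy sketch of the wait-or-classify loop described above, assuming an action-value function has already been trained (for instance with Double DQN as in the thesis), the following Python fragment shows how an agent could either request one more data point or commit to a label. The penalty value, reward shape and function names are illustrative assumptions, not taken from the thesis.

import numpy as np

K = 3                          # hypothetical number of classes
WAIT = 0                       # action 0 = wait; actions 1..K = commit to a label
delay_penalty = 0.01           # assumed cost of requesting one more data point

def classify_early(sequence, true_label, q_values):
    """q_values(prefix) -> array of K+1 action values for the observed prefix."""
    for t in range(1, len(sequence) + 1):
        q = q_values(sequence[:t])
        action = int(np.argmax(q))
        if action != WAIT or t == len(sequence):
            # Forced to classify at the end of the sequence if still waiting.
            label = (action - 1) if action != WAIT else int(np.argmax(q[1:]))
            reward = (1.0 if label == true_label else -1.0) - delay_penalty * t
            return label, t, reward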
Paumard, Marie-Morgane. "Résolution automatique de puzzles par apprentissage profond." Thesis, CY Cergy Paris Université, 2020. http://www.theses.fr/2020CYUN1067.
The objective of this thesis is to develop semantic reassembly methods in the complicated framework of heritage collections, where some blocks are eroded or missing. The reassembly of archaeological remains is an important task for heritage sciences: it helps improve the understanding and conservation of ancient vestiges and artifacts. However, some sets of fragments cannot be reassembled with techniques using contour information or visual continuities. It is then necessary to extract semantic information from the fragments and to interpret them. These tasks can be performed automatically thanks to deep learning techniques coupled with a solver, i.e., a constrained decision-making algorithm. This thesis proposes two semantic reassembly methods for 2D fragments with erosion, as well as a new dataset and evaluation metrics. The first method, Deepzzle, uses a neural network followed by a solver. The neural network is composed of two Siamese convolutional networks trained to predict the relative position of two fragments: it is a 9-class classification. The solver uses Dijkstra's algorithm to maximize the joint probability. Deepzzle can address the case of missing and supernumerary fragments, is capable of processing about 15 fragments per puzzle, and performs 25% better than the state of the art. The second method, Alphazzle, is based on AlphaZero and single-player Monte Carlo Tree Search (MCTS). It is an iterative method that uses deep reinforcement learning: at each step, a fragment is placed on the current reassembly. Two neural networks guide MCTS: an action predictor, which uses the fragment and the current reassembly to propose a strategy, and an evaluator, which is trained to predict the quality of the future result from the current reassembly. Alphazzle takes into account the relationships between all fragments and scales to puzzles larger than those solved by Deepzzle. Moreover, Alphazzle is compatible with constraints imposed by a heritage setting: at the end of reassembly, MCTS does not access the reward, unlike AlphaZero. Indeed, the reward, which indicates whether a puzzle is well solved or not, can only be estimated by the algorithm, because only a conservator can be sure of the quality of a reassembly.
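To make the guided-placement idea more concrete, here is a deliberately simplified Python sketch in which the full Monte Carlo Tree Search is replaced by a one-step greedy choice driven by an action predictor and an evaluator. The function names and the product-of-scores heuristic are assumptions for illustration only.

def reassemble(fragments, positions, predictor, evaluator):
    """Greedy, policy- and value-guided placement of fragments.
    predictor((fragment, position), assembly) returns a prior score;
    evaluator(assembly) scores a partial reassembly."""
    assembly, remaining, free = [], list(fragments), set(positions)
    while remaining and free:
        candidates = [(f, p) for f in remaining for p in free]
        best = max(candidates,
                   key=lambda c: predictor(c, assembly) * evaluator(assembly + [c]))
        assembly.append(best)
        remaining.remove(best[0])
        free.discard(best[1])
    return assembly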
Léon, Aurélia. "Apprentissage séquentiel budgétisé pour la classification extrême et la découverte de hiérarchie en apprentissage par renforcement." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS226.
This thesis deals with the notion of budget to study problems of complexity (be it computational complexity, a complex task for an agent, or complexity due to a small amount of data). Indeed, the main goal of current techniques in machine learning is usually to obtain the best accuracy, without worrying about the cost of the task. The concept of budget makes it possible to take this parameter into account while maintaining good performance. We first focus on classification problems with a large number of classes: the complexity of those algorithms can be reduced by using decision trees (here learned through budgeted reinforcement learning techniques) or by associating each class with a (binary) code. We then deal with reinforcement learning problems and the discovery of a hierarchy that breaks down a (complex) task into simpler tasks to facilitate learning and generalization. Here, this discovery is achieved by reducing the cognitive effort of the agent (considered in this work as equivalent to the use of an additional observation). Finally, we address problems of understanding and generating instructions in natural language, where data are available only in small quantities: for this purpose, we test the simultaneous use of an agent that understands the instructions and an agent that generates them.
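As an assumption-level illustration of the class-coding idea mentioned in this abstract (not the thesis implementation), each of C classes can be assigned a short binary code so that predicting a label only requires about log2(C) binary decisions:

import math

def class_codes(n_classes):
    """Assign each class a binary code of ceil(log2(C)) bits."""
    n_bits = max(1, math.ceil(math.log2(n_classes)))
    return {c: tuple((c >> b) & 1 for b in range(n_bits)) for c in range(n_classes)}

def predict(bit_predictors, x, codes):
    """bit_predictors[b](x) -> 0 or 1; return the class whose code matches the bits."""
    bits = tuple(p(x) for p in bit_predictors)
    return next((c for c, code in codes.items() if code == bits), None)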
Brenon, Alexis. "Modèle profond pour le contrôle vocal adaptatif d'un habitat intelligent." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM057/document.
Smart homes, resulting from the merger of home automation, ubiquitous computing and artificial intelligence, support inhabitants in their activities of daily living to improve their quality of life. By allowing dependent and aged people to live at home longer, these homes provide a first answer to societal problems such as the dependency tied to the aging population. In a voice-controlled home, the home has to answer users' requests covering a range of automated actions (lights, blinds, multimedia control, etc.). To achieve this, the control system of the home needs to be aware of the context in which a request has been made, but also to know user habits and preferences. Thus, the system must be able to aggregate information from a heterogeneous home-automation sensor network and take the (variable) user behavior into account. The development of smart home control systems is hard due to the huge variability of home topologies and user habits. Furthermore, the whole set of contextual information needs to be represented in a common space in order to reason about it and make decisions. To address these problems, we propose to develop a system which continuously updates its model to adapt itself to the user and which uses raw sensor data through a graphical representation. This new method is particularly interesting because it does not require any prior inference step to extract the context. Thus, our system uses deep reinforcement learning: a convolutional neural network extracts contextual information, and reinforcement learning is used for decision-making. This memoir then presents two systems: a first one based only on reinforcement learning, showing the limits of this approach in a real environment with thousands of possible states. The introduction of deep learning allowed us to develop the second one, ARCADES, which gives good performance, proving that this approach is relevant and opening many ways to improve it.
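A minimal sketch, with invented dimensions and action count, of the kind of architecture this abstract describes: a small convolutional network mapping a graphical rendering of the home state to Q-values over home-automation commands. It is illustrative only and not the ARCADES code.

import torch
import torch.nn as nn

n_actions = 10                               # hypothetical number of commands (lights, blinds, ...)
q_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(32 * 4 * 4, n_actions),
)

state_image = torch.zeros(1, 3, 64, 64)      # placeholder rendering of the home state
action = q_net(state_image).argmax(dim=1)    # greedy choice of the command to execute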
Carrara, Nicolas. "Reinforcement learning for dialogue systems optimization with user adaptation." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I071/document.
The most powerful artificial intelligence systems are now based on learned statistical models. In order to build efficient models, these systems must collect a huge amount of data on their environment. Personal assistants, smart homes, voice servers and other dialogue applications are no exception to this statement. A specificity of those systems is that they are designed to interact with humans, and as a consequence, their training data has to be collected from interactions with these humans. As the number of interactions with a single person is often too small to train a proper model, the usual approach to maximise the amount of data consists in mixing data collected from different users into a single corpus. However, one limitation of this approach is that, by construction, the trained models are only efficient with an "average" human and do not include any sort of adaptation; this lack of adaptation makes the service unusable for some specific groups of people and leads to a restricted customer base and inclusiveness problems. This thesis proposes solutions to construct Dialogue Systems that are robust to this problem by combining Transfer Learning and Reinforcement Learning. It explores two main ideas. The first idea consists in incorporating adaptation in the very first dialogues with a new user. To that extent, we use the knowledge gathered with previous users. But how can such systems scale with a growing database of user interactions? The first proposed approach involves clustering of Dialogue Systems (tailored for their respective users) based on their behaviours. We demonstrated through handcrafted and real user-model experiments how this method improves the dialogue quality for new and unknown users. The second approach extends the Deep Q-learning algorithm with a continuous transfer process. The second idea states that before using a dedicated Dialogue System, the first interactions with a user should be handled carefully by a safe Dialogue System common to all users. The underlying approach is divided into two steps. The first step consists in learning a safe strategy through Reinforcement Learning. To that end, we introduced a budgeted Reinforcement Learning framework for continuous state spaces and the corresponding extensions of classic Reinforcement Learning algorithms. In particular, the safe version of the Fitted-Q algorithm has been validated, in terms of safety and efficiency, on a dialogue system task and an autonomous driving problem. The second step consists in using those safe strategies when facing new users; this method is an extension of the classic ε-greedy algorithm.
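As a hedged sketch of the "safe first interactions" idea (names, the risk model and the threshold are assumptions, not the thesis code), a budget-aware variant of epsilon-greedy action selection could restrict exploration to actions whose estimated risk stays under a given budget:

import random

def safe_epsilon_greedy(q_values, risk_estimates, budget, epsilon=0.1):
    """q_values, risk_estimates: dict action -> float; budget: maximum acceptable risk."""
    admissible = [a for a, risk in risk_estimates.items() if risk <= budget]
    if not admissible:                       # fall back to the least risky action
        return min(risk_estimates, key=risk_estimates.get)
    if random.random() < epsilon:            # explore, but only among admissible actions
        return random.choice(admissible)
    return max(admissible, key=lambda a: q_values[a])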
Aklil, Nassim. "Apprentissage actif sous contrainte de budget en robotique et en neurosciences computationnelles. Localisation robotique et modélisation comportementale en environnement non stationnaire." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066225/document.
Decision-making is a highly researched field in science, be it in neuroscience to understand the processes underlying animal decision-making, or in robotics to model efficient and rapid decision-making processes in real environments. In neuroscience, this problem is resolved online with sequential decision-making models based on reinforcement learning. In robotics, the primary objective is efficiency, in order to be deployed in real environments. However, in robotics, what can be called the budget, namely the limitations inherent to the hardware, such as computation time, the limited actions available to the robot, or the lifetime of the robot's battery, is often not taken into account at present. In this thesis, we propose to introduce the notion of budget as an explicit constraint in robotic learning processes applied to a localization task, by implementing a model based on work developed in statistical learning that processes data under explicit constraints, limiting the data input or imposing an explicit time constraint. In order to discuss an online version of this type of budgeted learning algorithm, we also discuss possible inspirations that could be drawn from computational neuroscience. In this context, the alternation between gathering information for localization and deciding to move may be indirectly linked to the exploration-exploitation trade-off. We present our contribution to the modeling of this trade-off in animals in a non-stationary task involving different levels of uncertainty, and we make the link with multi-armed bandit methods.
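To illustrate the exploration-exploitation trade-off under non-stationary rewards evoked above, here is a standard recency-weighted epsilon-greedy bandit in Python; the parameter values are assumptions and the code is not drawn from the thesis.

import random

def run_bandit(pull, n_arms, steps=1000, alpha=0.1, epsilon=0.1):
    """pull(arm) -> observed reward; returns the final value estimates per arm."""
    values = [0.0] * n_arms
    for _ in range(steps):
        if random.random() < epsilon:                      # explore
            arm = random.randrange(n_arms)
        else:                                              # exploit the current estimates
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = pull(arm)
        values[arm] += alpha * (reward - values[arm])      # recency-weighted update forgets old data
    return values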
De La Bourdonnaye, François. "Learning sensori-motor mappings using little knowledge : application to manipulation robotics." Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC037/document.
The thesis is focused on learning a complex robotic manipulation task using little knowledge. More precisely, the task in question consists in reaching an object with a serial arm, and the objective is to learn it without camera calibration parameters, forward kinematics, handcrafted features, or expert demonstrations. Deep reinforcement learning algorithms are well suited to this objective. Indeed, reinforcement learning makes it possible to learn sensori-motor mappings while dispensing with a dynamics model. Besides, deep learning makes it possible to dispense with handcrafted features for the state space representation. However, it is difficult to specify the objectives of the learned task without requiring human supervision. Some solutions rely on expert demonstrations or shaping rewards to guide robots towards their objective. The latter are generally computed using forward kinematics and handcrafted visual modules. Another class of solutions consists in decomposing the complex task. Learning from easy missions can be used, but this requires knowledge of a goal state. Decomposing the whole complex task into simpler sub-tasks can also be used (hierarchical learning) but does not necessarily imply a lack of human supervision. Alternative approaches which use several agents in parallel to increase the probability of success can be used but are costly. In our approach, we decompose the whole reaching task into three simpler sub-tasks while taking inspiration from human behavior. Indeed, humans first look at an object before reaching for it. The first learned task is an object fixation task aimed at localizing the object in 3D space. This is learned using deep reinforcement learning and a weakly supervised reward function. The second task consists in jointly learning end-effector binocular fixations and a hand-eye coordination function. This is also learned using a similar set-up and is aimed at localizing the end-effector in 3D space. The third task uses the two previously learned skills to learn to reach an object and has the same requirements as the two prior tasks: it hardly requires supervision. In addition, without using additional priors, an object reachability predictor is learned in parallel. The main contribution of this thesis is the learning of a complex robotic task with weak supervision.
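As a hedged sketch of what a weakly supervised fixation reward of the kind described above could look like (the localisation inputs, image size and tolerance are assumptions, not the thesis code), a positive reward is given only when the object lies close to the image centre in both cameras:

def fixation_reward(left_pos, right_pos, image_size=(64, 64), tol=3.0):
    """left_pos / right_pos: (x, y) pixel coordinates of the object in each camera image."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    def centred(p):
        return ((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 <= tol
    return 1.0 if centred(left_pos) and centred(right_pos) else 0.0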
Pageaud, Simon. "SmartGov : architecture générique pour la co-construction de politiques urbaines basée sur l'apprentissage par renforcement multi-agent." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1128.
In this thesis, we propose the SmartGov model, coupling multi-agent simulation and multi-agent deep reinforcement learning, to help co-construct urban policies and integrate all stakeholders in the decision process. Smart Cities provide sensor data from urban areas to increase the realism of the simulation in SmartGov. Our first contribution is a generic architecture for multi-agent simulation of the city to study the emergence of global behavior with realistic agents reacting to political decisions. With multi-level modeling and a coupling of different dynamics, our tool learns environment specificities and suggests relevant policies. Our second contribution improves the autonomy and adaptation of the decision function with multi-agent, multi-level reinforcement learning. A set of clustered agents is distributed over the studied area to learn local specificities without any prior knowledge of the environment. Trust score assignment and individual rewards help reduce the impact of non-stationarity on experience replay in deep reinforcement learning. These contributions bring forth a complete system for co-constructing urban policies in the Smart City. We compare our model with different approaches from the literature on a parking fee policy to show the benefits and limits of our contributions.
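One simple way to realise the idea of trust scores limiting the impact of non-stationary experiences, sketched here purely as an assumption-level illustration (the buffer layout and field names are invented), is to weight replay sampling by each transition's trust score:

import random

def sample_batch(replay_buffer, batch_size):
    """replay_buffer: list of dicts with keys 'transition' and 'trust' (a float in [0, 1])."""
    weights = [item["trust"] for item in replay_buffer]
    picked = random.choices(replay_buffer, weights=weights, k=batch_size)
    return [item["transition"] for item in picked]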
Debard, Quentin. "Automatic learning of next generation human-computer interactions." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI036.
Artificial Intelligence (AI) and Human-Computer Interaction (HCI) are two research fields with relatively little work in common. HCI specialists usually design the way we interact with devices directly from observations and measures of human feedback, manually optimizing the user interface to better fit users' expectations. This process is hard to optimize: ergonomics, intuitiveness and ease of use are key features of a User Interface (UI) that are too complex to be simply modelled from interaction data. This drastically restricts the possible uses of Machine Learning (ML) in this design process. Currently, ML in HCI is mostly applied to gesture recognition and automatic display, e.g. advertisement or item suggestion. It is also used to fine-tune an existing UI to better optimize it, but as of now it does not take part in designing new ways to interact with computers. Our main focus in this thesis is to use ML to develop new design strategies for overall better UIs. We want to use ML to build intelligent (that is, precise, intuitive and adaptive) user interfaces with minimal handcrafting. We propose a novel approach to UI design: instead of letting the user adapt to the interface, we want the interface and the user to adapt mutually to each other. The goal is to reduce human bias in protocol definition while building co-adaptive interfaces able to further fit individual preferences. In order to do so, we will put to use the different mechanisms available in ML to automatically learn behaviors, build representations and make decisions. We will experiment on touch interfaces, as these interfaces are widely used and provide easily interpretable problems. The first part of our work will focus on processing touch data and on using supervised learning to build accurate classifiers of touch gestures. The second part will detail how Reinforcement Learning (RL) can be used to model and learn interaction protocols given user actions. Lastly, we will combine these RL models with unsupervised learning to build a setup allowing for the design of new interaction protocols without the need for real user data.
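As a small, assumption-level illustration of the supervised gesture-classification step mentioned above (input length, class count and architecture are invented, not taken from the thesis), a touch gesture given as a fixed-length (x, y) trajectory can be classified with a tiny feed-forward network:

import torch
import torch.nn as nn

n_points, n_gestures = 32, 8                 # hypothetical trajectory length and gesture count
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(n_points * 2, 128), nn.ReLU(),
    nn.Linear(128, n_gestures),
)

trajectory = torch.rand(1, n_points, 2)      # one normalized touch trajectory
predicted_gesture = classifier(trajectory).argmax(dim=1)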
Books on the topic "Apprentissage profond par renforcement"
Ontario. Esquisse de cours 12e année: Danse atc4m cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Histoire de l'Occident et du monde chy4c cours précollégial. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: English eae4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Comptabilité de la petite entreprise ban4e. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Mathématiques de la vie courante mel4e cours préemploi. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: English eae4c cours précollégial. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: The writer's craft eac4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Français fra4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Géographie mondiale: le milieu humain cgu4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: L'Ontario français chf4o. Vanier, Ont: CFORP, 2002.
Find full textBook chapters on the topic "Apprentissage profond par renforcement"
Tazdaït, Tarik, and Rabia Nessah. "5. Vote et apprentissage par renforcement." In Le paradoxe du vote, 157–77. Éditions de l’École des hautes études en sciences sociales, 2013. http://dx.doi.org/10.4000/books.editionsehess.1931.
Full textHADJADJ-AOUL, Yassine, and Soraya AIT-CHELLOUCHE. "Utilisation de l’apprentissage par renforcement pour la gestion des accès massifs dans les réseaux NB-IoT." In La gestion et le contrôle intelligents des performances et de la sécurité dans l’IoT, 27–55. ISTE Group, 2022. http://dx.doi.org/10.51926/iste.9053.ch2.
Full textBENDELLA, Mohammed Salih, and Badr BENMAMMAR. "Impact de la radio cognitive sur le green networking : approche par apprentissage par renforcement." In Gestion du niveau de service dans les environnements émergents. ISTE Group, 2020. http://dx.doi.org/10.51926/iste.9002.ch8.
Full textConference papers on the topic "Apprentissage profond par renforcement"
Fourcade, A. "Apprentissage profond : un troisième oeil pour les praticiens." In 66ème Congrès de la SFCO. Les Ulis, France: EDP Sciences, 2020. http://dx.doi.org/10.1051/sfco/20206601014.
Full textReports on the topic "Apprentissage profond par renforcement"
Melloni, Gian. Le leadership des autorités locales en matière d'assainissement et d'hygiène : expériences et apprentissage de l'Afrique de l'Ouest. Institute of Development Studies (IDS), January 2022. http://dx.doi.org/10.19088/slh.2022.002.