Ready-made bibliography on the topic "Goal-conditioned reinforcement learning"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Goal-conditioned reinforcement learning".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the work's metadata.
Journal articles on the topic "Goal-conditioned reinforcement learning"
Yin, Xiangyu, Sihao Wu, Jiaxu Liu, Meng Fang, Xingyu Zhao, Xiaowei Huang, and Wenjie Ruan. "Representation-Based Robustness in Goal-Conditioned Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21761–69. http://dx.doi.org/10.1609/aaai.v38i19.30176.
Levine, Alexander, and Soheil Feizi. "Goal-Conditioned Q-learning as Knowledge Distillation". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8500–8509. http://dx.doi.org/10.1609/aaai.v37i7.26024.
Yamada, Takaya, and Koich Ogawara. "Goal-Conditioned Reinforcement Learning with Latent Representations using Contrastive Learning". Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2021 (2021): 1P1—I15. http://dx.doi.org/10.1299/jsmermd.2021.1p1-i15.
Qian, Zhifeng, Mingyu You, Hongjun Zhou, and Bin He. "Weakly Supervised Disentangled Representation for Goal-Conditioned Reinforcement Learning". IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 2202–9. http://dx.doi.org/10.1109/lra.2022.3141148.
Taniguchi, Asuto, Fumihiro Sasaki, and Ryota Yamashina. "Goal-Conditioned Reinforcement Learning with Extended Floyd-Warshall method". Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2020 (2020): 2A1—L01. http://dx.doi.org/10.1299/jsmermd.2020.2a1-l01.
Elguea-Aguinaco, Íñigo, Antonio Serrano-Muñoz, Dimitrios Chrysostomou, Ibai Inziarte-Hidalgo, Simon Bøgh, and Nestor Arana-Arexolaleiba. "Goal-Conditioned Reinforcement Learning within a Human-Robot Disassembly Environment". Applied Sciences 12, no. 22 (November 15, 2022): 11610. http://dx.doi.org/10.3390/app122211610.
Liu, Bo, Yihao Feng, Qiang Liu, and Peter Stone. "Metric Residual Network for Sample Efficient Goal-Conditioned Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8799–806. http://dx.doi.org/10.1609/aaai.v37i7.26058.
Ding, Hongyu, Yuanze Tang, Qing Wu, Bo Wang, Chunlin Chen, and Zhi Wang. "Magnetic Field-Based Reward Shaping for Goal-Conditioned Reinforcement Learning". IEEE/CAA Journal of Automatica Sinica 10, no. 12 (December 2023): 2233–47. http://dx.doi.org/10.1109/jas.2023.123477.
Xu, Jiawei, Shuxing Li, Rui Yang, Chun Yuan, and Lei Han. "Efficient Multi-Goal Reinforcement Learning via Value Consistency Prioritization". Journal of Artificial Intelligence Research 77 (June 5, 2023): 355–76. http://dx.doi.org/10.1613/jair.1.14398.
Faccio, Francesco, Vincent Herrmann, Aditya Ramesh, Louis Kirsch, and Jürgen Schmidhuber. "Goal-Conditioned Generators of Deep Policies". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7503–11. http://dx.doi.org/10.1609/aaai.v37i6.25912.
Doctoral dissertations on the topic "Goal-conditioned reinforcement learning"
Chenu, Alexandre. "Leveraging sequentiality in Robot Learning : Application of the Divide & Conquer paradigm to Neuro-Evolution and Deep Reinforcement Learning". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS342.
“To succeed, planning alone is insufficient. One must improvise as well.” This quote from Isaac Asimov, founding father of robotics and author of the Three Laws of Robotics, emphasizes the importance of being able to adapt and think on one’s feet to achieve success. Although robots can nowadays resolve highly complex tasks, they still need to gain those crucial adaptability skills before they can be deployed on a larger scale. Robot Learning uses learning algorithms to tackle this lack of adaptability and to enable robots to solve complex tasks autonomously. Two types of learning algorithms are particularly suitable for robots to learn controllers autonomously: Deep Reinforcement Learning and Neuro-Evolution. However, both classes of algorithms often cannot solve Hard Exploration Problems, that is, problems with a long horizon and a sparse reward signal, unless they are guided in their learning process. One can consider different approaches to tackle those problems. One option is to search for a diversity of behaviors rather than a specific one. The idea is that among this diversity, some behaviors will be able to solve the task. We call these algorithms Diversity Search algorithms. A second option consists in guiding the learning process using demonstrations provided by an expert. This is called Learning from Demonstration. However, searching for diverse behaviors or learning from demonstration can be inefficient in some contexts. Indeed, finding diverse behaviors can be tedious if the environment is complex. On the other hand, learning from demonstration can be very difficult if only one demonstration is available. This thesis attempts to improve the effectiveness of Diversity Search and Learning from Demonstration when applied to Hard Exploration Problems. To do so, we assume that complex robotics behaviors can be decomposed into reaching simpler sub-goals.
Based on this sequential bias, we try to improve the sample efficiency of Diversity Search and Learning from Demonstration algorithms by adopting Divide & Conquer strategies, which are well-known for their efficiency when the problem is composable. Throughout the thesis, we propose two main strategies. First, after identifying some limitations of Diversity Search algorithms based on Neuro-Evolution, we propose Novelty Search Skill Chaining. This algorithm combines Diversity Search with Skill-Chaining to efficiently navigate maze environments that are difficult to explore for state-of-the-art Diversity Search. In a second set of contributions, we propose the Divide & Conquer Imitation Learning algorithms. The key intuition behind those methods is to decompose the complex task of learning from a single demonstration into several simpler goal-reaching sub-tasks. DCIL-II, the most advanced variant, can learn walking behaviors for under-actuated humanoid robots with unprecedented efficiency. Beyond underlining the effectiveness of the Divide & Conquer paradigm in Robot Learning, this work also highlights the difficulties that can arise when composing behaviors, even in elementary environments. One will inevitably have to address these difficulties before applying these algorithms directly to real robots. It may be necessary for the success of the next generations of robots, as outlined by Asimov.
Book chapters on the topic "Goal-conditioned reinforcement learning"
Steccanella, Lorenzo, and Anders Jonsson. "State Representation Learning for Goal-Conditioned Reinforcement Learning". In Machine Learning and Knowledge Discovery in Databases, 84–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_6.
Zou, Qiming, and Einoshin Suzuki. "Contrastive Goal Grouping for Policy Generalization in Goal-Conditioned Reinforcement Learning". In Neural Information Processing, 240–53. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92185-9_20.
Conference abstracts on the topic "Goal-conditioned reinforcement learning"
Liu, Minghuan, Menghui Zhu, and Weinan Zhang. "Goal-Conditioned Reinforcement Learning: Problems and Solutions". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/770.
Bortkiewicz, Michał, Jakub Łyskawa, Paweł Wawrzyński, Mateusz Ostaszewski, Artur Grudkowski, Bartłomiej Sobieski, and Tomasz Trzciński. "Subgoal Reachability in Goal Conditioned Hierarchical Reinforcement Learning". In 16th International Conference on Agents and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012326200003636.
Yu, Zhe, Kailai Sun, Chenghao Li, Dianyu Zhong, Yiqin Yang, and Qianchuan Zhao. "A Goal-Conditioned Reinforcement Learning Algorithm with Environment Modeling". In 2023 42nd Chinese Control Conference (CCC). IEEE, 2023. http://dx.doi.org/10.23919/ccc58697.2023.10240963.
Zou, Qiming, and Einoshin Suzuki. "Sample-Efficient Goal-Conditioned Reinforcement Learning via Predictive Information Bottleneck for Goal Representation Learning". In 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023. http://dx.doi.org/10.1109/icra48891.2023.10161213.
Deng, Yuhong, Chongkun Xia, Xueqian Wang, and Lipeng Chen. "Deep Reinforcement Learning Based on Local GNN for Goal-Conditioned Deformable Object Rearranging". In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022. http://dx.doi.org/10.1109/iros47612.2022.9981669.
Bagaria, Akhil, and Tom Schaul. "Scaling Goal-based Exploration via Pruning Proto-goals". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/384.
Simmons-Edler, Riley, Ben Eisner, Daniel Yang, Anthony Bisulco, Eric Mitchell, Sebastian Seung, and Daniel Lee. "Reward Prediction Error as an Exploration Objective in Deep RL". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/390.