A collection of scholarly literature on the topic "Reinforcement Learning Generalization"
Format your source according to APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Reinforcement Learning Generalization".
Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, where these are available in the metadata.
Journal articles on the topic "Reinforcement Learning Generalization"
Kwon, Sunggyu, and Kwang Y. Lee. "GENERALIZATION OF REINFORCEMENT LEARNING WITH CMAC." IFAC Proceedings Volumes 38, no. 1 (2005): 360–65. http://dx.doi.org/10.3182/20050703-6-cz-1902.01138.
Wu, Keyu, Min Wu, Zhenghua Chen, Yuecong Xu, and Xiaoli Li. "Generalizing Reinforcement Learning through Fusing Self-Supervised Learning into Intrinsic Motivation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8683–90. http://dx.doi.org/10.1609/aaai.v36i8.20847.
Wimmer, G. Elliott, Nathaniel D. Daw, and Daphna Shohamy. "Generalization of value in reinforcement learning by humans." European Journal of Neuroscience 35, no. 7 (April 2012): 1092–104. http://dx.doi.org/10.1111/j.1460-9568.2012.08017.x.
Hashemzadeh, Maryam, Reshad Hosseini, and Majid Nili Ahmadabadi. "Clustering subspace generalization to obtain faster reinforcement learning." Evolving Systems 11, no. 1 (July 4, 2019): 89–103. http://dx.doi.org/10.1007/s12530-019-09290-9.
Gershman, Samuel J., and Yael Niv. "Novelty and Inductive Generalization in Human Reinforcement Learning." Topics in Cognitive Science 7, no. 3 (March 23, 2015): 391–415. http://dx.doi.org/10.1111/tops.12138.
Matiisen, Tambet, Aqeel Labash, Daniel Majoral, Jaan Aru, and Raul Vicente. "Do Deep Reinforcement Learning Agents Model Intentions?" Stats 6, no. 1 (December 28, 2022): 50–66. http://dx.doi.org/10.3390/stats6010004.
Fang, Qiang, Wenzhuo Zhang, and Xitong Wang. "Visual Navigation Using Inverse Reinforcement Learning and an Extreme Learning Machine." Electronics 10, no. 16 (August 18, 2021): 1997. http://dx.doi.org/10.3390/electronics10161997.
Hatcho, Yasuyo, Kiyohiko Hattori, and Keiki Takadama. "Time Horizon Generalization in Reinforcement Learning: Generalizing Multiple Q-Tables in Q-Learning Agents." Journal of Advanced Computational Intelligence and Intelligent Informatics 13, no. 6 (November 20, 2009): 667–74. http://dx.doi.org/10.20965/jaciii.2009.p0667.
Kaelbling, L. P., M. L. Littman, and A. W. Moore. "Reinforcement Learning: A Survey." Journal of Artificial Intelligence Research 4 (May 1, 1996): 237–85. http://dx.doi.org/10.1613/jair.301.
Kim, Minbeom, Kyeongha Rho, Yong-duk Kim, and Kyomin Jung. "Action-driven contrastive representation for reinforcement learning." PLOS ONE 17, no. 3 (March 18, 2022): e0265456. http://dx.doi.org/10.1371/journal.pone.0265456.
Повний текст джерелаДисертації з теми "Reinforcement Learning Generalization"
Stanley, Kelly N. "The influence of training structure and instructions on generalized stimulus equivalence classes and typicality effects /." Electronic version (PDF), 2004. http://dl.uncw.edu/etd/2004/stanleyk/kellystanley.html.
Wilson, Jeanette E. "Training structure, naming and typicality effects in equivalence class formation /." Electronic version (PDF), 2006. http://dl.uncw.edu/etd/2006/wilsonj/jeanettewilson.pdf.
Böhmer, Wendelin [Verfasser], Klaus [Akademischer Betreuer] Obermayer, Klaus [Gutachter] Obermayer, Marc [Gutachter] Toussaint, and Manfred [Gutachter] Opper. "Representation and generalization in autonomous reinforcement learning / Wendelin Böhmer ; Gutachter: Klaus Obermayer, Marc Toussaint, Manfred Opper ; Betreuer: Klaus Obermayer." Berlin : Technische Universität Berlin, 2017. http://d-nb.info/1156183960/34.
Sansing, Elizabeth M. "Teaching Observational Learning to Children with Autism: An In-vivo and Video-Model Assessment." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1062891/.
Leffler, Bethany R. "Perception-based generalization in model-based reinforcement learning." 2009. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000051041.
Mehta, Bhairav. "On learning and generalization in unstructured taskspaces." Thesis, 2020. http://hdl.handle.net/1866/24327.
Robotic learning holds incredible promise for embodied artificial intelligence, with reinforcement learning a strong candidate to be the "software" of the robots of the future: learning from experience, adapting on the fly, and generalizing to unseen scenarios. However, our current reality requires vast amounts of data to train even the simplest robotic reinforcement learning policies, leading to a surge of interest in training entirely within efficient physics simulators. As the goal is embodied intelligence, policies trained in simulation are transferred onto real hardware for evaluation; yet, because no simulation is a perfect model of the real world, transferred policies run into the sim2real transfer gap: the errors accrued when shifting policies from simulators to the real world due to unmodeled effects in inaccurate, approximate physics models.

Domain randomization, the idea of randomizing all physical parameters in a simulator to force a policy to be robust to distributional shifts, has proven useful in transferring reinforcement learning policies onto real robots. In practice, however, the method involves a difficult trial-and-error process and shows high variance in both convergence and performance. We introduce Active Domain Randomization, an algorithm that applies curriculum learning in unstructured task spaces (task spaces where a notion of difficulty, i.e. intuitively easy or hard tasks, is not readily available). Active Domain Randomization shows strong zero-shot transfer performance on real robots. The thesis also introduces other variants of the algorithm, including one that allows the incorporation of a safety prior and one applicable to the field of Meta-Reinforcement Learning. We also analyze curriculum learning from an optimization perspective and attempt to justify the benefits of the algorithm by studying gradient interference.
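The abstract above contrasts Active Domain Randomization with plain uniform domain randomization, in which every physical parameter is re-sampled from a fixed range at the start of each training episode. A minimal sketch of that baseline idea, with hypothetical parameter names and ranges that are not taken from the thesis:

```python
import random

# Illustrative simulator parameters and ranges; the names and bounds
# are assumptions for this sketch, not values from the thesis.
PARAM_RANGES = {
    "mass": (0.5, 2.0),       # kg
    "friction": (0.2, 1.0),
    "motor_gain": (0.8, 1.2),
}

def sample_randomized_params(ranges):
    """Uniform domain randomization: draw each simulator parameter
    independently from its range at the start of every episode."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def train_episode(policy, simulator, ranges):
    """Run one training episode under freshly randomized dynamics.
    Training across the whole distribution of physics is what pushes
    the policy toward robustness for sim2real transfer."""
    params = sample_randomized_params(ranges)
    simulator.reset(**params)
    # ... roll out the policy in the randomized simulator and update it ...
    return params
```

Active Domain Randomization replaces the uniform sampling step with a learned curriculum that preferentially proposes the most informative parameter settings.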
Books on the topic "Reinforcement Learning Generalization"
Higa, Jennifer J. The effects of stimulus class on dimensional contrast. 1987.
Book chapters on the topic "Reinforcement Learning Generalization"
Fonteneau, Raphael, Susan A. Murphy, Louis Wehenkel, and Damien Ernst. "Towards Min Max Generalization in Reinforcement Learning." In Communications in Computer and Information Science, 61–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19890-8_5.
Xudong, Gong, Jia Hongda, Zhou Xing, Feng Dawei, Ding Bo, and Xu Jie. "Improving Policy Generalization for Teacher-Student Reinforcement Learning." In Knowledge Science, Engineering and Management, 39–47. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55393-7_4.
Ponsen, Marc, Matthew E. Taylor, and Karl Tuyls. "Abstraction and Generalization in Reinforcement Learning: A Summary and Framework." In Adaptive and Learning Agents, 1–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11814-2_1.
Zholus, Artem, and Aleksandr I. Panov. "Case-Based Task Generalization in Model-Based Reinforcement Learning." In Artificial General Intelligence, 344–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93758-4_35.
Qian, Yiming, Fangzhou Xiong, and Zhiyong Liu. "Intra-domain Knowledge Generalization in Cross-Domain Lifelong Reinforcement Learning." In Communications in Computer and Information Science, 386–94. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63823-8_45.
Wan, Kejia, Xinhai Xu, and Yuan Li. "Improving Generalization of Reinforcement Learning for Multi-agent Combating Games." In Neural Information Processing, 64–74. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92270-2_6.
Naruse, Keitarou, and Yukinori Kakazu. "Rule Generation and Generalization by Inductive Decision Tree and Reinforcement Learning." In Distributed Autonomous Robotic Systems, 91–98. Tokyo: Springer Japan, 1994. http://dx.doi.org/10.1007/978-4-431-68275-2_9.
Shibata, Takeshi, Ryo Yoshinaka, and Takashi Chikayama. "Probabilistic Generalization of Simple Grammars and Its Application to Reinforcement Learning." In Lecture Notes in Computer Science, 348–62. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11894841_28.
Zou, Qiming, and Einoshin Suzuki. "Contrastive Goal Grouping for Policy Generalization in Goal-Conditioned Reinforcement Learning." In Neural Information Processing, 240–53. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92185-9_20.
Li, Jianghao, Weihong Bi, and Mingda Li. "Hybrid Reinforcement Learning and Uneven Generalization of Learning Space Method for Robot Obstacle Avoidance." In Lecture Notes in Electrical Engineering, 175–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38460-8_20.
Повний текст джерелаТези доповідей конференцій з теми "Reinforcement Learning Generalization"
Hansen, Nicklas, and Xiaolong Wang. "Generalization in Reinforcement Learning by Soft Data Augmentation." In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. http://dx.doi.org/10.1109/icra48506.2021.9561103.
"A CAUTIOUS APPROACH TO GENERALIZATION IN REINFORCEMENT LEARNING." In 2nd International Conference on Agents and Artificial Intelligence. SciTePress - Science and Technology Publications, 2010. http://dx.doi.org/10.5220/0002726900640073.
Liu, Yong, Chunwei Wu, Xidong Xi, Yan Li, Guitao Cao, Wenming Cao, and Hong Wang. "Adversarial Discriminative Feature Separation for Generalization in Reinforcement Learning." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892539.
Xu, Yunqiu, Meng Fang, Ling Chen, Yali Du, and Chengqi Zhang. "Generalization in Text-based Games via Hierarchical Reinforcement Learning." In Findings of the Association for Computational Linguistics: EMNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-emnlp.116.
Ouyang, Wenbin, Yisen Wang, Shaochen Han, Zhejian Jin, and Paul Weng. "Improving Generalization of Deep Reinforcement Learning-based TSP Solvers." In 2021 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2021. http://dx.doi.org/10.1109/ssci50451.2021.9659970.
Kim, Kyungsoo, Jeongsoo Ha, and Yusung Kim. "Self-Predictive Dynamics for Generalization of Vision-based Reinforcement Learning." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/437.
Wang, Tianying, Hao Zhang, Wei Qi Toh, Hongyuan Zhu, Cheston Tan, Yan Wu, Yong Liu, and Wei Jing. "Efficient Robotic Task Generalization Using Deep Model Fusion Reinforcement Learning." In 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2019. http://dx.doi.org/10.1109/robio49542.2019.8961391.
Oonishi, Hiroya, and Hitoshi Iima. "Improving generalization ability in a puzzle game using reinforcement learning." In 2017 IEEE Conference on Computational Intelligence and Games (CIG). IEEE, 2017. http://dx.doi.org/10.1109/cig.2017.8080441.
Kanagawa, Yuji, and Tomoyuki Kaneko. "Rogue-Gym: A New Challenge for Generalization in Reinforcement Learning." In 2019 IEEE Conference on Games (CoG). IEEE, 2019. http://dx.doi.org/10.1109/cig.2019.8848075.
Fang, Fen, Wenyu Liang, Yan Wu, Qianli Xu, and Joo-Hwee Lim. "Improving Generalization of Reinforcement Learning Using a Bilinear Policy Network." In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897349.
Повний текст джерелаЗвіти організацій з теми "Reinforcement Learning Generalization"
A Decision-Making Method for Connected Autonomous Driving Based on Reinforcement Learning. SAE International, December 2020. http://dx.doi.org/10.4271/2020-01-5154.