Ready-made bibliography on the topic "Improper reinforcement learning"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Improper reinforcement learning".
An "Add to bibliography" button is available next to every work listed. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a ".pdf" file and read its abstract online, whenever these are available in the work's metadata.
Journal articles on the topic "Improper reinforcement learning"
Dass, Shuvalaxmi, and Akbar Siami Namin. "Reinforcement Learning for Generating Secure Configurations." Electronics 10, no. 19 (2021): 2392. http://dx.doi.org/10.3390/electronics10192392.
Zhai, Peng, Jie Luo, Zhiyan Dong, Lihua Zhang, Shunli Wang, and Dingkang Yang. "Robust Adversarial Reinforcement Learning with Dissipation Inequation Constraint." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (2022): 5431–39. http://dx.doi.org/10.1609/aaai.v36i5.20481.
Chen, Ya-Ling, Yan-Rou Cai, and Ming-Yang Cheng. "Vision-Based Robotic Object Grasping—A Deep Reinforcement Learning Approach." Machines 11, no. 2 (2023): 275. http://dx.doi.org/10.3390/machines11020275.
Hurtado-Gómez, Julián, Juan David Romo, Ricardo Salazar-Cabrera, Álvaro Pachón de la Cruz, and Juan Manuel Madrid Molina. "Traffic Signal Control System Based on Intelligent Transportation System and Reinforcement Learning." Electronics 10, no. 19 (2021): 2363. http://dx.doi.org/10.3390/electronics10192363.
Pan, Ziwei. "Design of Interactive Cultural Brand Marketing System based on Cloud Service Platform." 網際網路技術學刊 23, no. 2 (2022): 321–34. http://dx.doi.org/10.53106/160792642022032302012.
Kim, Byeongjun, Gunam Kwon, Chaneun Park, and Nam Kyu Kwon. "The Task Decomposition and Dedicated Reward-System-Based Reinforcement Learning Algorithm for Pick-and-Place." Biomimetics 8, no. 2 (2023): 240. http://dx.doi.org/10.3390/biomimetics8020240.
Ritonga, Mahyudin, and Fitria Sartika. "Muyûl al-Talâmidh fî Tadrîs al-Qirâ’ah." Jurnal Alfazuna : Jurnal Pembelajaran Bahasa Arab dan Kebahasaaraban 6, no. 1 (2021): 36–52. http://dx.doi.org/10.15642/alfazuna.v6i1.1715.
Likas, Aristidis. "A Reinforcement Learning Approach to Online Clustering." Neural Computation 11, no. 8 (1999): 1915–32. http://dx.doi.org/10.1162/089976699300016025.
Shi, Ying-Ming, and Zhiyuan Zhang. "Research on Path Planning Strategy of Rescue Robot Based on Reinforcement Learning." 電腦學刊 33, no. 3 (2022): 187–94. http://dx.doi.org/10.53106/199115992022063303015.
Santos, John Paul E., Joseph A. Villarama, Joseph P. Adsuara, Jordan F. Gundran, Aileen G. De Guzman, and Evelyn M. Ben. "Students’ Time Management, Academic Procrastination, and Performance during Online Science and Mathematics Classes." International Journal of Learning, Teaching and Educational Research 21, no. 12 (2022): 142–61. http://dx.doi.org/10.26803/ijlter.21.12.8.
Doctoral dissertations on the topic "Improper reinforcement learning"
BRUCHON, NIKY. "Feasibility Investigation on Several Reinforcement Learning Techniques to Improve the Performance of the FERMI Free-Electron Laser." Doctoral thesis, Università degli Studi di Trieste, 2021. http://hdl.handle.net/11368/2982117.
Kreutmayr, Fabian, and Markus Imlauer. "Application of machine learning to improve to performance of a pressure-controlled system." Technische Universität Dresden, 2020. https://tud.qucosa.de/id/qucosa%3A71076.
Zaki, Mohammadi. "Algorithms for Online Learning in Structured Environments." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6080.
Chi, Lu-cheng, and 紀律呈. "An Improved Deep Reinforcement Learning with Sparse Rewards." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/eq94pr.
Hsin-Jung, Huang, and 黃信榮. "Applying Reinforcement Learning to Improve NPC game Character Intelligence." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/38802886766630465543.
Chen, Chia-Hao, and 陳家豪. "Improve Top ASR Hypothesis with Re-correction by Reinforcement Learning." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/zde779.
Hsu, Yung-Chi, and 徐永吉. "Improved Safe Reinforcement Learning Based Self Adaptive Evolutionary Algorithms for Neuro-Fuzzy Controller Design." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/43659775487135397105.
Lin, Ching-Pin, and 林敬斌. "Using Reinforcement Learning to Improve a Simple Intra-day Trading System of Taiwan Stock Index Future." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/34369847383488676186.
Pełny tekst źródłaKsiążki na temat "Improper reinforcement learning"
Urtāns, Ēvalds. Function shaping in deep learning. RTU Press, 2021. http://dx.doi.org/10.7250/9789934226854.
Rohsenow, Damaris J., and Megan M. Pinkston-Camp. Cognitive-Behavioral Approaches. Edited by Kenneth J. Sher. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199381708.013.010.
Carmo, Mafalda. Education Applications & Developments VI. inScience Press, 2021. http://dx.doi.org/10.36315/2021eadvi.
Book chapters on the topic "Improper reinforcement learning"
Wang, Kunfu, Ruolin Xing, Wei Feng, and Baiqiao Huang. "A Method of UAV Formation Transformation Based on Reinforcement Learning Multi-agent." In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_20.
Singh, Moirangthem Tiken, Aninda Chakrabarty, Bhargab Sarma, and Sourav Dutta. "An Improved On-Policy Reinforcement Learning Algorithm." In Advances in Intelligent Systems and Computing. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7394-1_30.
Ma, Ping, and Hong-Li Zhang. "Improved Artificial Bee Colony Algorithm Based on Reinforcement Learning." In Intelligent Computing Theories and Application. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42294-7_64.
Dai, Zixiang, and Mingyan Jiang. "An Improved Lion Swarm Algorithm Based on Reinforcement Learning." In Advances in Intelligent Automation and Soft Computing. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81007-8_10.
Kim, Jongrae. "Improved Robustness Analysis of Reinforcement Learning Embedded Control Systems." In Robot Intelligence Technology and Applications 6. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97672-9_10.
Reid, Mark, and Malcolm Ryan. "Using ILP to Improve Planning in Hierarchical Reinforcement Learning." In Inductive Logic Programming. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44960-4_11.
Callegari, Daniel Antonio, and Flávio Moreira de Oliveira. "Applying Reinforcement Learning to Improve MCOE, an Intelligent Learning Environment for Ecology." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/10720076_26.
Fountain, Jake, Josiah Walker, David Budden, Alexandre Mendes, and Stephan K. Chalup. "Motivated Reinforcement Learning for Improved Head Actuation of Humanoid Robots." In RoboCup 2013: Robot World Cup XVII. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-44468-9_24.
Liu, Jun, Yi Zhou, Yimin Qiu, and Zhongfeng Li. "An Improved Multi-objective Optimization Algorithm Based on Reinforcement Learning." In Lecture Notes in Computer Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09677-8_42.
Zhong, Chen, Chutong Ye, Chenyu Wu, and Ao Zhan. "An Improved Dynamic Spectrum Access Algorithm Based on Reinforcement Learning." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30237-4_2.
Conference abstracts on the topic "Improper reinforcement learning"
Narvekar, Sanmit. "Curriculum Learning in Reinforcement Learning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/757.
Wang, Zhaodong, and Matthew E. Taylor. "Improving Reinforcement Learning with Confidence-Based Demonstrations." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/422.
Vuong, Tung-Long, Do-Van Nguyen, Tai-Long Nguyen, et al. "Sharing Experience in Multitask Reinforcement Learning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/505.
Gabel, Thomas, Christian Lutz, and Martin Riedmiller. "Improved neural fitted Q iteration applied to a novel computer gaming and learning benchmark." In 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning. IEEE, 2011. http://dx.doi.org/10.1109/adprl.2011.5967361.
Wu, Yuechen, Wei Zhang, and Ke Song. "Master-Slave Curriculum Design for Reinforcement Learning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/211.
Qin, Yunxiao, Weiguo Zhang, Jingping Shi, and Jinglong Liu. "Improve PID controller through reinforcement learning." In 2018 IEEE CSAA Guidance, Navigation and Control Conference (GNCC). IEEE, 2018. http://dx.doi.org/10.1109/gncc42960.2018.9019095.
DESHPANDE, PRATHAMESH P., KAREN J. DEMILLE, AOWABIN RAHMAN, SUSANTA GHOSH, ASHLEY D. SPEAR, and GREGORY M. ODEGARD. "DESIGNING AN IMPROVED INTERFACE IN GRAPHENE/POLYMER COMPOSITES THROUGH MACHINE LEARNING." In Proceedings for the American Society for Composites-Thirty Seventh Technical Conference. Destech Publications, Inc., 2022. http://dx.doi.org/10.12783/asc37/36458.
Eaglin, Gerald, and Joshua Vaughan. "Leveraging Conventional Control to Improve Performance of Systems Using Reinforcement Learning." In ASME 2020 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/dscc2020-3307.
Song, Haolin, Mingxiao Feng, Wengang Zhou, and Houqiang Li. "MA2CL: Masked Attentive Contrastive Learning for Multi-Agent Reinforcement Learning." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/470.
Zhu, Hanhua. "Generalized Representation Learning Methods for Deep Reinforcement Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/748.
Pełny tekst źródłaRaporty organizacyjne na temat "Improper reinforcement learning"
Miles, Gaines E., Yael Edan, F. Tom Turpin, et al. Expert Sensor for Site Specification Application of Agricultural Chemicals. United States Department of Agriculture, 1995. http://dx.doi.org/10.32747/1995.7570567.bard.
A Decision-Making Method for Connected Autonomous Driving Based on Reinforcement Learning. SAE International, 2020. http://dx.doi.org/10.4271/2020-01-5154.