Journal articles on the topic "Safe Reinforcement Learning"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles on the topic "Safe Reinforcement Learning".
Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read the online abstract, if these details are available in the work's metadata.
Browse journal articles from a wide range of disciplines and compile accurate bibliographies.
Horie, Naoto, Tohgoroh Matsui, Koichi Moriyama, Atsuko Mutoh and Nobuhiro Inuzuka. "Multi-objective safe reinforcement learning: the relationship between multi-objective reinforcement learning and safe reinforcement learning". Artificial Life and Robotics 24, no. 3 (February 8, 2019): 352–59. http://dx.doi.org/10.1007/s10015-019-00523-3.
Yang, Yongliang, Kyriakos G. Vamvoudakis and Hamidreza Modares. "Safe reinforcement learning for dynamical games". International Journal of Robust and Nonlinear Control 30, no. 9 (March 25, 2020): 3706–26. http://dx.doi.org/10.1002/rnc.4962.
Xu, Haoran, Xianyuan Zhan and Xiangyu Zhu. "Constraints Penalized Q-learning for Safe Offline Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8753–60. http://dx.doi.org/10.1609/aaai.v36i8.20855.
García, Javier, and Fernando Fernández. "Probabilistic Policy Reuse for Safe Reinforcement Learning". ACM Transactions on Autonomous and Adaptive Systems 13, no. 3 (March 28, 2019): 1–24. http://dx.doi.org/10.1145/3310090.
Mannucci, Tommaso, Erik-Jan van Kampen, Cornelis de Visser and Qiping Chu. "Safe Exploration Algorithms for Reinforcement Learning Controllers". IEEE Transactions on Neural Networks and Learning Systems 29, no. 4 (April 2018): 1069–81. http://dx.doi.org/10.1109/tnnls.2017.2654539.
Karthikeyan, P., Wei-Lun Chen and Pao-Ann Hsiung. "Autonomous Intersection Management by Using Reinforcement Learning". Algorithms 15, no. 9 (September 13, 2022): 326. http://dx.doi.org/10.3390/a15090326.
Mazouchi, Majid, Subramanya Nageshrao and Hamidreza Modares. "Conflict-Aware Safe Reinforcement Learning: A Meta-Cognitive Learning Framework". IEEE/CAA Journal of Automatica Sinica 9, no. 3 (March 2022): 466–81. http://dx.doi.org/10.1109/jas.2021.1004353.
Cowen-Rivers, Alexander I., Daniel Palenicek, Vincent Moens, Mohammed Amin Abdullah, Aivar Sootla, Jun Wang and Haitham Bou-Ammar. "SAMBA: safe model-based & active reinforcement learning". Machine Learning 111, no. 1 (January 2022): 173–203. http://dx.doi.org/10.1007/s10994-021-06103-6.
Serrano-Cuevas, Jonathan, Eduardo F. Morales and Pablo Hernández-Leal. "Safe reinforcement learning using risk mapping by similarity". Adaptive Behavior 28, no. 4 (July 18, 2019): 213–24. http://dx.doi.org/10.1177/1059712319859650.
Andersen, Per-Arne, Morten Goodwin and Ole-Christoffer Granmo. "Towards safe reinforcement-learning in industrial grid-warehousing". Information Sciences 537 (October 2020): 467–84. http://dx.doi.org/10.1016/j.ins.2020.06.010.
Carr, Steven, Nils Jansen, Sebastian Junges and Ufuk Topcu. "Safe Reinforcement Learning via Shielding under Partial Observability". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14748–56. http://dx.doi.org/10.1609/aaai.v37i12.26723.
Dai, Juntao, Jiaming Ji, Long Yang, Qian Zheng and Gang Pan. "Augmented Proximal Policy Optimization for Safe Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7288–95. http://dx.doi.org/10.1609/aaai.v37i6.25888.
Marchesini, Enrico, Davide Corsi and Alessandro Farinelli. "Exploring Safer Behaviors for Deep Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7701–9. http://dx.doi.org/10.1609/aaai.v36i7.20737.
Chen, Hongyi, Yu Zhang, Uzair Aslam Bhatti and Mengxing Huang. "Safe Decision Controller for Autonomous Driving Based on Deep Reinforcement Learning in Nondeterministic Environment". Sensors 23, no. 3 (January 20, 2023): 1198. http://dx.doi.org/10.3390/s23031198.
Ryu, Yoon-Ha, Doukhi Oualid and Deok-Jin Lee. "Research on Safe Reinforcement Controller Using Deep Reinforcement Learning with Control Barrier Function". Journal of Institute of Control, Robotics and Systems 28, no. 11 (November 30, 2022): 1013–21. http://dx.doi.org/10.5302/j.icros.2022.22.0187.
Thananjeyan, Brijen, Ashwin Balakrishna, Suraj Nair, Michael Luo, Krishnan Srinivasan, Minho Hwang, Joseph E. Gonzalez, Julian Ibarz, Chelsea Finn and Ken Goldberg. "Recovery RL: Safe Reinforcement Learning With Learned Recovery Zones". IEEE Robotics and Automation Letters 6, no. 3 (July 2021): 4915–22. http://dx.doi.org/10.1109/lra.2021.3070252.
Cui, Wenqi, Jiayi Li and Baosen Zhang. "Decentralized safe reinforcement learning for inverter-based voltage control". Electric Power Systems Research 211 (October 2022): 108609. http://dx.doi.org/10.1016/j.epsr.2022.108609.
Basso, Rafael, Balázs Kulcsár, Ivan Sanchez-Diaz and Xiaobo Qu. "Dynamic stochastic electric vehicle routing with safe reinforcement learning". Transportation Research Part E: Logistics and Transportation Review 157 (January 2022): 102496. http://dx.doi.org/10.1016/j.tre.2021.102496.
Peng, Pai, Fei Zhu, Quan Liu, Peiyao Zhao and Wen Wu. "Achieving Safe Deep Reinforcement Learning via Environment Comprehension Mechanism". Chinese Journal of Electronics 30, no. 6 (November 2021): 1049–58. http://dx.doi.org/10.1049/cje.2021.07.025.
Mowbray, M., P. Petsagkourakis, E. A. del Rio-Chanona and D. Zhang. "Safe chance constrained reinforcement learning for batch process control". Computers & Chemical Engineering 157 (January 2022): 107630. http://dx.doi.org/10.1016/j.compchemeng.2021.107630.
Zhao, Qingye, Yi Zhang and Xuandong Li. "Safe reinforcement learning for dynamical systems using barrier certificates". Connection Science 34, no. 1 (December 12, 2022): 2822–44. http://dx.doi.org/10.1080/09540091.2022.2151567.
Gros, Sebastien, Mario Zanon and Alberto Bemporad. "Safe Reinforcement Learning via Projection on a Safe Set: How to Achieve Optimality?" IFAC-PapersOnLine 53, no. 2 (2020): 8076–81. http://dx.doi.org/10.1016/j.ifacol.2020.12.2276.
Lu, Xiaozhen, Liang Xiao, Guohang Niu, Xiangyang Ji and Qian Wang. "Safe Exploration in Wireless Security: A Safe Reinforcement Learning Algorithm With Hierarchical Structure". IEEE Transactions on Information Forensics and Security 17 (2022): 732–43. http://dx.doi.org/10.1109/tifs.2022.3149396.
Yuan, Zhaocong, Adam W. Hall, Siqi Zhou, Lukas Brunke, Melissa Greeff, Jacopo Panerati and Angela P. Schoellig. "Safe-Control-Gym: A Unified Benchmark Suite for Safe Learning-Based Control and Reinforcement Learning in Robotics". IEEE Robotics and Automation Letters 7, no. 4 (October 2022): 11142–49. http://dx.doi.org/10.1109/lra.2022.3196132.
Garcia, J., and F. Fernandez. "Safe Exploration of State and Action Spaces in Reinforcement Learning". Journal of Artificial Intelligence Research 45 (December 19, 2012): 515–64. http://dx.doi.org/10.1613/jair.3761.
Ma, Yecheng Jason, Andrew Shen, Osbert Bastani and Dinesh Jayaraman. "Conservative and Adaptive Penalty for Model-Based Safe Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5404–12. http://dx.doi.org/10.1609/aaai.v36i5.20478.
Chen, Hongyi, and Changliu Liu. "Safe and Sample-Efficient Reinforcement Learning for Clustered Dynamic Environments". IEEE Control Systems Letters 6 (2022): 1928–33. http://dx.doi.org/10.1109/lcsys.2021.3136486.
Yang, Yongliang, Kyriakos G. Vamvoudakis, Hamidreza Modares, Yixin Yin and Donald C. Wunsch. "Safe Intermittent Reinforcement Learning With Static and Dynamic Event Generators". IEEE Transactions on Neural Networks and Learning Systems 31, no. 12 (December 2020): 5441–55. http://dx.doi.org/10.1109/tnnls.2020.2967871.
Li, Hepeng, Zhiqiang Wan and Haibo He. "Constrained EV Charging Scheduling Based on Safe Deep Reinforcement Learning". IEEE Transactions on Smart Grid 11, no. 3 (May 2020): 2427–39. http://dx.doi.org/10.1109/tsg.2019.2955437.
Hailemichael, Habtamu, Beshah Ayalew, Lindsey Kerbel, Andrej Ivanco and Keith Loiselle. "Safe Reinforcement Learning for an Energy-Efficient Driver Assistance System". IFAC-PapersOnLine 55, no. 37 (2022): 615–20. http://dx.doi.org/10.1016/j.ifacol.2022.11.250.
Minamoto, Gaku, Toshimitsu Kaneko and Noriyuki Hirayama. "Autonomous driving with safe reinforcement learning using rule-based judgment". Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2022 (2022): 2A2-K03. http://dx.doi.org/10.1299/jsmermd.2022.2a2-k03.
Pathak, Shashank, Luca Pulina and Armando Tacchella. "Verification and repair of control policies for safe reinforcement learning". Applied Intelligence 48, no. 4 (August 5, 2017): 886–908. http://dx.doi.org/10.1007/s10489-017-0999-8.
Dong, Wenbo, Shaofan Liu and Shiliang Sun. "Safe batch constrained deep reinforcement learning with generative adversarial network". Information Sciences 634 (July 2023): 259–70. http://dx.doi.org/10.1016/j.ins.2023.03.108.
Kondrup, Flemming, Thomas Jiralerspong, Elaine Lau, Nathan De Lara, Jacob Shkrob, My Duc Tran, Doina Precup and Sumana Basu. "Towards Safe Mechanical Ventilation Treatment Using Deep Offline Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15696–702. http://dx.doi.org/10.1609/aaai.v37i13.26862.
Fu, Yanbo, Wenjie Zhao and Liu Liu. "Safe Reinforcement Learning for Transition Control of Ducted-Fan UAVs". Drones 7, no. 5 (May 22, 2023): 332. http://dx.doi.org/10.3390/drones7050332.
Xiao, Xinhang. "Reinforcement Learning Optimized Intelligent Electricity Dispatching System". Journal of Physics: Conference Series 2215, no. 1 (February 1, 2022): 012013. http://dx.doi.org/10.1088/1742-6596/2215/1/012013.
Yoon, Jae Ung, and Juhong Lee. "Uncertainty Sequence Modeling Approach for Safe and Effective Autonomous Driving". Korean Institute of Smart Media 11, no. 9 (October 31, 2022): 9–20. http://dx.doi.org/10.30693/smj.2022.11.9.9.
Perk, Baris Eren, and Gokhan Inalhan. "Safe Motion Planning and Learning for Unmanned Aerial Systems". Aerospace 9, no. 2 (January 22, 2022): 56. http://dx.doi.org/10.3390/aerospace9020056.
Ugurlu, Halil Ibrahim, Xuan Huy Pham and Erdal Kayacan. "Sim-to-Real Deep Reinforcement Learning for Safe End-to-End Planning of Aerial Robots". Robotics 11, no. 5 (October 13, 2022): 109. http://dx.doi.org/10.3390/robotics11050109.
Lu, Songtao, Kaiqing Zhang, Tianyi Chen, Tamer Başar and Lior Horesh. "Decentralized Policy Gradient Descent Ascent for Safe Multi-Agent Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8767–75. http://dx.doi.org/10.1609/aaai.v35i10.17062.
Ji, Guanglin, Junyan Yan, Jingxin Du, Wanquan Yan, Jibiao Chen, Yongkang Lu, Juan Rojas and Shing Shin Cheng. "Towards Safe Control of Continuum Manipulator Using Shielded Multiagent Reinforcement Learning". IEEE Robotics and Automation Letters 6, no. 4 (October 2021): 7461–68. http://dx.doi.org/10.1109/lra.2021.3097660.
Savage, Thomas, Dongda Zhang, Max Mowbray and Ehecatl Antonio Del Río Chanona. "Model-free safe reinforcement learning for chemical processes using Gaussian processes". IFAC-PapersOnLine 54, no. 3 (2021): 504–9. http://dx.doi.org/10.1016/j.ifacol.2021.08.292.
Du, Bin, Bin Lin, Chenming Zhang, Botao Dong and Weidong Zhang. "Safe deep reinforcement learning-based adaptive control for USV interception mission". Ocean Engineering 246 (February 2022): 110477. http://dx.doi.org/10.1016/j.oceaneng.2021.110477.
Kim, Dohyeong, and Songhwai Oh. "TRC: Trust Region Conditional Value at Risk for Safe Reinforcement Learning". IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 2621–28. http://dx.doi.org/10.1109/lra.2022.3141829.
García, Javier, and Diogo Shafie. "Teaching a humanoid robot to walk faster through Safe Reinforcement Learning". Engineering Applications of Artificial Intelligence 88 (February 2020): 103360. http://dx.doi.org/10.1016/j.engappai.2019.103360.
Cohen, Max H., and Calin Belta. "Safe exploration in model-based reinforcement learning using control barrier functions". Automatica 147 (January 2023): 110684. http://dx.doi.org/10.1016/j.automatica.2022.110684.
Selvaraj, Dinesh Cyril, Shailesh Hegde, Nicola Amati, Francesco Deflorio and Carla Fabiana Chiasserini. "A Deep Reinforcement Learning Approach for Efficient, Safe and Comfortable Driving". Applied Sciences 13, no. 9 (April 23, 2023): 5272. http://dx.doi.org/10.3390/app13095272.
Vasilenko, Elizaveta, Niki Vazou and Gilles Barthe. "Safe couplings: coupled refinement types". Proceedings of the ACM on Programming Languages 6, ICFP (August 29, 2022): 596–624. http://dx.doi.org/10.1145/3547643.
Xiao, Wenli, Yiwei Lyu and John M. Dolan. "Tackling Safe and Efficient Multi-Agent Reinforcement Learning via Dynamic Shielding (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16362–63. http://dx.doi.org/10.1609/aaai.v37i13.27041.
Yang, Yanhua, and Ligang Yao. "Optimization Method of Power Equipment Maintenance Plan Decision-Making Based on Deep Reinforcement Learning". Mathematical Problems in Engineering 2021 (March 15, 2021): 1–8. http://dx.doi.org/10.1155/2021/9372803.