Journal articles on the topic "Safe Reinforcement Learning"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 journal articles for research on the topic "Safe Reinforcement Learning".
Next to every work in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication in .pdf format and read its abstract online, where these are available in the metadata.
Browse journal articles across a range of disciplines and compile your bibliography correctly.
Horie, Naoto, Tohgoroh Matsui, Koichi Moriyama, Atsuko Mutoh, and Nobuhiro Inuzuka. "Multi-objective safe reinforcement learning: the relationship between multi-objective reinforcement learning and safe reinforcement learning." Artificial Life and Robotics 24, no. 3 (February 8, 2019): 352–59. http://dx.doi.org/10.1007/s10015-019-00523-3.
Yang, Yongliang, Kyriakos G. Vamvoudakis, and Hamidreza Modares. "Safe reinforcement learning for dynamical games." International Journal of Robust and Nonlinear Control 30, no. 9 (March 25, 2020): 3706–26. http://dx.doi.org/10.1002/rnc.4962.
Xu, Haoran, Xianyuan Zhan, and Xiangyu Zhu. "Constraints Penalized Q-learning for Safe Offline Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8753–60. http://dx.doi.org/10.1609/aaai.v36i8.20855.
García, Javier, and Fernando Fernández. "Probabilistic Policy Reuse for Safe Reinforcement Learning." ACM Transactions on Autonomous and Adaptive Systems 13, no. 3 (March 28, 2019): 1–24. http://dx.doi.org/10.1145/3310090.
Mannucci, Tommaso, Erik-Jan van Kampen, Cornelis de Visser, and Qiping Chu. "Safe Exploration Algorithms for Reinforcement Learning Controllers." IEEE Transactions on Neural Networks and Learning Systems 29, no. 4 (April 2018): 1069–81. http://dx.doi.org/10.1109/tnnls.2017.2654539.
Karthikeyan, P., Wei-Lun Chen, and Pao-Ann Hsiung. "Autonomous Intersection Management by Using Reinforcement Learning." Algorithms 15, no. 9 (September 13, 2022): 326. http://dx.doi.org/10.3390/a15090326.
Mazouchi, Majid, Subramanya Nageshrao, and Hamidreza Modares. "Conflict-Aware Safe Reinforcement Learning: A Meta-Cognitive Learning Framework." IEEE/CAA Journal of Automatica Sinica 9, no. 3 (March 2022): 466–81. http://dx.doi.org/10.1109/jas.2021.1004353.
Cowen-Rivers, Alexander I., Daniel Palenicek, Vincent Moens, Mohammed Amin Abdullah, Aivar Sootla, Jun Wang, and Haitham Bou-Ammar. "SAMBA: safe model-based & active reinforcement learning." Machine Learning 111, no. 1 (January 2022): 173–203. http://dx.doi.org/10.1007/s10994-021-06103-6.
Serrano-Cuevas, Jonathan, Eduardo F. Morales, and Pablo Hernández-Leal. "Safe reinforcement learning using risk mapping by similarity." Adaptive Behavior 28, no. 4 (July 18, 2019): 213–24. http://dx.doi.org/10.1177/1059712319859650.
Andersen, Per-Arne, Morten Goodwin, and Ole-Christoffer Granmo. "Towards safe reinforcement-learning in industrial grid-warehousing." Information Sciences 537 (October 2020): 467–84. http://dx.doi.org/10.1016/j.ins.2020.06.010.
Carr, Steven, Nils Jansen, Sebastian Junges, and Ufuk Topcu. "Safe Reinforcement Learning via Shielding under Partial Observability." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14748–56. http://dx.doi.org/10.1609/aaai.v37i12.26723.
Dai, Juntao, Jiaming Ji, Long Yang, Qian Zheng, and Gang Pan. "Augmented Proximal Policy Optimization for Safe Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7288–95. http://dx.doi.org/10.1609/aaai.v37i6.25888.
Marchesini, Enrico, Davide Corsi, and Alessandro Farinelli. "Exploring Safer Behaviors for Deep Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7701–9. http://dx.doi.org/10.1609/aaai.v36i7.20737.
Chen, Hongyi, Yu Zhang, Uzair Aslam Bhatti, and Mengxing Huang. "Safe Decision Controller for Autonomous Driving Based on Deep Reinforcement Learning in Nondeterministic Environment." Sensors 23, no. 3 (January 20, 2023): 1198. http://dx.doi.org/10.3390/s23031198.
Ryu, Yoon-Ha, Doukhi Oualid, and Deok-Jin Lee. "Research on Safe Reinforcement Controller Using Deep Reinforcement Learning with Control Barrier Function." Journal of Institute of Control, Robotics and Systems 28, no. 11 (November 30, 2022): 1013–21. http://dx.doi.org/10.5302/j.icros.2022.22.0187.
Thananjeyan, Brijen, Ashwin Balakrishna, Suraj Nair, Michael Luo, Krishnan Srinivasan, Minho Hwang, Joseph E. Gonzalez, Julian Ibarz, Chelsea Finn, and Ken Goldberg. "Recovery RL: Safe Reinforcement Learning With Learned Recovery Zones." IEEE Robotics and Automation Letters 6, no. 3 (July 2021): 4915–22. http://dx.doi.org/10.1109/lra.2021.3070252.
Cui, Wenqi, Jiayi Li, and Baosen Zhang. "Decentralized safe reinforcement learning for inverter-based voltage control." Electric Power Systems Research 211 (October 2022): 108609. http://dx.doi.org/10.1016/j.epsr.2022.108609.
Basso, Rafael, Balázs Kulcsár, Ivan Sanchez-Diaz, and Xiaobo Qu. "Dynamic stochastic electric vehicle routing with safe reinforcement learning." Transportation Research Part E: Logistics and Transportation Review 157 (January 2022): 102496. http://dx.doi.org/10.1016/j.tre.2021.102496.
Peng, Pai, Fei Zhu, Quan Liu, Peiyao Zhao, and Wen Wu. "Achieving Safe Deep Reinforcement Learning via Environment Comprehension Mechanism." Chinese Journal of Electronics 30, no. 6 (November 2021): 1049–58. http://dx.doi.org/10.1049/cje.2021.07.025.
Mowbray, M., P. Petsagkourakis, E. A. del Rio-Chanona, and D. Zhang. "Safe chance constrained reinforcement learning for batch process control." Computers & Chemical Engineering 157 (January 2022): 107630. http://dx.doi.org/10.1016/j.compchemeng.2021.107630.
Zhao, Qingye, Yi Zhang, and Xuandong Li. "Safe reinforcement learning for dynamical systems using barrier certificates." Connection Science 34, no. 1 (December 12, 2022): 2822–44. http://dx.doi.org/10.1080/09540091.2022.2151567.
Gros, Sebastien, Mario Zanon, and Alberto Bemporad. "Safe Reinforcement Learning via Projection on a Safe Set: How to Achieve Optimality?" IFAC-PapersOnLine 53, no. 2 (2020): 8076–81. http://dx.doi.org/10.1016/j.ifacol.2020.12.2276.
Lu, Xiaozhen, Liang Xiao, Guohang Niu, Xiangyang Ji, and Qian Wang. "Safe Exploration in Wireless Security: A Safe Reinforcement Learning Algorithm With Hierarchical Structure." IEEE Transactions on Information Forensics and Security 17 (2022): 732–43. http://dx.doi.org/10.1109/tifs.2022.3149396.
Yuan, Zhaocong, Adam W. Hall, Siqi Zhou, Lukas Brunke, Melissa Greeff, Jacopo Panerati, and Angela P. Schoellig. "Safe-Control-Gym: A Unified Benchmark Suite for Safe Learning-Based Control and Reinforcement Learning in Robotics." IEEE Robotics and Automation Letters 7, no. 4 (October 2022): 11142–49. http://dx.doi.org/10.1109/lra.2022.3196132.
Garcia, J., and F. Fernandez. "Safe Exploration of State and Action Spaces in Reinforcement Learning." Journal of Artificial Intelligence Research 45 (December 19, 2012): 515–64. http://dx.doi.org/10.1613/jair.3761.
Ma, Yecheng Jason, Andrew Shen, Osbert Bastani, and Dinesh Jayaraman. "Conservative and Adaptive Penalty for Model-Based Safe Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5404–12. http://dx.doi.org/10.1609/aaai.v36i5.20478.
Chen, Hongyi, and Changliu Liu. "Safe and Sample-Efficient Reinforcement Learning for Clustered Dynamic Environments." IEEE Control Systems Letters 6 (2022): 1928–33. http://dx.doi.org/10.1109/lcsys.2021.3136486.
Yang, Yongliang, Kyriakos G. Vamvoudakis, Hamidreza Modares, Yixin Yin, and Donald C. Wunsch. "Safe Intermittent Reinforcement Learning With Static and Dynamic Event Generators." IEEE Transactions on Neural Networks and Learning Systems 31, no. 12 (December 2020): 5441–55. http://dx.doi.org/10.1109/tnnls.2020.2967871.
Li, Hepeng, Zhiqiang Wan, and Haibo He. "Constrained EV Charging Scheduling Based on Safe Deep Reinforcement Learning." IEEE Transactions on Smart Grid 11, no. 3 (May 2020): 2427–39. http://dx.doi.org/10.1109/tsg.2019.2955437.
Hailemichael, Habtamu, Beshah Ayalew, Lindsey Kerbel, Andrej Ivanco, and Keith Loiselle. "Safe Reinforcement Learning for an Energy-Efficient Driver Assistance System." IFAC-PapersOnLine 55, no. 37 (2022): 615–20. http://dx.doi.org/10.1016/j.ifacol.2022.11.250.
Minamoto, Gaku, Toshimitsu Kaneko, and Noriyuki Hirayama. "Autonomous driving with safe reinforcement learning using rule-based judgment." Proceedings of JSME Annual Conference on Robotics and Mechatronics (Robomec) 2022 (2022): 2A2-K03. http://dx.doi.org/10.1299/jsmermd.2022.2a2-k03.
Pathak, Shashank, Luca Pulina, and Armando Tacchella. "Verification and repair of control policies for safe reinforcement learning." Applied Intelligence 48, no. 4 (August 5, 2017): 886–908. http://dx.doi.org/10.1007/s10489-017-0999-8.
Dong, Wenbo, Shaofan Liu, and Shiliang Sun. "Safe batch constrained deep reinforcement learning with generative adversarial network." Information Sciences 634 (July 2023): 259–70. http://dx.doi.org/10.1016/j.ins.2023.03.108.
Kondrup, Flemming, Thomas Jiralerspong, Elaine Lau, Nathan De Lara, Jacob Shkrob, My Duc Tran, Doina Precup, and Sumana Basu. "Towards Safe Mechanical Ventilation Treatment Using Deep Offline Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15696–702. http://dx.doi.org/10.1609/aaai.v37i13.26862.
Fu, Yanbo, Wenjie Zhao, and Liu Liu. "Safe Reinforcement Learning for Transition Control of Ducted-Fan UAVs." Drones 7, no. 5 (May 22, 2023): 332. http://dx.doi.org/10.3390/drones7050332.
Xiao, Xinhang. "Reinforcement Learning Optimized Intelligent Electricity Dispatching System." Journal of Physics: Conference Series 2215, no. 1 (February 1, 2022): 012013. http://dx.doi.org/10.1088/1742-6596/2215/1/012013.
Yoon, Jae Ung, and Juhong Lee. "Uncertainty Sequence Modeling Approach for Safe and Effective Autonomous Driving." Korean Institute of Smart Media 11, no. 9 (October 31, 2022): 9–20. http://dx.doi.org/10.30693/smj.2022.11.9.9.
Perk, Baris Eren, and Gokhan Inalhan. "Safe Motion Planning and Learning for Unmanned Aerial Systems." Aerospace 9, no. 2 (January 22, 2022): 56. http://dx.doi.org/10.3390/aerospace9020056.
Ugurlu, Halil Ibrahim, Xuan Huy Pham, and Erdal Kayacan. "Sim-to-Real Deep Reinforcement Learning for Safe End-to-End Planning of Aerial Robots." Robotics 11, no. 5 (October 13, 2022): 109. http://dx.doi.org/10.3390/robotics11050109.
Lu, Songtao, Kaiqing Zhang, Tianyi Chen, Tamer Başar, and Lior Horesh. "Decentralized Policy Gradient Descent Ascent for Safe Multi-Agent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8767–75. http://dx.doi.org/10.1609/aaai.v35i10.17062.
Ji, Guanglin, Junyan Yan, Jingxin Du, Wanquan Yan, Jibiao Chen, Yongkang Lu, Juan Rojas, and Shing Shin Cheng. "Towards Safe Control of Continuum Manipulator Using Shielded Multiagent Reinforcement Learning." IEEE Robotics and Automation Letters 6, no. 4 (October 2021): 7461–68. http://dx.doi.org/10.1109/lra.2021.3097660.
Savage, Thomas, Dongda Zhang, Max Mowbray, and Ehecatl Antonio Del Río Chanona. "Model-free safe reinforcement learning for chemical processes using Gaussian processes." IFAC-PapersOnLine 54, no. 3 (2021): 504–9. http://dx.doi.org/10.1016/j.ifacol.2021.08.292.
Du, Bin, Bin Lin, Chenming Zhang, Botao Dong, and Weidong Zhang. "Safe deep reinforcement learning-based adaptive control for USV interception mission." Ocean Engineering 246 (February 2022): 110477. http://dx.doi.org/10.1016/j.oceaneng.2021.110477.
Kim, Dohyeong, and Songhwai Oh. "TRC: Trust Region Conditional Value at Risk for Safe Reinforcement Learning." IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 2621–28. http://dx.doi.org/10.1109/lra.2022.3141829.
García, Javier, and Diogo Shafie. "Teaching a humanoid robot to walk faster through Safe Reinforcement Learning." Engineering Applications of Artificial Intelligence 88 (February 2020): 103360. http://dx.doi.org/10.1016/j.engappai.2019.103360.
Cohen, Max H., and Calin Belta. "Safe exploration in model-based reinforcement learning using control barrier functions." Automatica 147 (January 2023): 110684. http://dx.doi.org/10.1016/j.automatica.2022.110684.
Selvaraj, Dinesh Cyril, Shailesh Hegde, Nicola Amati, Francesco Deflorio, and Carla Fabiana Chiasserini. "A Deep Reinforcement Learning Approach for Efficient, Safe and Comfortable Driving." Applied Sciences 13, no. 9 (April 23, 2023): 5272. http://dx.doi.org/10.3390/app13095272.
Vasilenko, Elizaveta, Niki Vazou, and Gilles Barthe. "Safe couplings: coupled refinement types." Proceedings of the ACM on Programming Languages 6, ICFP (August 29, 2022): 596–624. http://dx.doi.org/10.1145/3547643.
Xiao, Wenli, Yiwei Lyu, and John M. Dolan. "Tackling Safe and Efficient Multi-Agent Reinforcement Learning via Dynamic Shielding (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16362–63. http://dx.doi.org/10.1609/aaai.v37i13.27041.
Yang, Yanhua, and Ligang Yao. "Optimization Method of Power Equipment Maintenance Plan Decision-Making Based on Deep Reinforcement Learning." Mathematical Problems in Engineering 2021 (March 15, 2021): 1–8. http://dx.doi.org/10.1155/2021/9372803.