Journal articles on the topic "Safe RL"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles.
Browse the top 50 journal articles for research on the topic "Safe RL".
Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, when these details are available in the work's metadata.
Browse journal articles across many disciplines and compile your bibliography correctly.
Carr, Steven, Nils Jansen, Sebastian Junges, and Ufuk Topcu. "Safe Reinforcement Learning via Shielding under Partial Observability." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14748–56. http://dx.doi.org/10.1609/aaai.v37i12.26723.
Ma, Yecheng Jason, Andrew Shen, Osbert Bastani, and Dinesh Jayaraman. "Conservative and Adaptive Penalty for Model-Based Safe Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5404–12. http://dx.doi.org/10.1609/aaai.v36i5.20478.
Xu, Haoran, Xianyuan Zhan, and Xiangyu Zhu. "Constraints Penalized Q-learning for Safe Offline Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8753–60. http://dx.doi.org/10.1609/aaai.v36i8.20855.
Thananjeyan, Brijen, Ashwin Balakrishna, Suraj Nair, Michael Luo, Krishnan Srinivasan, Minho Hwang, Joseph E. Gonzalez, Julian Ibarz, Chelsea Finn, and Ken Goldberg. "Recovery RL: Safe Reinforcement Learning With Learned Recovery Zones." IEEE Robotics and Automation Letters 6, no. 3 (July 2021): 4915–22. http://dx.doi.org/10.1109/lra.2021.3070252.
Serrano-Cuevas, Jonathan, Eduardo F. Morales, and Pablo Hernández-Leal. "Safe reinforcement learning using risk mapping by similarity." Adaptive Behavior 28, no. 4 (July 18, 2019): 213–24. http://dx.doi.org/10.1177/1059712319859650.
Cheng, Richard, Gábor Orosz, Richard M. Murray, and Joel W. Burdick. "End-to-End Safe Reinforcement Learning through Barrier Functions for Safety-Critical Continuous Control Tasks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3387–95. http://dx.doi.org/10.1609/aaai.v33i01.33013387.
Jurj, Sorin Liviu, Dominik Grundt, Tino Werner, Philipp Borchers, Karina Rothemann, and Eike Möhlmann. "Increasing the Safety of Adaptive Cruise Control Using Physics-Guided Reinforcement Learning." Energies 14, no. 22 (November 12, 2021): 7572. http://dx.doi.org/10.3390/en14227572.
Sakrihei, Helen. "Using automatic storage for ILL – experiences from the National Repository Library in Norway." Interlending & Document Supply 44, no. 1 (February 15, 2016): 14–16. http://dx.doi.org/10.1108/ilds-11-2015-0035.
Ding, Yuhao, and Javad Lavaei. "Provably Efficient Primal-Dual Reinforcement Learning for CMDPs with Non-stationary Objectives and Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7396–404. http://dx.doi.org/10.1609/aaai.v37i6.25900.
Tubeuf, Carlotta, Felix Birkelbach, Anton Maly, and René Hofmann. "Increasing the Flexibility of Hydropower with Reinforcement Learning on a Digital Twin Platform." Energies 16, no. 4 (February 11, 2023): 1796. http://dx.doi.org/10.3390/en16041796.
Yoon, Jae Ung, and Juhong Lee. "Uncertainty Sequence Modeling Approach for Safe and Effective Autonomous Driving." Korean Institute of Smart Media 11, no. 9 (October 31, 2022): 9–20. http://dx.doi.org/10.30693/smj.2022.11.9.9.
Lin, Xingbin, Deyu Yuan, and Xifei Li. "Reinforcement Learning with Dual Safety Policies for Energy Savings in Building Energy Systems." Buildings 13, no. 3 (February 21, 2023): 580. http://dx.doi.org/10.3390/buildings13030580.
Marchesini, Enrico, Davide Corsi, and Alessandro Farinelli. "Exploring Safer Behaviors for Deep Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7701–9. http://dx.doi.org/10.1609/aaai.v36i7.20737.
Egleston, David, Patricia Ann Castelli, and Thomas George Marx. "Developing, validating, and testing a model of reflective leadership." Leadership & Organization Development Journal 38, no. 7 (September 4, 2017): 886–96. http://dx.doi.org/10.1108/lodj-09-2016-0230.
Huh, Gene, and Wonjae Cha. "Development and Clinical Application of Real-Time Light-Guided Vocal Fold Injection." Journal of The Korean Society of Laryngology, Phoniatrics and Logopedics 33, no. 1 (April 30, 2022): 1–6. http://dx.doi.org/10.22469/jkslp.2022.33.1.1.
Ramakrishnan, Ramya, Ece Kamar, Debadeepta Dey, Eric Horvitz, and Julie Shah. "Blind Spot Detection for Safe Sim-to-Real Transfer." Journal of Artificial Intelligence Research 67 (February 4, 2020): 191–234. http://dx.doi.org/10.1613/jair.1.11436.
Hao, Hao, Yichen Sun, Xueyun Mei, and Yanjun Zhou. "Reverse Logistics Network Design of Electric Vehicle Batteries considering Recall Risk." Mathematical Problems in Engineering 2021 (August 18, 2021): 1–16. http://dx.doi.org/10.1155/2021/5518049.
Ray, Kaustabha, and Ansuman Banerjee. "Horizontal Auto-Scaling for Multi-Access Edge Computing Using Safe Reinforcement Learning." ACM Transactions on Embedded Computing Systems 20, no. 6 (November 30, 2021): 1–33. http://dx.doi.org/10.1145/3475991.
Delgado, Tomás, Marco Sánchez Sorondo, Víctor Braberman, and Sebastián Uchitel. "Exploration Policies for On-the-Fly Controller Synthesis: A Reinforcement Learning Approach." Proceedings of the International Conference on Automated Planning and Scheduling 33, no. 1 (July 1, 2023): 569–77. http://dx.doi.org/10.1609/icaps.v33i1.27238.
Bolster, Lauren, Mark Bosch, Brian Brownbridge, and Anurag Saxena. "RAP Trial: Ringer's Lactate and Packed Red Blood Cell Transfusion, An in Vitro Study and Chart Review." Blood 114, no. 22 (November 20, 2009): 2105. http://dx.doi.org/10.1182/blood.v114.22.2105.2105.
Romey, Aurore, Hussaini G. Ularamu, Abdulnaci Bulut, Syed M. Jamal, Salman Khan, Muhammad Ishaq, Michael Eschbaumer, et al. "Field Evaluation of a Safe, Easy, and Low-Cost Protocol for Shipment of Samples from Suspected Cases of Foot-and-Mouth Disease to Diagnostic Laboratories." Transboundary and Emerging Diseases 2023 (August 5, 2023): 1–15. http://dx.doi.org/10.1155/2023/9555213.
Dai, Juntao, Jiaming Ji, Long Yang, Qian Zheng, and Gang Pan. "Augmented Proximal Policy Optimization for Safe Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7288–95. http://dx.doi.org/10.1609/aaai.v37i6.25888.
Krstić, Mladen, Giulio Paolo Agnusdei, Pier Paolo Miglietta, Snežana Tadić, and Violeta Roso. "Applicability of Industry 4.0 Technologies in the Reverse Logistics: A Circular Economy Approach Based on COmprehensive Distance Based RAnking (COBRA) Method." Sustainability 14, no. 9 (May 7, 2022): 5632. http://dx.doi.org/10.3390/su14095632.
Prasetyo, Risky Vitria, Abdul Latief Azis, and Soegeng Soegijanto. "Comparison of the efficacy and safety of hydroxyethyl starch 130/0.4 and Ringer's lactate in children with grade III dengue hemorrhagic fever." Paediatrica Indonesiana 49, no. 2 (April 30, 2009): 97. http://dx.doi.org/10.14238/pi49.2.2009.97-103.
Böck, Markus, Julien Malle, Daniel Pasterk, Hrvoje Kukina, Ramin Hasani, and Clemens Heitzinger. "Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning." PLOS ONE 17, no. 11 (November 3, 2022): e0275358. http://dx.doi.org/10.1371/journal.pone.0275358.
Li, Yue, Xiao Yong Bai, Shi Jie Wang, Luo Yi Qin, Yi Chao Tian, and Guang Jie Luo. "Evaluating of the spatial heterogeneity of soil loss tolerance and its effects on erosion risk in the carbonate areas of southern China." Solid Earth 8, no. 3 (May 29, 2017): 661–69. http://dx.doi.org/10.5194/se-8-661-2017.
Kondrup, Flemming, Thomas Jiralerspong, Elaine Lau, Nathan De Lara, Jacob Shkrob, My Duc Tran, Doina Precup, and Sumana Basu. "Towards Safe Mechanical Ventilation Treatment Using Deep Offline Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15696–702. http://dx.doi.org/10.1609/aaai.v37i13.26862.
Miyajima, Hirofumi, Noritaka Shigei, Syunki Makino, Hiromi Miyajima, Yohtaro Miyanishi, Shinji Kitagami, and Norio Shiratori. "A proposal of privacy preserving reinforcement learning for secure multiparty computation." Artificial Intelligence Research 6, no. 2 (May 23, 2017): 57. http://dx.doi.org/10.5430/air.v6n2p57.
Thananjeyan, Brijen, Ashwin Balakrishna, Ugo Rosolia, Felix Li, Rowan McAllister, Joseph E. Gonzalez, Sergey Levine, Francesco Borrelli, and Ken Goldberg. "Safety Augmented Value Estimation From Demonstrations (SAVED): Safe Deep Model-Based RL for Sparse Cost Robotic Tasks." IEEE Robotics and Automation Letters 5, no. 2 (April 2020): 3612–19. http://dx.doi.org/10.1109/lra.2020.2976272.
Ren, Tianzhu, Yuanchang Xie, and Liming Jiang. "Cooperative Highway Work Zone Merge Control Based on Reinforcement Learning in a Connected and Automated Environment." Transportation Research Record: Journal of the Transportation Research Board 2674, no. 10 (July 17, 2020): 363–74. http://dx.doi.org/10.1177/0361198120935873.
Reda, Ahmad, and József Vásárhelyi. "Design and Implementation of Reinforcement Learning for Automated Driving Compared to Classical MPC Control." Designs 7, no. 1 (January 29, 2023): 18. http://dx.doi.org/10.3390/designs7010018.
Gardille, Arnaud, and Ola Ahmad. "Towards Safe Reinforcement Learning via OOD Dynamics Detection in Autonomous Driving System (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16216–17. http://dx.doi.org/10.1609/aaai.v37i13.26968.
Free, David. "In the News." College & Research Libraries News 80, no. 10 (November 5, 2019): 541. http://dx.doi.org/10.5860/crln.80.10.541.
Xu, Xibao, Yushen Chen, and Chengchao Bai. "Deep Reinforcement Learning-Based Accurate Control of Planetary Soft Landing." Sensors 21, no. 23 (December 6, 2021): 8161. http://dx.doi.org/10.3390/s21238161.
Simão, Thiago D., Marnix Suilen, and Nils Jansen. "Safe Policy Improvement for POMDPs via Finite-State Controllers." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15109–17. http://dx.doi.org/10.1609/aaai.v37i12.26763.
Zhang, Linrui, Qin Zhang, Li Shen, Bo Yuan, Xueqian Wang, and Dacheng Tao. "Evaluating Model-Free Reinforcement Learning toward Safety-Critical Tasks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15313–21. http://dx.doi.org/10.1609/aaai.v37i12.26786.
Angele, Martin K., Nadia Smail, Markus W. Knöferl, Alfred Ayala, William G. Cioffi, and Irshad H. Chaudry. "l-Arginine restores splenocyte functions after trauma and hemorrhage potentially by improving splenic blood flow." American Journal of Physiology-Cell Physiology 276, no. 1 (January 1, 1999): C145–C151. http://dx.doi.org/10.1152/ajpcell.1999.276.1.c145.
Staessens, Tom, Tom Lefebvre, and Guillaume Crevecoeur. "Optimizing Cascaded Control of Mechatronic Systems through Constrained Residual Reinforcement Learning." Machines 11, no. 3 (March 20, 2023): 402. http://dx.doi.org/10.3390/machines11030402.
Lv, Kexuan, Xiaofei Pei, Ci Chen, and Jie Xu. "A Safe and Efficient Lane Change Decision-Making Strategy of Autonomous Driving Based on Deep Reinforcement Learning." Mathematics 10, no. 9 (May 5, 2022): 1551. http://dx.doi.org/10.3390/math10091551.
Jurj, Sorin Liviu, Tino Werner, Dominik Grundt, Willem Hagemann, and Eike Möhlmann. "Towards Safe and Sustainable Autonomous Vehicles Using Environmentally-Friendly Criticality Metrics." Sustainability 14, no. 12 (June 7, 2022): 6988. http://dx.doi.org/10.3390/su14126988.
Maw, Aye Aye, Maxim Tyan, Tuan Anh Nguyen, and Jae-Woo Lee. "iADA*-RL: Anytime Graph-Based Path Planning with Deep Reinforcement Learning for an Autonomous UAV." Applied Sciences 11, no. 9 (April 27, 2021): 3948. http://dx.doi.org/10.3390/app11093948.
Civetta, Joseph M., and Charles L. Fox. "Advantages of Resuscitation with Balanced Hypertonic Sodium Solution in Disasters." Prehospital and Disaster Medicine 1, S1 (1985): 179–80. http://dx.doi.org/10.1017/s1049023x0004437x.
Wysocka, B. A., Z. Kassam, G. Lockwood, J. Brierley, L. Dawson, and J. Ringash. "Assessment of intra and interfractional organ motion during adjuvant radiochemotherapy in gastric cancer." Journal of Clinical Oncology 25, no. 18_suppl (June 20, 2007): 15132. http://dx.doi.org/10.1200/jco.2007.25.18_suppl.15132.
Niu, Tong, and Mohit Bansal. "AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8560–67. http://dx.doi.org/10.1609/aaai.v34i05.6378.
Vivek, Kumar, Shah Amiti, Saha Shivshankar, and Choudhary Lalit. "Electrolyte and Haemogram changes post large volume liposuction comparing two different tumescent solutions." Indian Journal of Plastic Surgery 47, no. 03 (September 2014): 386–93. http://dx.doi.org/10.4103/0970-0358.146604.
Chebbi, Alif, Massimiliano Tazzari, Cristiana Rizzi, Franco Hernan Gomez Tovar, Sara Villa, Silvia Sbaffoni, Mentore Vaccari, and Andrea Franzetti. "Burkholderia thailandensis E264 as a promising safe rhamnolipids’ producer towards a sustainable valorization of grape marcs and olive mill pomace." Applied Microbiology and Biotechnology 105, no. 9 (April 20, 2021): 3825–42. http://dx.doi.org/10.1007/s00253-021-11292-0.
Brown, Jennifer R., Matthew S. Davids, Jordi Rodon, Pau Abrisqueta, Coumaran Egile, Rodrigo Ruiz-Soto, and Farrukh Awan. "Update On The Safety and Efficacy Of The Pan Class I PI3K Inhibitor SAR245408 (XL147) In Chronic Lymphocytic Leukemia and Non-Hodgkin’s Lymphoma Patients." Blood 122, no. 21 (November 15, 2013): 4170. http://dx.doi.org/10.1182/blood.v122.21.4170.4170.
Tripathi, Malati, Ayushma Adhikari, and Bibhushan Neupane. "Misoprostol Versus Oxytocin for Induction of Labour at Term and Post Term Pregnancy of Primigravida." Journal of Universal College of Medical Sciences 6, no. 2 (December 3, 2018): 56–59. http://dx.doi.org/10.3126/jucms.v6i2.22497.
Olupot-Olupot, Peter, Florence Aloroker, Ayub Mpoya, Hellen Mnjalla, George Passi, Margaret Nakuya, Kirsty Houston, et al. "Gastroenteritis Rehydration Of children with Severe Acute Malnutrition (GASTROSAM): A Phase II Randomised Controlled trial: Trial Protocol." Wellcome Open Research 6 (June 23, 2021): 160. http://dx.doi.org/10.12688/wellcomeopenres.16885.1.
Jiang, Jianhua, Yangang Ren, Yang Guan, Shengbo Eben Li, Yuming Yin, Dongjie Yu, and Xiaoping Jin. "Integrated decision and control at multi-lane intersections with mixed traffic flow." Journal of Physics: Conference Series 2234, no. 1 (April 1, 2022): 012015. http://dx.doi.org/10.1088/1742-6596/2234/1/012015.