Journal articles on the topic "Factored reinforcement learning"
Create a source citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 45 journal articles for your research on the topic "Factored reinforcement learning."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and a citation of the selected work will be generated automatically in the required style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.
Browse journal articles across a wide range of disciplines and compile your bibliography correctly.
Wu, Bo, Yan Peng Feng, and Hong Yan Zheng. "A Model-Based Factored Bayesian Reinforcement Learning Approach". Applied Mechanics and Materials 513-517 (February 2014): 1092–95. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.1092.
Li, Chao, Yupeng Zhang, Jianqi Wang, Yujing Hu, Shaokang Dong, Wenbin Li, Tangjie Lv, Changjie Fan, and Yang Gao. "Optimistic Value Instructors for Cooperative Multi-Agent Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17453–60. http://dx.doi.org/10.1609/aaai.v38i16.29694.
Kveton, Branislav, and Georgios Theocharous. "Structured Kernel-Based Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 569–75. http://dx.doi.org/10.1609/aaai.v27i1.8669.
Simão, Thiago D., and Matthijs T. J. Spaan. "Safe Policy Improvement with Baseline Bootstrapping in Factored Environments". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4967–74. http://dx.doi.org/10.1609/aaai.v33i01.33014967.
Truong, Van Binh, and Long Bao Le. "Electric vehicle charging design: The factored action based reinforcement learning approach". Applied Energy 359 (April 2024): 122737. http://dx.doi.org/10.1016/j.apenergy.2024.122737.
Simm, Jaak, Masashi Sugiyama, and Hirotaka Hachiya. "Multi-Task Approach to Reinforcement Learning for Factored-State Markov Decision Problems". IEICE Transactions on Information and Systems E95.D, no. 10 (2012): 2426–37. http://dx.doi.org/10.1587/transinf.e95.d.2426.
Wang, Zizhao, Caroline Wang, Xuesu Xiao, Yuke Zhu, and Peter Stone. "Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15778–86. http://dx.doi.org/10.1609/aaai.v38i14.29507.
Mohamad Hafiz Abu Bakar, Abu Ubaidah bin Shamsudin, Ruzairi Abdul Rahim, Zubair Adil Soomro, and Andi Adrianshah. "Comparison Method Q-Learning and SARSA for Simulation of Drone Controller using Reinforcement Learning". Journal of Advanced Research in Applied Sciences and Engineering Technology 30, no. 3 (May 15, 2023): 69–78. http://dx.doi.org/10.37934/araset.30.3.6978.
Kong, Minseok, and Jungmin So. "Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning". Applied Sciences 13, no. 1 (January 3, 2023): 633. http://dx.doi.org/10.3390/app13010633.
Mutti, Mirco, Riccardo De Santi, Emanuele Rossi, Juan Felipe Calderon, Michael Bronstein, and Marcello Restelli. "Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9251–59. http://dx.doi.org/10.1609/aaai.v37i8.26109.
Sui, Dong, Chenyu Ma, and Chunjie Wei. "Tactical Conflict Solver Assisting Air Traffic Controllers Using Deep Reinforcement Learning". Aerospace 10, no. 2 (February 15, 2023): 182. http://dx.doi.org/10.3390/aerospace10020182.
Hao, Zheng, Haowei Zhang, and Yipu Zhang. "Stock Portfolio Management by Using Fuzzy Ensemble Deep Reinforcement Learning Algorithm". Journal of Risk and Financial Management 16, no. 3 (March 15, 2023): 201. http://dx.doi.org/10.3390/jrfm16030201.
Chu, Yunfei, Zhinong Wei, Guoqiang Sun, Haixiang Zang, Sheng Chen, and Yizhou Zhou. "Optimal home energy management strategy: A reinforcement learning method with actor-critic using Kronecker-factored trust region". Electric Power Systems Research 212 (November 2022): 108617. http://dx.doi.org/10.1016/j.epsr.2022.108617.
Abdulhai, Marwa, Dong-Ki Kim, Matthew Riemer, Miao Liu, Gerald Tesauro, and Jonathan P. How. "Context-Specific Representation Abstraction for Deep Option Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 5959–67. http://dx.doi.org/10.1609/aaai.v36i6.20541.
Li, Hengjie, Jianghao Zhu, Yun Zhou, Qi Feng, and Donghan Feng. "Charging Station Management Strategy for Returns Maximization via Improved TD3 Deep Reinforcement Learning". International Transactions on Electrical Energy Systems 2022 (December 15, 2022): 1–14. http://dx.doi.org/10.1155/2022/6854620.
Gavane, Vaibhav. "A Measure of Real-Time Intelligence". Journal of Artificial General Intelligence 4, no. 1 (March 1, 2013): 31–48. http://dx.doi.org/10.2478/jagi-2013-0003.
Yedukondalu, Gangolu, Yasmeen Yasmeen, G. Vinoda Reddy, Ravindra Changala, Mahesh Kotha, Adapa Gopi, and Annapurna Gummadi. "Framework for Virtualized Network Functions (VNFs) in Cloud of Things Based on Network Traffic Services". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 11s (October 7, 2023): 38–48. http://dx.doi.org/10.17762/ijritcc.v11i11s.8068.
Li, Guangliang, Randy Gomez, Keisuke Nakamura, and Bo He. "Human-Centered Reinforcement Learning: A Survey". IEEE Transactions on Human-Machine Systems 49, no. 4 (August 2019): 337–49. http://dx.doi.org/10.1109/thms.2019.2912447.
Li, Zhuoran, Chao Zeng, Zhen Deng, Qinling Xu, Bingwei He, and Jianwei Zhang. "Learning Variable Impedance Control for Robotic Massage With Deep Reinforcement Learning: A Novel Learning Framework". IEEE Systems, Man, and Cybernetics Magazine 10, no. 1 (January 2024): 17–27. http://dx.doi.org/10.1109/msmc.2022.3231416.
White, Jack, Tatiana Kameneva, and Chris McCarthy. "Vision Processing for Assistive Vision: A Deep Reinforcement Learning Approach". IEEE Transactions on Human-Machine Systems 52, no. 1 (February 2022): 123–33. http://dx.doi.org/10.1109/thms.2021.3121661.
Chihara, Takanori, and Jiro Sakamoto. "Generating deceleration behavior of automatic driving by reinforcement learning that reflects passenger discomfort". International Journal of Industrial Ergonomics 91 (September 2022): 103343. http://dx.doi.org/10.1016/j.ergon.2022.103343.
Wang, Zhe, Helai Huang, Jinjun Tang, Xianwei Meng, and Lipeng Hu. "Velocity control in car-following behavior with autonomous vehicles using reinforcement learning". Accident Analysis & Prevention 174 (September 2022): 106729. http://dx.doi.org/10.1016/j.aap.2022.106729.
Salehi, V., T. T. Tran, B. Veitch, and D. Smith. "A reinforcement learning development of the FRAM for functional reward-based assessments of complex systems performance". International Journal of Industrial Ergonomics 88 (March 2022): 103271. http://dx.doi.org/10.1016/j.ergon.2022.103271.
Matarese, Marco, Alessandra Sciutti, Francesco Rea, and Silvia Rossi. "Toward Robots’ Behavioral Transparency of Temporal Difference Reinforcement Learning With a Human Teacher". IEEE Transactions on Human-Machine Systems 51, no. 6 (December 2021): 578–89. http://dx.doi.org/10.1109/thms.2021.3116119.
Roy, Ananya, Moinul Hossain, and Yasunori Muromachi. "A deep reinforcement learning-based intelligent intervention framework for real-time proactive road safety management". Accident Analysis & Prevention 165 (February 2022): 106512. http://dx.doi.org/10.1016/j.aap.2021.106512.
Gong, Yaobang, Mohamed Abdel-Aty, Jinghui Yuan, and Qing Cai. "Multi-Objective reinforcement learning approach for improving safety at intersections with adaptive traffic signal control". Accident Analysis & Prevention 144 (September 2020): 105655. http://dx.doi.org/10.1016/j.aap.2020.105655.
Yang, Kui, Mohammed Quddus, and Constantinos Antoniou. "Developing a new real-time traffic safety management framework for urban expressways utilizing reinforcement learning tree". Accident Analysis & Prevention 178 (December 2022): 106848. http://dx.doi.org/10.1016/j.aap.2022.106848.
Qin, ShuJin, ZhiLiang Bi, Jiacun Wang, Shixin Liu, XiWang Guo, Ziyan Zhao, and Liang Qi. "Value-Based Reinforcement Learning for Selective Disassembly Sequence Optimization Problems: Demonstrating and Comparing a Proposed Model". IEEE Systems, Man, and Cybernetics Magazine 10, no. 2 (April 2024): 24–31. http://dx.doi.org/10.1109/msmc.2023.3303615.
Yan, Longhao, Ping Wang, Fan Qi, Zhuohang Xu, Ronghui Zhang, and Yu Han. "A task-level emergency experience reuse method for freeway accidents onsite disposal with policy distilled reinforcement learning". Accident Analysis & Prevention 190 (September 2023): 107179. http://dx.doi.org/10.1016/j.aap.2023.107179.
Nasernejad, Payam, Tarek Sayed, and Rushdi Alsaleh. "Modeling pedestrian behavior in pedestrian-vehicle near misses: A continuous Gaussian Process Inverse Reinforcement Learning (GP-IRL) approach". Accident Analysis & Prevention 161 (October 2021): 106355. http://dx.doi.org/10.1016/j.aap.2021.106355.
Guo, Hongyu, Kun Xie, and Mehdi Keyvan-Ekbatani. "Modeling driver’s evasive behavior during safety–critical lane changes: Two-dimensional time-to-collision and deep reinforcement learning". Accident Analysis & Prevention 186 (June 2023): 107063. http://dx.doi.org/10.1016/j.aap.2023.107063.
Jin, Jieling, Ye Li, Helai Huang, Yuxuan Dong, and Pan Liu. "A variable speed limit control approach for freeway tunnels based on the model-based reinforcement learning framework with safety perception". Accident Analysis & Prevention 201 (June 2024): 107570. http://dx.doi.org/10.1016/j.aap.2024.107570.
Vandaele, Mathilde, and Sanna Stålhammar. "‘Hope dies, action begins?’ The role of hope for proactive sustainability engagement among university students". International Journal of Sustainability in Higher Education 23, no. 8 (August 25, 2022): 272–89. http://dx.doi.org/10.1108/ijshe-11-2021-0463.
Zhang, Gongquan, Fangrong Chang, Jieling Jin, Fan Yang, and Helai Huang. "Multi-objective deep reinforcement learning approach for adaptive traffic signal control system with concurrent optimization of safety, efficiency, and decarbonization at intersections". Accident Analysis & Prevention 199 (May 2024): 107451. http://dx.doi.org/10.1016/j.aap.2023.107451.
Hoffmann, Patrick, Kirill Gorelik, and Valentin Ivanov. "Comparison of Reinforcement Learning and Model Predictive Control for Automated Generation of Optimal Control for Dynamic Systems within a Design Space Exploration Framework". International Journal of Automotive Engineering 15, no. 1 (2024): 19–26. http://dx.doi.org/10.20485/jsaeijae.15.1_19.
Wu, Bo, Yanpeng Feng, and Hongyan Zheng. "Model-based Bayesian Reinforcement Learning in Factored Markov Decision Process". Journal of Computers 9, no. 4 (April 1, 2014). http://dx.doi.org/10.4304/jcp.9.4.845-850.
Xu, Jianyu, Bin Liu, Xiujie Zhao, and Xiao-Lin Wang. "Online reinforcement learning for condition-based group maintenance using factored Markov decision processes". European Journal of Operational Research, November 2023. http://dx.doi.org/10.1016/j.ejor.2023.11.039.
Amato, Christopher, and Frans Oliehoek. "Scalable Planning and Learning for Multiagent POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 29, no. 1 (February 18, 2015). http://dx.doi.org/10.1609/aaai.v29i1.9439.
Street, Charlie, Masoumeh Mansouri, and Bruno Lacerda. "Formal Modelling for Multi-Robot Systems Under Uncertainty". Current Robotics Reports, August 15, 2023. http://dx.doi.org/10.1007/s43154-023-00104-0.
Xie, Ziyang, Lu Lu, Hanwen Wang, Bingyi Su, Yunan Liu, and Xu Xu. "Improving Workers’ Musculoskeletal Health During Human-Robot Collaboration Through Reinforcement Learning". Human Factors: The Journal of the Human Factors and Ergonomics Society, May 22, 2023, 001872082311775. http://dx.doi.org/10.1177/00187208231177574.
Rigoli, Lillian, Gaurav Patil, Patrick Nalepka, Rachel W. Kallen, Simon Hosking, Christopher Best, and Michael J. Richardson. "A Comparison of Dynamical Perceptual-Motor Primitives and Deep Reinforcement Learning for Human-Artificial Agent Training Systems". Journal of Cognitive Engineering and Decision Making, April 25, 2022, 155534342210929. http://dx.doi.org/10.1177/15553434221092930.
Fragkos, Georgios, Jay Johnson, and Eirini Eleni Tsiropoulou. "Dynamic Role-Based Access Control Policy for Smart Grid Applications: An Offline Deep Reinforcement Learning Approach". IEEE Transactions on Human-Machine Systems, 2022, 1–13. http://dx.doi.org/10.1109/thms.2022.3163185.
Sun, Yuxiang, Bo Yuan, Qi Xiang, Jiawei Zhou, Jiahui Yu, Di Dai, and Xianzhong Zhou. "Intelligent Decision-Making and Human Language Communication Based on Deep Reinforcement Learning in a Wargame Environment". IEEE Transactions on Human-Machine Systems, 2022, 1–14. http://dx.doi.org/10.1109/thms.2022.3225867.
Jokinen, Jussi P. P., Tuomo Kujala, and Antti Oulasvirta. "Multitasking in Driving as Optimal Adaptation Under Uncertainty". Human Factors: The Journal of the Human Factors and Ergonomics Society, July 30, 2020, 001872082092768. http://dx.doi.org/10.1177/0018720820927687.
Ferrão, Maria Eugénia, and Cristiano Fernandes. "O efeito-escola e a mudança - dá para mudar? Evidências da investigação Brasileira". REICE. Revista Iberoamericana sobre Calidad, Eficacia y Cambio en Educación 1, no. 1 (July 2, 2016). http://dx.doi.org/10.15366/reice2003.1.1.005.