Journal articles on the topic "Factored reinforcement learning"
Format your sources in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 45 journal articles for your research on the topic "Factored reinforcement learning."
Next to every entry in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation for the chosen work in the style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever these are available in the item's metadata.
Browse journal articles across a wide range of disciplines and organize your bibliography correctly.
Wu, Bo, Yan Peng Feng, and Hong Yan Zheng. "A Model-Based Factored Bayesian Reinforcement Learning Approach." Applied Mechanics and Materials 513-517 (February 2014): 1092–95. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.1092.
Li, Chao, Yupeng Zhang, Jianqi Wang, Yujing Hu, Shaokang Dong, Wenbin Li, Tangjie Lv, Changjie Fan, and Yang Gao. "Optimistic Value Instructors for Cooperative Multi-Agent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17453–60. http://dx.doi.org/10.1609/aaai.v38i16.29694.
Kveton, Branislav, and Georgios Theocharous. "Structured Kernel-Based Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 569–75. http://dx.doi.org/10.1609/aaai.v27i1.8669.
Simão, Thiago D., and Matthijs T. J. Spaan. "Safe Policy Improvement with Baseline Bootstrapping in Factored Environments." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4967–74. http://dx.doi.org/10.1609/aaai.v33i01.33014967.
Truong, Van Binh, and Long Bao Le. "Electric vehicle charging design: The factored action based reinforcement learning approach." Applied Energy 359 (April 2024): 122737. http://dx.doi.org/10.1016/j.apenergy.2024.122737.
Simm, Jaak, Masashi Sugiyama, and Hirotaka Hachiya. "Multi-Task Approach to Reinforcement Learning for Factored-State Markov Decision Problems." IEICE Transactions on Information and Systems E95.D, no. 10 (2012): 2426–37. http://dx.doi.org/10.1587/transinf.e95.d.2426.
Wang, Zizhao, Caroline Wang, Xuesu Xiao, Yuke Zhu, and Peter Stone. "Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15778–86. http://dx.doi.org/10.1609/aaai.v38i14.29507.
Mohamad Hafiz Abu Bakar, Abu Ubaidah bin Shamsudin, Ruzairi Abdul Rahim, Zubair Adil Soomro, and Andi Adrianshah. "Comparison Method Q-Learning and SARSA for Simulation of Drone Controller using Reinforcement Learning." Journal of Advanced Research in Applied Sciences and Engineering Technology 30, no. 3 (May 15, 2023): 69–78. http://dx.doi.org/10.37934/araset.30.3.6978.
Kong, Minseok, and Jungmin So. "Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning." Applied Sciences 13, no. 1 (January 3, 2023): 633. http://dx.doi.org/10.3390/app13010633.
Mutti, Mirco, Riccardo De Santi, Emanuele Rossi, Juan Felipe Calderon, Michael Bronstein, and Marcello Restelli. "Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9251–59. http://dx.doi.org/10.1609/aaai.v37i8.26109.
Sui, Dong, Chenyu Ma, and Chunjie Wei. "Tactical Conflict Solver Assisting Air Traffic Controllers Using Deep Reinforcement Learning." Aerospace 10, no. 2 (February 15, 2023): 182. http://dx.doi.org/10.3390/aerospace10020182.
Hao, Zheng, Haowei Zhang, and Yipu Zhang. "Stock Portfolio Management by Using Fuzzy Ensemble Deep Reinforcement Learning Algorithm." Journal of Risk and Financial Management 16, no. 3 (March 15, 2023): 201. http://dx.doi.org/10.3390/jrfm16030201.
Chu, Yunfei, Zhinong Wei, Guoqiang Sun, Haixiang Zang, Sheng Chen, and Yizhou Zhou. "Optimal home energy management strategy: A reinforcement learning method with actor-critic using Kronecker-factored trust region." Electric Power Systems Research 212 (November 2022): 108617. http://dx.doi.org/10.1016/j.epsr.2022.108617.
Abdulhai, Marwa, Dong-Ki Kim, Matthew Riemer, Miao Liu, Gerald Tesauro, and Jonathan P. How. "Context-Specific Representation Abstraction for Deep Option Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 5959–67. http://dx.doi.org/10.1609/aaai.v36i6.20541.
Li, Hengjie, Jianghao Zhu, Yun Zhou, Qi Feng, and Donghan Feng. "Charging Station Management Strategy for Returns Maximization via Improved TD3 Deep Reinforcement Learning." International Transactions on Electrical Energy Systems 2022 (December 15, 2022): 1–14. http://dx.doi.org/10.1155/2022/6854620.
Gavane, Vaibhav. "A Measure of Real-Time Intelligence." Journal of Artificial General Intelligence 4, no. 1 (March 1, 2013): 31–48. http://dx.doi.org/10.2478/jagi-2013-0003.
Yedukondalu, Gangolu, Yasmeen Yasmeen, G. Vinoda Reddy, Ravindra Changala, Mahesh Kotha, Adapa Gopi, and Annapurna Gummadi. "Framework for Virtualized Network Functions (VNFs) in Cloud of Things Based on Network Traffic Services." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 11s (October 7, 2023): 38–48. http://dx.doi.org/10.17762/ijritcc.v11i11s.8068.
Li, Guangliang, Randy Gomez, Keisuke Nakamura, and Bo He. "Human-Centered Reinforcement Learning: A Survey." IEEE Transactions on Human-Machine Systems 49, no. 4 (August 2019): 337–49. http://dx.doi.org/10.1109/thms.2019.2912447.
Li, Zhuoran, Chao Zeng, Zhen Deng, Qinling Xu, Bingwei He, and Jianwei Zhang. "Learning Variable Impedance Control for Robotic Massage With Deep Reinforcement Learning: A Novel Learning Framework." IEEE Systems, Man, and Cybernetics Magazine 10, no. 1 (January 2024): 17–27. http://dx.doi.org/10.1109/msmc.2022.3231416.
White, Jack, Tatiana Kameneva, and Chris McCarthy. "Vision Processing for Assistive Vision: A Deep Reinforcement Learning Approach." IEEE Transactions on Human-Machine Systems 52, no. 1 (February 2022): 123–33. http://dx.doi.org/10.1109/thms.2021.3121661.
Chihara, Takanori, and Jiro Sakamoto. "Generating deceleration behavior of automatic driving by reinforcement learning that reflects passenger discomfort." International Journal of Industrial Ergonomics 91 (September 2022): 103343. http://dx.doi.org/10.1016/j.ergon.2022.103343.
Wang, Zhe, Helai Huang, Jinjun Tang, Xianwei Meng, and Lipeng Hu. "Velocity control in car-following behavior with autonomous vehicles using reinforcement learning." Accident Analysis & Prevention 174 (September 2022): 106729. http://dx.doi.org/10.1016/j.aap.2022.106729.
Salehi, V., T. T. Tran, B. Veitch, and D. Smith. "A reinforcement learning development of the FRAM for functional reward-based assessments of complex systems performance." International Journal of Industrial Ergonomics 88 (March 2022): 103271. http://dx.doi.org/10.1016/j.ergon.2022.103271.
Matarese, Marco, Alessandra Sciutti, Francesco Rea, and Silvia Rossi. "Toward Robots' Behavioral Transparency of Temporal Difference Reinforcement Learning With a Human Teacher." IEEE Transactions on Human-Machine Systems 51, no. 6 (December 2021): 578–89. http://dx.doi.org/10.1109/thms.2021.3116119.
Roy, Ananya, Moinul Hossain, and Yasunori Muromachi. "A deep reinforcement learning-based intelligent intervention framework for real-time proactive road safety management." Accident Analysis & Prevention 165 (February 2022): 106512. http://dx.doi.org/10.1016/j.aap.2021.106512.
Gong, Yaobang, Mohamed Abdel-Aty, Jinghui Yuan, and Qing Cai. "Multi-Objective reinforcement learning approach for improving safety at intersections with adaptive traffic signal control." Accident Analysis & Prevention 144 (September 2020): 105655. http://dx.doi.org/10.1016/j.aap.2020.105655.
Yang, Kui, Mohammed Quddus, and Constantinos Antoniou. "Developing a new real-time traffic safety management framework for urban expressways utilizing reinforcement learning tree." Accident Analysis & Prevention 178 (December 2022): 106848. http://dx.doi.org/10.1016/j.aap.2022.106848.
Qin, ShuJin, ZhiLiang Bi, Jiacun Wang, Shixin Liu, XiWang Guo, Ziyan Zhao, and Liang Qi. "Value-Based Reinforcement Learning for Selective Disassembly Sequence Optimization Problems: Demonstrating and Comparing a Proposed Model." IEEE Systems, Man, and Cybernetics Magazine 10, no. 2 (April 2024): 24–31. http://dx.doi.org/10.1109/msmc.2023.3303615.
Yan, Longhao, Ping Wang, Fan Qi, Zhuohang Xu, Ronghui Zhang, and Yu Han. "A task-level emergency experience reuse method for freeway accidents onsite disposal with policy distilled reinforcement learning." Accident Analysis & Prevention 190 (September 2023): 107179. http://dx.doi.org/10.1016/j.aap.2023.107179.
Nasernejad, Payam, Tarek Sayed, and Rushdi Alsaleh. "Modeling pedestrian behavior in pedestrian-vehicle near misses: A continuous Gaussian Process Inverse Reinforcement Learning (GP-IRL) approach." Accident Analysis & Prevention 161 (October 2021): 106355. http://dx.doi.org/10.1016/j.aap.2021.106355.
Guo, Hongyu, Kun Xie, and Mehdi Keyvan-Ekbatani. "Modeling driver's evasive behavior during safety–critical lane changes: Two-dimensional time-to-collision and deep reinforcement learning." Accident Analysis & Prevention 186 (June 2023): 107063. http://dx.doi.org/10.1016/j.aap.2023.107063.
Jin, Jieling, Ye Li, Helai Huang, Yuxuan Dong, and Pan Liu. "A variable speed limit control approach for freeway tunnels based on the model-based reinforcement learning framework with safety perception." Accident Analysis & Prevention 201 (June 2024): 107570. http://dx.doi.org/10.1016/j.aap.2024.107570.
Vandaele, Mathilde, and Sanna Stålhammar. "'Hope dies, action begins?' The role of hope for proactive sustainability engagement among university students." International Journal of Sustainability in Higher Education 23, no. 8 (August 25, 2022): 272–89. http://dx.doi.org/10.1108/ijshe-11-2021-0463.
Zhang, Gongquan, Fangrong Chang, Jieling Jin, Fan Yang, and Helai Huang. "Multi-objective deep reinforcement learning approach for adaptive traffic signal control system with concurrent optimization of safety, efficiency, and decarbonization at intersections." Accident Analysis & Prevention 199 (May 2024): 107451. http://dx.doi.org/10.1016/j.aap.2023.107451.
Hoffmann, Patrick, Kirill Gorelik, and Valentin Ivanov. "Comparison of Reinforcement Learning and Model Predictive Control for Automated Generation of Optimal Control for Dynamic Systems within a Design Space Exploration Framework." International Journal of Automotive Engineering 15, no. 1 (2024): 19–26. http://dx.doi.org/10.20485/jsaeijae.15.1_19.
Wu, Bo, Yanpeng Feng, and Hongyan Zheng. "Model-based Bayesian Reinforcement Learning in Factored Markov Decision Process." Journal of Computers 9, no. 4 (April 1, 2014). http://dx.doi.org/10.4304/jcp.9.4.845-850.
Xu, Jianyu, Bin Liu, Xiujie Zhao, and Xiao-Lin Wang. "Online reinforcement learning for condition-based group maintenance using factored Markov decision processes." European Journal of Operational Research, November 2023. http://dx.doi.org/10.1016/j.ejor.2023.11.039.
Amato, Christopher, and Frans Oliehoek. "Scalable Planning and Learning for Multiagent POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 29, no. 1 (February 18, 2015). http://dx.doi.org/10.1609/aaai.v29i1.9439.
Street, Charlie, Masoumeh Mansouri, and Bruno Lacerda. "Formal Modelling for Multi-Robot Systems Under Uncertainty." Current Robotics Reports, August 15, 2023. http://dx.doi.org/10.1007/s43154-023-00104-0.
Xie, Ziyang, Lu Lu, Hanwen Wang, Bingyi Su, Yunan Liu, and Xu Xu. "Improving Workers' Musculoskeletal Health During Human-Robot Collaboration Through Reinforcement Learning." Human Factors: The Journal of the Human Factors and Ergonomics Society, May 22, 2023, 001872082311775. http://dx.doi.org/10.1177/00187208231177574.
Rigoli, Lillian, Gaurav Patil, Patrick Nalepka, Rachel W. Kallen, Simon Hosking, Christopher Best, and Michael J. Richardson. "A Comparison of Dynamical Perceptual-Motor Primitives and Deep Reinforcement Learning for Human-Artificial Agent Training Systems." Journal of Cognitive Engineering and Decision Making, April 25, 2022, 155534342210929. http://dx.doi.org/10.1177/15553434221092930.
Fragkos, Georgios, Jay Johnson, and Eirini Eleni Tsiropoulou. "Dynamic Role-Based Access Control Policy for Smart Grid Applications: An Offline Deep Reinforcement Learning Approach." IEEE Transactions on Human-Machine Systems, 2022, 1–13. http://dx.doi.org/10.1109/thms.2022.3163185.
Sun, Yuxiang, Bo Yuan, Qi Xiang, Jiawei Zhou, Jiahui Yu, Di Dai, and Xianzhong Zhou. "Intelligent Decision-Making and Human Language Communication Based on Deep Reinforcement Learning in a Wargame Environment." IEEE Transactions on Human-Machine Systems, 2022, 1–14. http://dx.doi.org/10.1109/thms.2022.3225867.
Jokinen, Jussi P. P., Tuomo Kujala, and Antti Oulasvirta. "Multitasking in Driving as Optimal Adaptation Under Uncertainty." Human Factors: The Journal of the Human Factors and Ergonomics Society, July 30, 2020, 001872082092768. http://dx.doi.org/10.1177/0018720820927687.
Ferrão, Maria Eugénia, and Cristiano Fernandes. "O efeito-escola e a mudança - dá para mudar? Evidências da investigação Brasileira." REICE. Revista Iberoamericana sobre Calidad, Eficacia y Cambio en Educación 1, no. 1 (July 2, 2016). http://dx.doi.org/10.15366/reice2003.1.1.005.