Journal articles on the topic "Factored reinforcement learning"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 45 journal articles for your research on the topic "Factored reinforcement learning".
Next to each source in the reference list is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse journal articles across a wide variety of disciplines and organize your bibliography correctly.
Wu, Bo, Yan Peng Feng and Hong Yan Zheng. "A Model-Based Factored Bayesian Reinforcement Learning Approach". Applied Mechanics and Materials 513-517 (February 2014): 1092–95. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.1092.
Li, Chao, Yupeng Zhang, Jianqi Wang, Yujing Hu, Shaokang Dong, Wenbin Li, Tangjie Lv, Changjie Fan and Yang Gao. "Optimistic Value Instructors for Cooperative Multi-Agent Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17453–60. http://dx.doi.org/10.1609/aaai.v38i16.29694.
Kveton, Branislav and Georgios Theocharous. "Structured Kernel-Based Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 569–75. http://dx.doi.org/10.1609/aaai.v27i1.8669.
Simão, Thiago D. and Matthijs T. J. Spaan. "Safe Policy Improvement with Baseline Bootstrapping in Factored Environments". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4967–74. http://dx.doi.org/10.1609/aaai.v33i01.33014967.
Truong, Van Binh and Long Bao Le. "Electric vehicle charging design: The factored action based reinforcement learning approach". Applied Energy 359 (April 2024): 122737. http://dx.doi.org/10.1016/j.apenergy.2024.122737.
SIMM, Jaak, Masashi SUGIYAMA and Hirotaka HACHIYA. "Multi-Task Approach to Reinforcement Learning for Factored-State Markov Decision Problems". IEICE Transactions on Information and Systems E95.D, no. 10 (2012): 2426–37. http://dx.doi.org/10.1587/transinf.e95.d.2426.
Wang, Zizhao, Caroline Wang, Xuesu Xiao, Yuke Zhu and Peter Stone. "Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15778–86. http://dx.doi.org/10.1609/aaai.v38i14.29507.
Mohamad Hafiz Abu Bakar, Abu Ubaidah bin Shamsudin, Ruzairi Abdul Rahim, Zubair Adil Soomro and Andi Adrianshah. "Comparison Method Q-Learning and SARSA for Simulation of Drone Controller using Reinforcement Learning". Journal of Advanced Research in Applied Sciences and Engineering Technology 30, no. 3 (May 15, 2023): 69–78. http://dx.doi.org/10.37934/araset.30.3.6978.
Kong, Minseok and Jungmin So. "Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning". Applied Sciences 13, no. 1 (January 3, 2023): 633. http://dx.doi.org/10.3390/app13010633.
Mutti, Mirco, Riccardo De Santi, Emanuele Rossi, Juan Felipe Calderon, Michael Bronstein and Marcello Restelli. "Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9251–59. http://dx.doi.org/10.1609/aaai.v37i8.26109.
Sui, Dong, Chenyu Ma and Chunjie Wei. "Tactical Conflict Solver Assisting Air Traffic Controllers Using Deep Reinforcement Learning". Aerospace 10, no. 2 (February 15, 2023): 182. http://dx.doi.org/10.3390/aerospace10020182.
Hao, Zheng, Haowei Zhang and Yipu Zhang. "Stock Portfolio Management by Using Fuzzy Ensemble Deep Reinforcement Learning Algorithm". Journal of Risk and Financial Management 16, no. 3 (March 15, 2023): 201. http://dx.doi.org/10.3390/jrfm16030201.
Chu, Yunfei, Zhinong Wei, Guoqiang Sun, Haixiang Zang, Sheng Chen and Yizhou Zhou. "Optimal home energy management strategy: A reinforcement learning method with actor-critic using Kronecker-factored trust region". Electric Power Systems Research 212 (November 2022): 108617. http://dx.doi.org/10.1016/j.epsr.2022.108617.
Abdulhai, Marwa, Dong-Ki Kim, Matthew Riemer, Miao Liu, Gerald Tesauro and Jonathan P. How. "Context-Specific Representation Abstraction for Deep Option Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 5959–67. http://dx.doi.org/10.1609/aaai.v36i6.20541.
Li, Hengjie, Jianghao Zhu, Yun Zhou, Qi Feng and Donghan Feng. "Charging Station Management Strategy for Returns Maximization via Improved TD3 Deep Reinforcement Learning". International Transactions on Electrical Energy Systems 2022 (December 15, 2022): 1–14. http://dx.doi.org/10.1155/2022/6854620.
Gavane, Vaibhav. "A Measure of Real-Time Intelligence". Journal of Artificial General Intelligence 4, no. 1 (March 1, 2013): 31–48. http://dx.doi.org/10.2478/jagi-2013-0003.
Yedukondalu, Gangolu, Yasmeen Yasmeen, G. Vinoda Reddy, Ravindra Changala, Mahesh Kotha, Adapa Gopi and Annapurna Gummadi. "Framework for Virtualized Network Functions (VNFs) in Cloud of Things Based on Network Traffic Services". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 11s (October 7, 2023): 38–48. http://dx.doi.org/10.17762/ijritcc.v11i11s.8068.
Li, Guangliang, Randy Gomez, Keisuke Nakamura and Bo He. "Human-Centered Reinforcement Learning: A Survey". IEEE Transactions on Human-Machine Systems 49, no. 4 (August 2019): 337–49. http://dx.doi.org/10.1109/thms.2019.2912447.
Li, Zhuoran, Chao Zeng, Zhen Deng, Qinling Xu, Bingwei He and Jianwei Zhang. "Learning Variable Impedance Control for Robotic Massage With Deep Reinforcement Learning: A Novel Learning Framework". IEEE Systems, Man, and Cybernetics Magazine 10, no. 1 (January 2024): 17–27. http://dx.doi.org/10.1109/msmc.2022.3231416.
White, Jack, Tatiana Kameneva and Chris McCarthy. "Vision Processing for Assistive Vision: A Deep Reinforcement Learning Approach". IEEE Transactions on Human-Machine Systems 52, no. 1 (February 2022): 123–33. http://dx.doi.org/10.1109/thms.2021.3121661.
Chihara, Takanori and Jiro Sakamoto. "Generating deceleration behavior of automatic driving by reinforcement learning that reflects passenger discomfort". International Journal of Industrial Ergonomics 91 (September 2022): 103343. http://dx.doi.org/10.1016/j.ergon.2022.103343.
Wang, Zhe, Helai Huang, Jinjun Tang, Xianwei Meng and Lipeng Hu. "Velocity control in car-following behavior with autonomous vehicles using reinforcement learning". Accident Analysis & Prevention 174 (September 2022): 106729. http://dx.doi.org/10.1016/j.aap.2022.106729.
Salehi, V., T. T. Tran, B. Veitch and D. Smith. "A reinforcement learning development of the FRAM for functional reward-based assessments of complex systems performance". International Journal of Industrial Ergonomics 88 (March 2022): 103271. http://dx.doi.org/10.1016/j.ergon.2022.103271.
Matarese, Marco, Alessandra Sciutti, Francesco Rea and Silvia Rossi. "Toward Robots' Behavioral Transparency of Temporal Difference Reinforcement Learning With a Human Teacher". IEEE Transactions on Human-Machine Systems 51, no. 6 (December 2021): 578–89. http://dx.doi.org/10.1109/thms.2021.3116119.
Roy, Ananya, Moinul Hossain and Yasunori Muromachi. "A deep reinforcement learning-based intelligent intervention framework for real-time proactive road safety management". Accident Analysis & Prevention 165 (February 2022): 106512. http://dx.doi.org/10.1016/j.aap.2021.106512.
Gong, Yaobang, Mohamed Abdel-Aty, Jinghui Yuan and Qing Cai. "Multi-Objective reinforcement learning approach for improving safety at intersections with adaptive traffic signal control". Accident Analysis & Prevention 144 (September 2020): 105655. http://dx.doi.org/10.1016/j.aap.2020.105655.
Yang, Kui, Mohammed Quddus and Constantinos Antoniou. "Developing a new real-time traffic safety management framework for urban expressways utilizing reinforcement learning tree". Accident Analysis & Prevention 178 (December 2022): 106848. http://dx.doi.org/10.1016/j.aap.2022.106848.
Qin, ShuJin, ZhiLiang Bi, Jiacun Wang, Shixin Liu, XiWang Guo, Ziyan Zhao and Liang Qi. "Value-Based Reinforcement Learning for Selective Disassembly Sequence Optimization Problems: Demonstrating and Comparing a Proposed Model". IEEE Systems, Man, and Cybernetics Magazine 10, no. 2 (April 2024): 24–31. http://dx.doi.org/10.1109/msmc.2023.3303615.
Yan, Longhao, Ping Wang, Fan Qi, Zhuohang Xu, Ronghui Zhang and Yu Han. "A task-level emergency experience reuse method for freeway accidents onsite disposal with policy distilled reinforcement learning". Accident Analysis & Prevention 190 (September 2023): 107179. http://dx.doi.org/10.1016/j.aap.2023.107179.
Nasernejad, Payam, Tarek Sayed and Rushdi Alsaleh. "Modeling pedestrian behavior in pedestrian-vehicle near misses: A continuous Gaussian Process Inverse Reinforcement Learning (GP-IRL) approach". Accident Analysis & Prevention 161 (October 2021): 106355. http://dx.doi.org/10.1016/j.aap.2021.106355.
Guo, Hongyu, Kun Xie and Mehdi Keyvan-Ekbatani. "Modeling driver's evasive behavior during safety-critical lane changes: Two-dimensional time-to-collision and deep reinforcement learning". Accident Analysis & Prevention 186 (June 2023): 107063. http://dx.doi.org/10.1016/j.aap.2023.107063.
Jin, Jieling, Ye Li, Helai Huang, Yuxuan Dong and Pan Liu. "A variable speed limit control approach for freeway tunnels based on the model-based reinforcement learning framework with safety perception". Accident Analysis & Prevention 201 (June 2024): 107570. http://dx.doi.org/10.1016/j.aap.2024.107570.
Vandaele, Mathilde and Sanna Stålhammar. "“Hope dies, action begins?” The role of hope for proactive sustainability engagement among university students". International Journal of Sustainability in Higher Education 23, no. 8 (August 25, 2022): 272–89. http://dx.doi.org/10.1108/ijshe-11-2021-0463.
Zhang, Gongquan, Fangrong Chang, Jieling Jin, Fan Yang and Helai Huang. "Multi-objective deep reinforcement learning approach for adaptive traffic signal control system with concurrent optimization of safety, efficiency, and decarbonization at intersections". Accident Analysis & Prevention 199 (May 2024): 107451. http://dx.doi.org/10.1016/j.aap.2023.107451.
Hoffmann, Patrick, Kirill Gorelik and Valentin Ivanov. "Comparison of Reinforcement Learning and Model Predictive Control for Automated Generation of Optimal Control for Dynamic Systems within a Design Space Exploration Framework". International Journal of Automotive Engineering 15, no. 1 (2024): 19–26. http://dx.doi.org/10.20485/jsaeijae.15.1_19.
Wu, Bo, Yanpeng Feng and Hongyan Zheng. "Model-based Bayesian Reinforcement Learning in Factored Markov Decision Process". Journal of Computers 9, no. 4 (April 1, 2014). http://dx.doi.org/10.4304/jcp.9.4.845-850.
Xu, Jianyu, Bin Liu, Xiujie Zhao and Xiao-Lin Wang. "Online reinforcement learning for condition-based group maintenance using factored Markov decision processes". European Journal of Operational Research, November 2023. http://dx.doi.org/10.1016/j.ejor.2023.11.039.
Amato, Christopher and Frans Oliehoek. "Scalable Planning and Learning for Multiagent POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 29, no. 1 (February 18, 2015). http://dx.doi.org/10.1609/aaai.v29i1.9439.
Street, Charlie, Masoumeh Mansouri and Bruno Lacerda. "Formal Modelling for Multi-Robot Systems Under Uncertainty". Current Robotics Reports, August 15, 2023. http://dx.doi.org/10.1007/s43154-023-00104-0.
Xie, Ziyang, Lu Lu, Hanwen Wang, Bingyi Su, Yunan Liu and Xu Xu. "Improving Workers' Musculoskeletal Health During Human-Robot Collaboration Through Reinforcement Learning". Human Factors: The Journal of the Human Factors and Ergonomics Society, May 22, 2023, 001872082311775. http://dx.doi.org/10.1177/00187208231177574.
Rigoli, Lillian, Gaurav Patil, Patrick Nalepka, Rachel W. Kallen, Simon Hosking, Christopher Best and Michael J. Richardson. "A Comparison of Dynamical Perceptual-Motor Primitives and Deep Reinforcement Learning for Human-Artificial Agent Training Systems". Journal of Cognitive Engineering and Decision Making, April 25, 2022, 155534342210929. http://dx.doi.org/10.1177/15553434221092930.
Fragkos, Georgios, Jay Johnson and Eirini Eleni Tsiropoulou. "Dynamic Role-Based Access Control Policy for Smart Grid Applications: An Offline Deep Reinforcement Learning Approach". IEEE Transactions on Human-Machine Systems, 2022, 1–13. http://dx.doi.org/10.1109/thms.2022.3163185.
Sun, Yuxiang, Bo Yuan, Qi Xiang, Jiawei Zhou, Jiahui Yu, Di Dai and Xianzhong Zhou. "Intelligent Decision-Making and Human Language Communication Based on Deep Reinforcement Learning in a Wargame Environment". IEEE Transactions on Human-Machine Systems, 2022, 1–14. http://dx.doi.org/10.1109/thms.2022.3225867.
Jokinen, Jussi P. P., Tuomo Kujala and Antti Oulasvirta. "Multitasking in Driving as Optimal Adaptation Under Uncertainty". Human Factors: The Journal of the Human Factors and Ergonomics Society, July 30, 2020, 001872082092768. http://dx.doi.org/10.1177/0018720820927687.
Ferrão, Maria Eugénia and Cristiano Fernandes. "O efeito-escola e a mudança - dá para mudar? Evidências da investigação Brasileira". REICE. Revista Iberoamericana sobre Calidad, Eficacia y Cambio en Educación 1, no. 1 (July 2, 2016). http://dx.doi.org/10.15366/reice2003.1.1.005.