Journal articles on the topic "Factored reinforcement learning"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the 45 best journal articles for your research on the topic "Factored reinforcement learning".
Next to every source in the list of references, there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf and read its abstract online whenever it is available in the metadata.
Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.
Wu, Bo, Yan Peng Feng, and Hong Yan Zheng. "A Model-Based Factored Bayesian Reinforcement Learning Approach". Applied Mechanics and Materials 513-517 (February 2014): 1092–95. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.1092.
Li, Chao, Yupeng Zhang, Jianqi Wang, Yujing Hu, Shaokang Dong, Wenbin Li, Tangjie Lv, Changjie Fan, and Yang Gao. "Optimistic Value Instructors for Cooperative Multi-Agent Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17453–60. http://dx.doi.org/10.1609/aaai.v38i16.29694.
Kveton, Branislav, and Georgios Theocharous. "Structured Kernel-Based Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 569–75. http://dx.doi.org/10.1609/aaai.v27i1.8669.
Simão, Thiago D., and Matthijs T. J. Spaan. "Safe Policy Improvement with Baseline Bootstrapping in Factored Environments". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4967–74. http://dx.doi.org/10.1609/aaai.v33i01.33014967.
Truong, Van Binh, and Long Bao Le. "Electric vehicle charging design: The factored action based reinforcement learning approach". Applied Energy 359 (April 2024): 122737. http://dx.doi.org/10.1016/j.apenergy.2024.122737.
Simm, Jaak, Masashi Sugiyama, and Hirotaka Hachiya. "Multi-Task Approach to Reinforcement Learning for Factored-State Markov Decision Problems". IEICE Transactions on Information and Systems E95.D, no. 10 (2012): 2426–37. http://dx.doi.org/10.1587/transinf.e95.d.2426.
Wang, Zizhao, Caroline Wang, Xuesu Xiao, Yuke Zhu, and Peter Stone. "Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15778–86. http://dx.doi.org/10.1609/aaai.v38i14.29507.
Mohamad Hafiz Abu Bakar, Abu Ubaidah bin Shamsudin, Ruzairi Abdul Rahim, Zubair Adil Soomro, and Andi Adrianshah. "Comparison Method Q-Learning and SARSA for Simulation of Drone Controller using Reinforcement Learning". Journal of Advanced Research in Applied Sciences and Engineering Technology 30, no. 3 (May 15, 2023): 69–78. http://dx.doi.org/10.37934/araset.30.3.6978.
Kong, Minseok, and Jungmin So. "Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning". Applied Sciences 13, no. 1 (January 3, 2023): 633. http://dx.doi.org/10.3390/app13010633.
Mutti, Mirco, Riccardo De Santi, Emanuele Rossi, Juan Felipe Calderon, Michael Bronstein, and Marcello Restelli. "Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9251–59. http://dx.doi.org/10.1609/aaai.v37i8.26109.
Sui, Dong, Chenyu Ma, and Chunjie Wei. "Tactical Conflict Solver Assisting Air Traffic Controllers Using Deep Reinforcement Learning". Aerospace 10, no. 2 (February 15, 2023): 182. http://dx.doi.org/10.3390/aerospace10020182.
Hao, Zheng, Haowei Zhang, and Yipu Zhang. "Stock Portfolio Management by Using Fuzzy Ensemble Deep Reinforcement Learning Algorithm". Journal of Risk and Financial Management 16, no. 3 (March 15, 2023): 201. http://dx.doi.org/10.3390/jrfm16030201.
Chu, Yunfei, Zhinong Wei, Guoqiang Sun, Haixiang Zang, Sheng Chen, and Yizhou Zhou. "Optimal home energy management strategy: A reinforcement learning method with actor-critic using Kronecker-factored trust region". Electric Power Systems Research 212 (November 2022): 108617. http://dx.doi.org/10.1016/j.epsr.2022.108617.
Abdulhai, Marwa, Dong-Ki Kim, Matthew Riemer, Miao Liu, Gerald Tesauro, and Jonathan P. How. "Context-Specific Representation Abstraction for Deep Option Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 5959–67. http://dx.doi.org/10.1609/aaai.v36i6.20541.
Li, Hengjie, Jianghao Zhu, Yun Zhou, Qi Feng, and Donghan Feng. "Charging Station Management Strategy for Returns Maximization via Improved TD3 Deep Reinforcement Learning". International Transactions on Electrical Energy Systems 2022 (December 15, 2022): 1–14. http://dx.doi.org/10.1155/2022/6854620.
Gavane, Vaibhav. "A Measure of Real-Time Intelligence". Journal of Artificial General Intelligence 4, no. 1 (March 1, 2013): 31–48. http://dx.doi.org/10.2478/jagi-2013-0003.
Yedukondalu, Gangolu, Yasmeen Yasmeen, G. Vinoda Reddy, Ravindra Changala, Mahesh Kotha, Adapa Gopi, and Annapurna Gummadi. "Framework for Virtualized Network Functions (VNFs) in Cloud of Things Based on Network Traffic Services". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 11s (October 7, 2023): 38–48. http://dx.doi.org/10.17762/ijritcc.v11i11s.8068.
Li, Guangliang, Randy Gomez, Keisuke Nakamura, and Bo He. "Human-Centered Reinforcement Learning: A Survey". IEEE Transactions on Human-Machine Systems 49, no. 4 (August 2019): 337–49. http://dx.doi.org/10.1109/thms.2019.2912447.
Li, Zhuoran, Chao Zeng, Zhen Deng, Qinling Xu, Bingwei He, and Jianwei Zhang. "Learning Variable Impedance Control for Robotic Massage With Deep Reinforcement Learning: A Novel Learning Framework". IEEE Systems, Man, and Cybernetics Magazine 10, no. 1 (January 2024): 17–27. http://dx.doi.org/10.1109/msmc.2022.3231416.
White, Jack, Tatiana Kameneva, and Chris McCarthy. "Vision Processing for Assistive Vision: A Deep Reinforcement Learning Approach". IEEE Transactions on Human-Machine Systems 52, no. 1 (February 2022): 123–33. http://dx.doi.org/10.1109/thms.2021.3121661.
Chihara, Takanori, and Jiro Sakamoto. "Generating deceleration behavior of automatic driving by reinforcement learning that reflects passenger discomfort". International Journal of Industrial Ergonomics 91 (September 2022): 103343. http://dx.doi.org/10.1016/j.ergon.2022.103343.
Wang, Zhe, Helai Huang, Jinjun Tang, Xianwei Meng, and Lipeng Hu. "Velocity control in car-following behavior with autonomous vehicles using reinforcement learning". Accident Analysis & Prevention 174 (September 2022): 106729. http://dx.doi.org/10.1016/j.aap.2022.106729.
Salehi, V., T. T. Tran, B. Veitch, and D. Smith. "A reinforcement learning development of the FRAM for functional reward-based assessments of complex systems performance". International Journal of Industrial Ergonomics 88 (March 2022): 103271. http://dx.doi.org/10.1016/j.ergon.2022.103271.
Matarese, Marco, Alessandra Sciutti, Francesco Rea, and Silvia Rossi. "Toward Robots’ Behavioral Transparency of Temporal Difference Reinforcement Learning With a Human Teacher". IEEE Transactions on Human-Machine Systems 51, no. 6 (December 2021): 578–89. http://dx.doi.org/10.1109/thms.2021.3116119.
Roy, Ananya, Moinul Hossain, and Yasunori Muromachi. "A deep reinforcement learning-based intelligent intervention framework for real-time proactive road safety management". Accident Analysis & Prevention 165 (February 2022): 106512. http://dx.doi.org/10.1016/j.aap.2021.106512.
Gong, Yaobang, Mohamed Abdel-Aty, Jinghui Yuan, and Qing Cai. "Multi-Objective reinforcement learning approach for improving safety at intersections with adaptive traffic signal control". Accident Analysis & Prevention 144 (September 2020): 105655. http://dx.doi.org/10.1016/j.aap.2020.105655.
Yang, Kui, Mohammed Quddus, and Constantinos Antoniou. "Developing a new real-time traffic safety management framework for urban expressways utilizing reinforcement learning tree". Accident Analysis & Prevention 178 (December 2022): 106848. http://dx.doi.org/10.1016/j.aap.2022.106848.
Qin, ShuJin, ZhiLiang Bi, Jiacun Wang, Shixin Liu, XiWang Guo, Ziyan Zhao, and Liang Qi. "Value-Based Reinforcement Learning for Selective Disassembly Sequence Optimization Problems: Demonstrating and Comparing a Proposed Model". IEEE Systems, Man, and Cybernetics Magazine 10, no. 2 (April 2024): 24–31. http://dx.doi.org/10.1109/msmc.2023.3303615.
Yan, Longhao, Ping Wang, Fan Qi, Zhuohang Xu, Ronghui Zhang, and Yu Han. "A task-level emergency experience reuse method for freeway accidents onsite disposal with policy distilled reinforcement learning". Accident Analysis & Prevention 190 (September 2023): 107179. http://dx.doi.org/10.1016/j.aap.2023.107179.
Nasernejad, Payam, Tarek Sayed, and Rushdi Alsaleh. "Modeling pedestrian behavior in pedestrian-vehicle near misses: A continuous Gaussian Process Inverse Reinforcement Learning (GP-IRL) approach". Accident Analysis & Prevention 161 (October 2021): 106355. http://dx.doi.org/10.1016/j.aap.2021.106355.
Guo, Hongyu, Kun Xie, and Mehdi Keyvan-Ekbatani. "Modeling driver’s evasive behavior during safety–critical lane changes: Two-dimensional time-to-collision and deep reinforcement learning". Accident Analysis & Prevention 186 (June 2023): 107063. http://dx.doi.org/10.1016/j.aap.2023.107063.
Jin, Jieling, Ye Li, Helai Huang, Yuxuan Dong, and Pan Liu. "A variable speed limit control approach for freeway tunnels based on the model-based reinforcement learning framework with safety perception". Accident Analysis & Prevention 201 (June 2024): 107570. http://dx.doi.org/10.1016/j.aap.2024.107570.
Vandaele, Mathilde, and Sanna Stålhammar. "“Hope dies, action begins?” The role of hope for proactive sustainability engagement among university students". International Journal of Sustainability in Higher Education 23, no. 8 (August 25, 2022): 272–89. http://dx.doi.org/10.1108/ijshe-11-2021-0463.
Zhang, Gongquan, Fangrong Chang, Jieling Jin, Fan Yang, and Helai Huang. "Multi-objective deep reinforcement learning approach for adaptive traffic signal control system with concurrent optimization of safety, efficiency, and decarbonization at intersections". Accident Analysis & Prevention 199 (May 2024): 107451. http://dx.doi.org/10.1016/j.aap.2023.107451.
Hoffmann, Patrick, Kirill Gorelik, and Valentin Ivanov. "Comparison of Reinforcement Learning and Model Predictive Control for Automated Generation of Optimal Control for Dynamic Systems within a Design Space Exploration Framework". International Journal of Automotive Engineering 15, no. 1 (2024): 19–26. http://dx.doi.org/10.20485/jsaeijae.15.1_19.
Wu, Bo, Yanpeng Feng, and Hongyan Zheng. "Model-based Bayesian Reinforcement Learning in Factored Markov Decision Process". Journal of Computers 9, no. 4 (April 1, 2014). http://dx.doi.org/10.4304/jcp.9.4.845-850.
Xu, Jianyu, Bin Liu, Xiujie Zhao, and Xiao-Lin Wang. "Online reinforcement learning for condition-based group maintenance using factored Markov decision processes". European Journal of Operational Research, November 2023. http://dx.doi.org/10.1016/j.ejor.2023.11.039.
Amato, Christopher, and Frans Oliehoek. "Scalable Planning and Learning for Multiagent POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 29, no. 1 (February 18, 2015). http://dx.doi.org/10.1609/aaai.v29i1.9439.
Street, Charlie, Masoumeh Mansouri, and Bruno Lacerda. "Formal Modelling for Multi-Robot Systems Under Uncertainty". Current Robotics Reports, August 15, 2023. http://dx.doi.org/10.1007/s43154-023-00104-0.
Xie, Ziyang, Lu Lu, Hanwen Wang, Bingyi Su, Yunan Liu, and Xu Xu. "Improving Workers’ Musculoskeletal Health During Human-Robot Collaboration Through Reinforcement Learning". Human Factors: The Journal of the Human Factors and Ergonomics Society, May 22, 2023, 001872082311775. http://dx.doi.org/10.1177/00187208231177574.
Rigoli, Lillian, Gaurav Patil, Patrick Nalepka, Rachel W. Kallen, Simon Hosking, Christopher Best, and Michael J. Richardson. "A Comparison of Dynamical Perceptual-Motor Primitives and Deep Reinforcement Learning for Human-Artificial Agent Training Systems". Journal of Cognitive Engineering and Decision Making, April 25, 2022, 155534342210929. http://dx.doi.org/10.1177/15553434221092930.
Fragkos, Georgios, Jay Johnson, and Eirini Eleni Tsiropoulou. "Dynamic Role-Based Access Control Policy for Smart Grid Applications: An Offline Deep Reinforcement Learning Approach". IEEE Transactions on Human-Machine Systems, 2022, 1–13. http://dx.doi.org/10.1109/thms.2022.3163185.
Sun, Yuxiang, Bo Yuan, Qi Xiang, Jiawei Zhou, Jiahui Yu, Di Dai, and Xianzhong Zhou. "Intelligent Decision-Making and Human Language Communication Based on Deep Reinforcement Learning in a Wargame Environment". IEEE Transactions on Human-Machine Systems, 2022, 1–14. http://dx.doi.org/10.1109/thms.2022.3225867.
Jokinen, Jussi P. P., Tuomo Kujala, and Antti Oulasvirta. "Multitasking in Driving as Optimal Adaptation Under Uncertainty". Human Factors: The Journal of the Human Factors and Ergonomics Society, July 30, 2020, 001872082092768. http://dx.doi.org/10.1177/0018720820927687.
Ferrão, Maria Eugénia, and Cristiano Fernandes. "O efeito-escola e a mudança - dá para mudar? Evidências da investigação Brasileira". REICE. Revista Iberoamericana sobre Calidad, Eficacia y Cambio en Educación 1, no. 1 (July 2, 2016). http://dx.doi.org/10.15366/reice2003.1.1.005.