Journal articles on the topic "Factored reinforcement learning"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 45 journal articles on the topic "Factored reinforcement learning".
Next to every work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these are available in the metadata.
Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.
Wu, Bo, Yan Peng Feng, and Hong Yan Zheng. "A Model-Based Factored Bayesian Reinforcement Learning Approach". Applied Mechanics and Materials 513-517 (February 2014): 1092–95. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.1092.
Li, Chao, Yupeng Zhang, Jianqi Wang, Yujing Hu, Shaokang Dong, Wenbin Li, Tangjie Lv, Changjie Fan, and Yang Gao. "Optimistic Value Instructors for Cooperative Multi-Agent Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17453–60. http://dx.doi.org/10.1609/aaai.v38i16.29694.
Kveton, Branislav, and Georgios Theocharous. "Structured Kernel-Based Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 569–75. http://dx.doi.org/10.1609/aaai.v27i1.8669.
Simão, Thiago D., and Matthijs T. J. Spaan. "Safe Policy Improvement with Baseline Bootstrapping in Factored Environments". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4967–74. http://dx.doi.org/10.1609/aaai.v33i01.33014967.
Truong, Van Binh, and Long Bao Le. "Electric vehicle charging design: The factored action based reinforcement learning approach". Applied Energy 359 (April 2024): 122737. http://dx.doi.org/10.1016/j.apenergy.2024.122737.
Simm, Jaak, Masashi Sugiyama, and Hirotaka Hachiya. "Multi-Task Approach to Reinforcement Learning for Factored-State Markov Decision Problems". IEICE Transactions on Information and Systems E95.D, no. 10 (2012): 2426–37. http://dx.doi.org/10.1587/transinf.e95.d.2426.
Wang, Zizhao, Caroline Wang, Xuesu Xiao, Yuke Zhu, and Peter Stone. "Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15778–86. http://dx.doi.org/10.1609/aaai.v38i14.29507.
Mohamad Hafiz Abu Bakar, Abu Ubaidah bin Shamsudin, Ruzairi Abdul Rahim, Zubair Adil Soomro, and Andi Adrianshah. "Comparison Method Q-Learning and SARSA for Simulation of Drone Controller using Reinforcement Learning". Journal of Advanced Research in Applied Sciences and Engineering Technology 30, no. 3 (May 15, 2023): 69–78. http://dx.doi.org/10.37934/araset.30.3.6978.
Kong, Minseok, and Jungmin So. "Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning". Applied Sciences 13, no. 1 (January 3, 2023): 633. http://dx.doi.org/10.3390/app13010633.
Mutti, Mirco, Riccardo De Santi, Emanuele Rossi, Juan Felipe Calderon, Michael Bronstein, and Marcello Restelli. "Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9251–59. http://dx.doi.org/10.1609/aaai.v37i8.26109.
Sui, Dong, Chenyu Ma, and Chunjie Wei. "Tactical Conflict Solver Assisting Air Traffic Controllers Using Deep Reinforcement Learning". Aerospace 10, no. 2 (February 15, 2023): 182. http://dx.doi.org/10.3390/aerospace10020182.
Hao, Zheng, Haowei Zhang, and Yipu Zhang. "Stock Portfolio Management by Using Fuzzy Ensemble Deep Reinforcement Learning Algorithm". Journal of Risk and Financial Management 16, no. 3 (March 15, 2023): 201. http://dx.doi.org/10.3390/jrfm16030201.
Chu, Yunfei, Zhinong Wei, Guoqiang Sun, Haixiang Zang, Sheng Chen, and Yizhou Zhou. "Optimal home energy management strategy: A reinforcement learning method with actor-critic using Kronecker-factored trust region". Electric Power Systems Research 212 (November 2022): 108617. http://dx.doi.org/10.1016/j.epsr.2022.108617.
Abdulhai, Marwa, Dong-Ki Kim, Matthew Riemer, Miao Liu, Gerald Tesauro, and Jonathan P. How. "Context-Specific Representation Abstraction for Deep Option Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 5959–67. http://dx.doi.org/10.1609/aaai.v36i6.20541.
Li, Hengjie, Jianghao Zhu, Yun Zhou, Qi Feng, and Donghan Feng. "Charging Station Management Strategy for Returns Maximization via Improved TD3 Deep Reinforcement Learning". International Transactions on Electrical Energy Systems 2022 (December 15, 2022): 1–14. http://dx.doi.org/10.1155/2022/6854620.
Gavane, Vaibhav. "A Measure of Real-Time Intelligence". Journal of Artificial General Intelligence 4, no. 1 (March 1, 2013): 31–48. http://dx.doi.org/10.2478/jagi-2013-0003.
Yedukondalu, Gangolu, Yasmeen Yasmeen, G. Vinoda Reddy, Ravindra Changala, Mahesh Kotha, Adapa Gopi, and Annapurna Gummadi. "Framework for Virtualized Network Functions (VNFs) in Cloud of Things Based on Network Traffic Services". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 11s (October 7, 2023): 38–48. http://dx.doi.org/10.17762/ijritcc.v11i11s.8068.
Li, Guangliang, Randy Gomez, Keisuke Nakamura, and Bo He. "Human-Centered Reinforcement Learning: A Survey". IEEE Transactions on Human-Machine Systems 49, no. 4 (August 2019): 337–49. http://dx.doi.org/10.1109/thms.2019.2912447.
Li, Zhuoran, Chao Zeng, Zhen Deng, Qinling Xu, Bingwei He, and Jianwei Zhang. "Learning Variable Impedance Control for Robotic Massage With Deep Reinforcement Learning: A Novel Learning Framework". IEEE Systems, Man, and Cybernetics Magazine 10, no. 1 (January 2024): 17–27. http://dx.doi.org/10.1109/msmc.2022.3231416.
White, Jack, Tatiana Kameneva, and Chris McCarthy. "Vision Processing for Assistive Vision: A Deep Reinforcement Learning Approach". IEEE Transactions on Human-Machine Systems 52, no. 1 (February 2022): 123–33. http://dx.doi.org/10.1109/thms.2021.3121661.
Chihara, Takanori, and Jiro Sakamoto. "Generating deceleration behavior of automatic driving by reinforcement learning that reflects passenger discomfort". International Journal of Industrial Ergonomics 91 (September 2022): 103343. http://dx.doi.org/10.1016/j.ergon.2022.103343.
Wang, Zhe, Helai Huang, Jinjun Tang, Xianwei Meng, and Lipeng Hu. "Velocity control in car-following behavior with autonomous vehicles using reinforcement learning". Accident Analysis & Prevention 174 (September 2022): 106729. http://dx.doi.org/10.1016/j.aap.2022.106729.
Salehi, V., T. T. Tran, B. Veitch, and D. Smith. "A reinforcement learning development of the FRAM for functional reward-based assessments of complex systems performance". International Journal of Industrial Ergonomics 88 (March 2022): 103271. http://dx.doi.org/10.1016/j.ergon.2022.103271.
Matarese, Marco, Alessandra Sciutti, Francesco Rea, and Silvia Rossi. "Toward Robots' Behavioral Transparency of Temporal Difference Reinforcement Learning With a Human Teacher". IEEE Transactions on Human-Machine Systems 51, no. 6 (December 2021): 578–89. http://dx.doi.org/10.1109/thms.2021.3116119.
Roy, Ananya, Moinul Hossain, and Yasunori Muromachi. "A deep reinforcement learning-based intelligent intervention framework for real-time proactive road safety management". Accident Analysis & Prevention 165 (February 2022): 106512. http://dx.doi.org/10.1016/j.aap.2021.106512.
Gong, Yaobang, Mohamed Abdel-Aty, Jinghui Yuan, and Qing Cai. "Multi-Objective reinforcement learning approach for improving safety at intersections with adaptive traffic signal control". Accident Analysis & Prevention 144 (September 2020): 105655. http://dx.doi.org/10.1016/j.aap.2020.105655.
Yang, Kui, Mohammed Quddus, and Constantinos Antoniou. "Developing a new real-time traffic safety management framework for urban expressways utilizing reinforcement learning tree". Accident Analysis & Prevention 178 (December 2022): 106848. http://dx.doi.org/10.1016/j.aap.2022.106848.
Qin, ShuJin, ZhiLiang Bi, Jiacun Wang, Shixin Liu, XiWang Guo, Ziyan Zhao, and Liang Qi. "Value-Based Reinforcement Learning for Selective Disassembly Sequence Optimization Problems: Demonstrating and Comparing a Proposed Model". IEEE Systems, Man, and Cybernetics Magazine 10, no. 2 (April 2024): 24–31. http://dx.doi.org/10.1109/msmc.2023.3303615.
Yan, Longhao, Ping Wang, Fan Qi, Zhuohang Xu, Ronghui Zhang, and Yu Han. "A task-level emergency experience reuse method for freeway accidents onsite disposal with policy distilled reinforcement learning". Accident Analysis & Prevention 190 (September 2023): 107179. http://dx.doi.org/10.1016/j.aap.2023.107179.
Nasernejad, Payam, Tarek Sayed, and Rushdi Alsaleh. "Modeling pedestrian behavior in pedestrian-vehicle near misses: A continuous Gaussian Process Inverse Reinforcement Learning (GP-IRL) approach". Accident Analysis & Prevention 161 (October 2021): 106355. http://dx.doi.org/10.1016/j.aap.2021.106355.
Guo, Hongyu, Kun Xie, and Mehdi Keyvan-Ekbatani. "Modeling driver's evasive behavior during safety-critical lane changes: Two-dimensional time-to-collision and deep reinforcement learning". Accident Analysis & Prevention 186 (June 2023): 107063. http://dx.doi.org/10.1016/j.aap.2023.107063.
Jin, Jieling, Ye Li, Helai Huang, Yuxuan Dong, and Pan Liu. "A variable speed limit control approach for freeway tunnels based on the model-based reinforcement learning framework with safety perception". Accident Analysis & Prevention 201 (June 2024): 107570. http://dx.doi.org/10.1016/j.aap.2024.107570.
Vandaele, Mathilde, and Sanna Stålhammar. "'Hope dies, action begins?' The role of hope for proactive sustainability engagement among university students". International Journal of Sustainability in Higher Education 23, no. 8 (August 25, 2022): 272–89. http://dx.doi.org/10.1108/ijshe-11-2021-0463.
Zhang, Gongquan, Fangrong Chang, Jieling Jin, Fan Yang, and Helai Huang. "Multi-objective deep reinforcement learning approach for adaptive traffic signal control system with concurrent optimization of safety, efficiency, and decarbonization at intersections". Accident Analysis & Prevention 199 (May 2024): 107451. http://dx.doi.org/10.1016/j.aap.2023.107451.
Hoffmann, Patrick, Kirill Gorelik, and Valentin Ivanov. "Comparison of Reinforcement Learning and Model Predictive Control for Automated Generation of Optimal Control for Dynamic Systems within a Design Space Exploration Framework". International Journal of Automotive Engineering 15, no. 1 (2024): 19–26. http://dx.doi.org/10.20485/jsaeijae.15.1_19.
Wu, Bo, Yanpeng Feng, and Hongyan Zheng. "Model-based Bayesian Reinforcement Learning in Factored Markov Decision Process". Journal of Computers 9, no. 4 (April 1, 2014). http://dx.doi.org/10.4304/jcp.9.4.845-850.
Xu, Jianyu, Bin Liu, Xiujie Zhao, and Xiao-Lin Wang. "Online reinforcement learning for condition-based group maintenance using factored Markov decision processes". European Journal of Operational Research, November 2023. http://dx.doi.org/10.1016/j.ejor.2023.11.039.
Amato, Christopher, and Frans Oliehoek. "Scalable Planning and Learning for Multiagent POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 29, no. 1 (February 18, 2015). http://dx.doi.org/10.1609/aaai.v29i1.9439.
Street, Charlie, Masoumeh Mansouri, and Bruno Lacerda. "Formal Modelling for Multi-Robot Systems Under Uncertainty". Current Robotics Reports, August 15, 2023. http://dx.doi.org/10.1007/s43154-023-00104-0.
Xie, Ziyang, Lu Lu, Hanwen Wang, Bingyi Su, Yunan Liu, and Xu Xu. "Improving Workers' Musculoskeletal Health During Human-Robot Collaboration Through Reinforcement Learning". Human Factors: The Journal of the Human Factors and Ergonomics Society, May 22, 2023, 001872082311775. http://dx.doi.org/10.1177/00187208231177574.
Rigoli, Lillian, Gaurav Patil, Patrick Nalepka, Rachel W. Kallen, Simon Hosking, Christopher Best, and Michael J. Richardson. "A Comparison of Dynamical Perceptual-Motor Primitives and Deep Reinforcement Learning for Human-Artificial Agent Training Systems". Journal of Cognitive Engineering and Decision Making, April 25, 2022, 155534342210929. http://dx.doi.org/10.1177/15553434221092930.
Fragkos, Georgios, Jay Johnson, and Eirini Eleni Tsiropoulou. "Dynamic Role-Based Access Control Policy for Smart Grid Applications: An Offline Deep Reinforcement Learning Approach". IEEE Transactions on Human-Machine Systems, 2022, 1–13. http://dx.doi.org/10.1109/thms.2022.3163185.
Sun, Yuxiang, Bo Yuan, Qi Xiang, Jiawei Zhou, Jiahui Yu, Di Dai, and Xianzhong Zhou. "Intelligent Decision-Making and Human Language Communication Based on Deep Reinforcement Learning in a Wargame Environment". IEEE Transactions on Human-Machine Systems, 2022, 1–14. http://dx.doi.org/10.1109/thms.2022.3225867.
Jokinen, Jussi P. P., Tuomo Kujala, and Antti Oulasvirta. "Multitasking in Driving as Optimal Adaptation Under Uncertainty". Human Factors: The Journal of the Human Factors and Ergonomics Society, July 30, 2020, 001872082092768. http://dx.doi.org/10.1177/0018720820927687.
Ferrão, Maria Eugénia, and Cristiano Fernandes. "O efeito-escola e a mudança - dá para mudar? Evidências da investigação Brasileira". REICE. Revista Iberoamericana sobre Calidad, Eficacia y Cambio en Educación 1, no. 1 (July 2, 2016). http://dx.doi.org/10.15366/reice2003.1.1.005.