Journal articles on the topic "Causal reinforcement learning"
Below are the top-50 journal articles for research on the topic "Causal reinforcement learning".
Madumal, Prashan, Tim Miller, Liz Sonenberg, and Frank Vetere. "Explainable Reinforcement Learning through a Causal Lens." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2493–500. http://dx.doi.org/10.1609/aaai.v34i03.5631.
Li, Dezhi, Yunjun Lu, Jianping Wu, Wenlu Zhou, and Guangjun Zeng. "Causal Reinforcement Learning for Knowledge Graph Reasoning." Applied Sciences 14, no. 6 (March 15, 2024): 2498. http://dx.doi.org/10.3390/app14062498.
Yang, Dezhi, Guoxian Yu, Jun Wang, Zhengtian Wu, and Maozu Guo. "Reinforcement Causal Structure Learning on Order Graph." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10737–44. http://dx.doi.org/10.1609/aaai.v37i9.26274.
Madumal, Prashan. "Explainable Agency in Reinforcement Learning Agents." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13724–25. http://dx.doi.org/10.1609/aaai.v34i10.7134.
Herlau, Tue, and Rasmus Larsen. "Reinforcement Learning of Causal Variables Using Mediation Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6910–17. http://dx.doi.org/10.1609/aaai.v36i6.20648.
Duong, Tri Dung, Qian Li, and Guandong Xu. "Stochastic intervention for causal inference via reinforcement learning." Neurocomputing 482 (April 2022): 40–49. http://dx.doi.org/10.1016/j.neucom.2022.01.086.
Zhang, Wei, Xuesong Wang, Haoyu Wang, and Yuhu Cheng. "Causal Meta-Reinforcement Learning for Multimodal Remote Sensing Data Classification." Remote Sensing 16, no. 6 (March 16, 2024): 1055. http://dx.doi.org/10.3390/rs16061055.
Veselic, Sebastijan, Gerhard Jocham, Christian Gausterer, Bernhard Wagner, Miriam Ernhoefer-Reßler, Rupert Lanzenberger, Christoph Eisenegger, Claus Lamm, and Annabel Losecaat Vermeer. "A causal role of estradiol in human reinforcement learning." Hormones and Behavior 134 (August 2021): 105022. http://dx.doi.org/10.1016/j.yhbeh.2021.105022.
Zhou, Zhengyuan, Michael Bloem, and Nicholas Bambos. "Infinite Time Horizon Maximum Causal Entropy Inverse Reinforcement Learning." IEEE Transactions on Automatic Control 63, no. 9 (September 2018): 2787–802. http://dx.doi.org/10.1109/tac.2017.2775960.
Wang, Zizhao, Caroline Wang, Xuesu Xiao, Yuke Zhu, and Peter Stone. "Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15778–86. http://dx.doi.org/10.1609/aaai.v38i14.29507.
Du, Xiao, Yutong Ye, Pengyu Zhang, Yaning Yang, Mingsong Chen, and Ting Wang. "Situation-Dependent Causal Influence-Based Cooperative Multi-Agent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17362–70. http://dx.doi.org/10.1609/aaai.v38i16.29684.
Skalse, Joar, and Alessandro Abate. "Misspecification in Inverse Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15136–43. http://dx.doi.org/10.1609/aaai.v37i12.26766.
Buehner, Marc J., and Jon May. "Abolishing the effect of reinforcement delay on human causal learning." Quarterly Journal of Experimental Psychology Section B 57, no. 2b (April 2004): 179–91. http://dx.doi.org/10.1080/02724990344000123.
Yang, Shantian, Bo Yang, Zheng Zeng, and Zhongfeng Kang. "Causal inference multi-agent reinforcement learning for traffic signal control." Information Fusion 94 (June 2023): 243–56. http://dx.doi.org/10.1016/j.inffus.2023.02.009.
Mutti, Mirco, Riccardo De Santi, Emanuele Rossi, Juan Felipe Calderon, Michael Bronstein, and Marcello Restelli. "Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9251–59. http://dx.doi.org/10.1609/aaai.v37i8.26109.
Eka, Eka Madya, Yunyun Yudiana, and Komarudin. "Effect of reinceforcement on physical learning on motivation learning." Gladi : Jurnal Ilmu Keolahragaan 13, no. 1 (March 31, 2022): 41–46. http://dx.doi.org/10.21009/gjik.131.04.
Mehta, Neville, Soumya Ray, Prasad Tadepalli, and Thomas Dietterich. "Automatic Discovery and Transfer of Task Hierarchies in Reinforcement Learning." AI Magazine 32, no. 1 (March 16, 2011): 35. http://dx.doi.org/10.1609/aimag.v32i1.2342.
Valverde, Gabriel, David Quesada, Pedro Larrañaga, and Concha Bielza. "Causal reinforcement learning based on Bayesian networks applied to industrial settings." Engineering Applications of Artificial Intelligence 125 (October 2023): 106657. http://dx.doi.org/10.1016/j.engappai.2023.106657.
Sun, Yuewen, Erli Wang, Biwei Huang, Chaochao Lu, Lu Feng, Changyin Sun, and Kun Zhang. "ACAMDA: Improving Data Efficiency in Reinforcement Learning through Guided Counterfactual Data Augmentation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15193–201. http://dx.doi.org/10.1609/aaai.v38i14.29442.
Zhu, Liefeng, and Yongbiao Luo. "Application of Bayesian Networks and Reinforcement Learning in Intelligent Control Systems in Uncertain Environments." 電腦學刊 35, no. 2 (April 2024): 001–16. http://dx.doi.org/10.53106/199115992024043502001.
Buehner, Marc J., and Jon May. "Rethinking Temporal Contiguity and the Judgement of Causality: Effects of Prior Knowledge, Experience, and Reinforcement Procedure." Quarterly Journal of Experimental Psychology Section A 56, no. 5 (July 2003): 865–90. http://dx.doi.org/10.1080/02724980244000675.
Sanghvi, Navyata, Shinnosuke Usami, Mohit Sharma, Joachim Groeger, and Kris Kitani. "Inverse Reinforcement Learning with Explicit Policy Estimates." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9472–80. http://dx.doi.org/10.1609/aaai.v35i11.17141.
Agarwal, Anish. "Causal Inference for Social and Engineering Systems." ACM SIGMETRICS Performance Evaluation Review 50, no. 3 (December 30, 2022): 7–11. http://dx.doi.org/10.1145/3579342.3579345.
Gao, Haichuan, Tianren Zhang, Zhile Yang, Yuqing Guo, Jinsheng Ren, Shangqi Guo, and Feng Chen. "Fast Counterfactual Inference for History-Based Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7613–23. http://dx.doi.org/10.1609/aaai.v37i6.25924.
Martinez-Gil, Francisco, Miguel Lozano, Ignacio García-Fernández, Pau Romero, Dolors Serra, and Rafael Sebastián. "Using Inverse Reinforcement Learning with Real Trajectories to Get More Trustworthy Pedestrian Simulations." Mathematics 8, no. 9 (September 2, 2020): 1479. http://dx.doi.org/10.3390/math8091479.
Lee, Kyungjae, Sungjoon Choi, and Songhwai Oh. "Sparse Markov Decision Processes With Causal Sparse Tsallis Entropy Regularization for Reinforcement Learning." IEEE Robotics and Automation Letters 3, no. 3 (July 2018): 1466–73. http://dx.doi.org/10.1109/lra.2018.2800085.
Ghorbel, N., S.-A. Addouche, and A. El Mhamedi. "Forward management of spare parts stock shortages via causal reasoning using reinforcement learning." IFAC-PapersOnLine 48, no. 3 (2015): 1061–66. http://dx.doi.org/10.1016/j.ifacol.2015.06.224.
Nadim, Karim, Mohamed-Salah Ouali, Hakim Ghezzaz, and Ahmed Ragab. "Learn-to-supervise: Causal reinforcement learning for high-level control in industrial processes." Engineering Applications of Artificial Intelligence 126 (November 2023): 106853. http://dx.doi.org/10.1016/j.engappai.2023.106853.
Zhu, Zheng-Mao, Shengyi Jiang, Yu-Ren Liu, Yang Yu, and Kun Zhang. "Invariant Action Effect Model for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9260–68. http://dx.doi.org/10.1609/aaai.v36i8.20913.
Zhou, Haoran, Junliang Lu, Ziyu Li, and Xinyi Zhang. "Study on whether marriage affects depression based on causal inference." Applied and Computational Engineering 6, no. 1 (June 14, 2023): 1661–72. http://dx.doi.org/10.54254/2755-2721/6/20230827.
Djeumou, Franck, Murat Cubuktepe, Craig Lennon, and Ufuk Topcu. "Task-Guided Inverse Reinforcement Learning under Partial Information." Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 53–61. http://dx.doi.org/10.1609/icaps.v32i1.19785.
Edmonds, Mark, Xiaojian Ma, Siyuan Qi, Yixin Zhu, Hongjing Lu, and Song-Chun Zhu. "Theory-Based Causal Transfer: Integrating Instance-Level Induction and Abstract-Level Structure Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1283–91. http://dx.doi.org/10.1609/aaai.v34i02.5483.
Wang, Yuchen, Mitsuhiro Hayashibe, and Dai Owaki. "Data-Driven Policy Learning Methods from Biological Behavior: A Systematic Review." Applied Sciences 14, no. 10 (May 9, 2024): 4038. http://dx.doi.org/10.3390/app14104038.
Barnby, Joseph M., Mitul A. Mehta, and Michael Moutoussis. "The computational relationship between reinforcement learning, social inference, and paranoia." PLOS Computational Biology 18, no. 7 (July 25, 2022): e1010326. http://dx.doi.org/10.1371/journal.pcbi.1010326.
Mokhtarian, Ehsan, Mohmmadsadegh Khorasani, Jalal Etesami, and Negar Kiyavash. "Novel Ordering-Based Approaches for Causal Structure Learning in the Presence of Unobserved Variables." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 12260–68. http://dx.doi.org/10.1609/aaai.v37i10.26445.
Yang, Chao-Han Huck, I.-Te Danny Hung, Yi Ouyang, and Pin-Yu Chen. "Training a Resilient Q-network against Observational Interference." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8814–22. http://dx.doi.org/10.1609/aaai.v36i8.20862.
Hasanah, Uswatun, Luluk Salimah Oktavia, and Putri Silaturrahmi. "INCREASING STUDENTS’ LEARNING INTEREST THROUGH BLENDED LEARNING IN THE EDUCATIONAL PSYCHOLOGY COURSE." JURNAL PAJAR (Pendidikan dan Pengajaran) 7, no. 1 (January 31, 2023): 181. http://dx.doi.org/10.33578/pjr.v7i1.9069.
Weissengruber, Sebastian, Sang Wan Lee, John P. O’Doherty, and Christian C. Ruff. "Neurostimulation Reveals Context-Dependent Arbitration Between Model-Based and Model-Free Reinforcement Learning." Cerebral Cortex 29, no. 11 (March 19, 2019): 4850–62. http://dx.doi.org/10.1093/cercor/bhz019.
Zhang, Yuzhu, and Hao Xu. "Reconfigurable-Intelligent-Surface-Enhanced Dynamic Resource Allocation for the Social Internet of Electric Vehicle Charging Networks with Causal-Structure-Based Reinforcement Learning." Future Internet 16, no. 5 (May 11, 2024): 165. http://dx.doi.org/10.3390/fi16050165.
Elder, Jacob, Tyler Davis, and Brent L. Hughes. "Learning About the Self: Motives for Coherence and Positivity Constrain Learning From Self-Relevant Social Feedback." Psychological Science 33, no. 4 (March 28, 2022): 629–47. http://dx.doi.org/10.1177/09567976211045934.
NISHINA, Kyosuke, and Shigeru FUJITA. "A World Model Reinforcement Learning Method That Is Not Distracted by Background Information by Using Representation Learning via Invariant Causal Mechanisms for Non-Contrastive Learning." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 36, no. 1 (February 15, 2024): 571–81. http://dx.doi.org/10.3156/jsoft.36.1_571.
Kawato, Mitsuo, and Aurelio Cortese. "From internal models toward metacognitive AI." Biological Cybernetics 115, no. 5 (October 2021): 415–30. http://dx.doi.org/10.1007/s00422-021-00904-7.
Liu, Xiuwen, Xinghua Lei, Xin Li, and Sirui Chen. "Self-Interested Coalitional Crowdsensing for Multi-Agent Interactive Environment Monitoring." Sensors 24, no. 2 (January 14, 2024): 509. http://dx.doi.org/10.3390/s24020509.
Syarah, Evi, Asdar Asdar, and Mas'ud Muhamadiyah. "Pengaruh Pemberian Penguatan Terhadap Motivasi Belajar Siswa Pada Mata Pelajaran Bahasa Indonesia Kelas V SDN Se-Kecamatan Suppa Kabupaten Pinrang." Bosowa Journal of Education 2, no. 1 (December 24, 2021): 33–39. http://dx.doi.org/10.35965/bje.v2i1.1178.
Wang, Zhicheng, Biwei Huang, Shikui Tu, Kun Zhang, and Lei Xu. "DeepTrader: A Deep Reinforcement Learning Approach for Risk-Return Balanced Portfolio Management with Market Conditions Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 643–50. http://dx.doi.org/10.1609/aaai.v35i1.16144.
Zhang, Xianjie, Yu Liu, Wenjun Li, and Chen Gong. "Pruning the Communication Bandwidth between Reinforcement Learning Agents through Causal Inference: An Innovative Approach to Designing a Smart Grid Power System." Sensors 22, no. 20 (October 13, 2022): 7785. http://dx.doi.org/10.3390/s22207785.
McMilin, Emily. "Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18778–88. http://dx.doi.org/10.1609/aaai.v38i17.29842.
Palacios Garay, Jessica Paola, Jorge Luis Escalante, Juan Carlos Chumacero Calle, Inocenta Marivel Cavarjal Bautista, Segundo Perez-Saavedra, and Jose Nieto-Gamboa. "Impact of Emotional Style on Academic Goals in Pandemic Times." International Journal of Higher Education 9, no. 9 (November 2, 2020): 21. http://dx.doi.org/10.5430/ijhe.v9n9p21.
Shen, Lingdong, Chunlei Huo, Nuo Xu, Chaowei Han, and Zichen Wang. "Learn How to See: Collaborative Embodied Learning for Object Detection and Camera Adjusting." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4793–801. http://dx.doi.org/10.1609/aaai.v38i5.28281.
van der Oord, Saskia, and Gail Tripp. "How to Improve Behavioral Parent and Teacher Training for Children with ADHD: Integrating Empirical Research on Learning and Motivation into Treatment." Clinical Child and Family Psychology Review 23, no. 4 (September 24, 2020): 577–604. http://dx.doi.org/10.1007/s10567-020-00327-z.