Journal articles on the topic "Causal reinforcement learning"
Format your citation in APA, MLA, Chicago, Harvard, and other styles
Browse the top 50 journal articles for research on the topic "Causal reinforcement learning".
Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication in .pdf format and read its abstract online, if those items are available in the metadata.
Browse journal articles across a wide range of disciplines and compile your bibliography correctly.
Madumal, Prashan, Tim Miller, Liz Sonenberg, and Frank Vetere. "Explainable Reinforcement Learning through a Causal Lens." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2493–500. http://dx.doi.org/10.1609/aaai.v34i03.5631.
Li, Dezhi, Yunjun Lu, Jianping Wu, Wenlu Zhou, and Guangjun Zeng. "Causal Reinforcement Learning for Knowledge Graph Reasoning." Applied Sciences 14, no. 6 (March 15, 2024): 2498. http://dx.doi.org/10.3390/app14062498.
Yang, Dezhi, Guoxian Yu, Jun Wang, Zhengtian Wu, and Maozu Guo. "Reinforcement Causal Structure Learning on Order Graph." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10737–44. http://dx.doi.org/10.1609/aaai.v37i9.26274.
Madumal, Prashan. "Explainable Agency in Reinforcement Learning Agents." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13724–25. http://dx.doi.org/10.1609/aaai.v34i10.7134.
Herlau, Tue, and Rasmus Larsen. "Reinforcement Learning of Causal Variables Using Mediation Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6910–17. http://dx.doi.org/10.1609/aaai.v36i6.20648.
Duong, Tri Dung, Qian Li, and Guandong Xu. "Stochastic intervention for causal inference via reinforcement learning." Neurocomputing 482 (April 2022): 40–49. http://dx.doi.org/10.1016/j.neucom.2022.01.086.
Zhang, Wei, Xuesong Wang, Haoyu Wang, and Yuhu Cheng. "Causal Meta-Reinforcement Learning for Multimodal Remote Sensing Data Classification." Remote Sensing 16, no. 6 (March 16, 2024): 1055. http://dx.doi.org/10.3390/rs16061055.
Veselic, Sebastijan, Gerhard Jocham, Christian Gausterer, Bernhard Wagner, Miriam Ernhoefer-Reßler, Rupert Lanzenberger, Christoph Eisenegger, Claus Lamm, and Annabel Losecaat Vermeer. "A causal role of estradiol in human reinforcement learning." Hormones and Behavior 134 (August 2021): 105022. http://dx.doi.org/10.1016/j.yhbeh.2021.105022.
Zhou, Zhengyuan, Michael Bloem, and Nicholas Bambos. "Infinite Time Horizon Maximum Causal Entropy Inverse Reinforcement Learning." IEEE Transactions on Automatic Control 63, no. 9 (September 2018): 2787–802. http://dx.doi.org/10.1109/tac.2017.2775960.
Wang, Zizhao, Caroline Wang, Xuesu Xiao, Yuke Zhu, and Peter Stone. "Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15778–86. http://dx.doi.org/10.1609/aaai.v38i14.29507.
Du, Xiao, Yutong Ye, Pengyu Zhang, Yaning Yang, Mingsong Chen, and Ting Wang. "Situation-Dependent Causal Influence-Based Cooperative Multi-Agent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17362–70. http://dx.doi.org/10.1609/aaai.v38i16.29684.
Skalse, Joar, and Alessandro Abate. "Misspecification in Inverse Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15136–43. http://dx.doi.org/10.1609/aaai.v37i12.26766.
Buehner, Marc J., and Jon May. "Abolishing the effect of reinforcement delay on human causal learning." Quarterly Journal of Experimental Psychology Section B 57, no. 2b (April 2004): 179–91. http://dx.doi.org/10.1080/02724990344000123.
Yang, Shantian, Bo Yang, Zheng Zeng, and Zhongfeng Kang. "Causal inference multi-agent reinforcement learning for traffic signal control." Information Fusion 94 (June 2023): 243–56. http://dx.doi.org/10.1016/j.inffus.2023.02.009.
Mutti, Mirco, Riccardo De Santi, Emanuele Rossi, Juan Felipe Calderon, Michael Bronstein, and Marcello Restelli. "Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9251–59. http://dx.doi.org/10.1609/aaai.v37i8.26109.
Eka, Eka Madya, Yunyun Yudiana, and Komarudin. "Effect of reinceforcement on physical learning on motivation learning." Gladi : Jurnal Ilmu Keolahragaan 13, no. 1 (March 31, 2022): 41–46. http://dx.doi.org/10.21009/gjik.131.04.
Mehta, Neville, Soumya Ray, Prasad Tadepalli, and Thomas Dietterich. "Automatic Discovery and Transfer of Task Hierarchies in Reinforcement Learning." AI Magazine 32, no. 1 (March 16, 2011): 35. http://dx.doi.org/10.1609/aimag.v32i1.2342.
Valverde, Gabriel, David Quesada, Pedro Larrañaga, and Concha Bielza. "Causal reinforcement learning based on Bayesian networks applied to industrial settings." Engineering Applications of Artificial Intelligence 125 (October 2023): 106657. http://dx.doi.org/10.1016/j.engappai.2023.106657.
Sun, Yuewen, Erli Wang, Biwei Huang, Chaochao Lu, Lu Feng, Changyin Sun, and Kun Zhang. "ACAMDA: Improving Data Efficiency in Reinforcement Learning through Guided Counterfactual Data Augmentation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15193–201. http://dx.doi.org/10.1609/aaai.v38i14.29442.
Zhu, Liefeng, and Yongbiao Luo. "Application of Bayesian Networks and Reinforcement Learning in Intelligent Control Systems in Uncertain Environments." 電腦學刊 35, no. 2 (April 2024): 001–16. http://dx.doi.org/10.53106/199115992024043502001.
Buehner, Marc J., and Jon May. "Rethinking Temporal Contiguity and the Judgement of Causality: Effects of Prior Knowledge, Experience, and Reinforcement Procedure." Quarterly Journal of Experimental Psychology Section A 56, no. 5 (July 2003): 865–90. http://dx.doi.org/10.1080/02724980244000675.
Sanghvi, Navyata, Shinnosuke Usami, Mohit Sharma, Joachim Groeger, and Kris Kitani. "Inverse Reinforcement Learning with Explicit Policy Estimates." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9472–80. http://dx.doi.org/10.1609/aaai.v35i11.17141.
Agarwal, Anish. "Causal Inference for Social and Engineering Systems." ACM SIGMETRICS Performance Evaluation Review 50, no. 3 (December 30, 2022): 7–11. http://dx.doi.org/10.1145/3579342.3579345.
Gao, Haichuan, Tianren Zhang, Zhile Yang, Yuqing Guo, Jinsheng Ren, Shangqi Guo, and Feng Chen. "Fast Counterfactual Inference for History-Based Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7613–23. http://dx.doi.org/10.1609/aaai.v37i6.25924.
Martinez-Gil, Francisco, Miguel Lozano, Ignacio García-Fernández, Pau Romero, Dolors Serra, and Rafael Sebastián. "Using Inverse Reinforcement Learning with Real Trajectories to Get More Trustworthy Pedestrian Simulations." Mathematics 8, no. 9 (September 2, 2020): 1479. http://dx.doi.org/10.3390/math8091479.
Lee, Kyungjae, Sungjoon Choi, and Songhwai Oh. "Sparse Markov Decision Processes With Causal Sparse Tsallis Entropy Regularization for Reinforcement Learning." IEEE Robotics and Automation Letters 3, no. 3 (July 2018): 1466–73. http://dx.doi.org/10.1109/lra.2018.2800085.
Ghorbel, N., S.-A. Addouche, and A. El Mhamedi. "Forward management of spare parts stock shortages via causal reasoning using reinforcement learning." IFAC-PapersOnLine 48, no. 3 (2015): 1061–66. http://dx.doi.org/10.1016/j.ifacol.2015.06.224.
Nadim, Karim, Mohamed-Salah Ouali, Hakim Ghezzaz, and Ahmed Ragab. "Learn-to-supervise: Causal reinforcement learning for high-level control in industrial processes." Engineering Applications of Artificial Intelligence 126 (November 2023): 106853. http://dx.doi.org/10.1016/j.engappai.2023.106853.
Zhu, Zheng-Mao, Shengyi Jiang, Yu-Ren Liu, Yang Yu, and Kun Zhang. "Invariant Action Effect Model for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9260–68. http://dx.doi.org/10.1609/aaai.v36i8.20913.
Zhou, Haoran, Junliang Lu, Ziyu Li, and Xinyi Zhang. "Study on whether marriage affects depression based on causal inference." Applied and Computational Engineering 6, no. 1 (June 14, 2023): 1661–72. http://dx.doi.org/10.54254/2755-2721/6/20230827.
Djeumou, Franck, Murat Cubuktepe, Craig Lennon, and Ufuk Topcu. "Task-Guided Inverse Reinforcement Learning under Partial Information." Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 53–61. http://dx.doi.org/10.1609/icaps.v32i1.19785.
Edmonds, Mark, Xiaojian Ma, Siyuan Qi, Yixin Zhu, Hongjing Lu, and Song-Chun Zhu. "Theory-Based Causal Transfer: Integrating Instance-Level Induction and Abstract-Level Structure Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1283–91. http://dx.doi.org/10.1609/aaai.v34i02.5483.
Wang, Yuchen, Mitsuhiro Hayashibe, and Dai Owaki. "Data-Driven Policy Learning Methods from Biological Behavior: A Systematic Review." Applied Sciences 14, no. 10 (May 9, 2024): 4038. http://dx.doi.org/10.3390/app14104038.
Barnby, Joseph M., Mitul A. Mehta, and Michael Moutoussis. "The computational relationship between reinforcement learning, social inference, and paranoia." PLOS Computational Biology 18, no. 7 (July 25, 2022): e1010326. http://dx.doi.org/10.1371/journal.pcbi.1010326.
Mokhtarian, Ehsan, Mohmmadsadegh Khorasani, Jalal Etesami, and Negar Kiyavash. "Novel Ordering-Based Approaches for Causal Structure Learning in the Presence of Unobserved Variables." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 12260–68. http://dx.doi.org/10.1609/aaai.v37i10.26445.
Yang, Chao-Han Huck, I.-Te Danny Hung, Yi Ouyang, and Pin-Yu Chen. "Training a Resilient Q-network against Observational Interference." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8814–22. http://dx.doi.org/10.1609/aaai.v36i8.20862.
Hasanah, Uswatun, Luluk Salimah Oktavia, and Putri Silaturrahmi. "INCREASING STUDENTS’ LEARNING INTEREST THROUGH BLENDED LEARNING IN THE EDUCATIONAL PSYCHOLOGY COURSE." JURNAL PAJAR (Pendidikan dan Pengajaran) 7, no. 1 (January 31, 2023): 181. http://dx.doi.org/10.33578/pjr.v7i1.9069.
Weissengruber, Sebastian, Sang Wan Lee, John P. O’Doherty, and Christian C. Ruff. "Neurostimulation Reveals Context-Dependent Arbitration Between Model-Based and Model-Free Reinforcement Learning." Cerebral Cortex 29, no. 11 (March 19, 2019): 4850–62. http://dx.doi.org/10.1093/cercor/bhz019.
Zhang, Yuzhu, and Hao Xu. "Reconfigurable-Intelligent-Surface-Enhanced Dynamic Resource Allocation for the Social Internet of Electric Vehicle Charging Networks with Causal-Structure-Based Reinforcement Learning." Future Internet 16, no. 5 (May 11, 2024): 165. http://dx.doi.org/10.3390/fi16050165.
Elder, Jacob, Tyler Davis, and Brent L. Hughes. "Learning About the Self: Motives for Coherence and Positivity Constrain Learning From Self-Relevant Social Feedback." Psychological Science 33, no. 4 (March 28, 2022): 629–47. http://dx.doi.org/10.1177/09567976211045934.
Nishina, Kyosuke, and Shigeru Fujita. "A World Model Reinforcement Learning Method That Is Not Distracted by Background Information by Using Representation Learning via Invariant Causal Mechanisms for Non-Contrastive Learning." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 36, no. 1 (February 15, 2024): 571–81. http://dx.doi.org/10.3156/jsoft.36.1_571.
Kawato, Mitsuo, and Aurelio Cortese. "From internal models toward metacognitive AI." Biological Cybernetics 115, no. 5 (October 2021): 415–30. http://dx.doi.org/10.1007/s00422-021-00904-7.
Liu, Xiuwen, Xinghua Lei, Xin Li, and Sirui Chen. "Self-Interested Coalitional Crowdsensing for Multi-Agent Interactive Environment Monitoring." Sensors 24, no. 2 (January 14, 2024): 509. http://dx.doi.org/10.3390/s24020509.
Syarah, Evi, Asdar Asdar, and Mas'ud Muhamadiyah. "Pengaruh Pemberian Penguatan Terhadap Motivasi Belajar Siswa Pada Mata Pelajaran Bahasa Indonesia Kelas V SDN Se-Kecamatan Suppa Kabupaten Pinrang." Bosowa Journal of Education 2, no. 1 (December 24, 2021): 33–39. http://dx.doi.org/10.35965/bje.v2i1.1178.
Wang, Zhicheng, Biwei Huang, Shikui Tu, Kun Zhang, and Lei Xu. "DeepTrader: A Deep Reinforcement Learning Approach for Risk-Return Balanced Portfolio Management with Market Conditions Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 643–50. http://dx.doi.org/10.1609/aaai.v35i1.16144.
Zhang, Xianjie, Yu Liu, Wenjun Li, and Chen Gong. "Pruning the Communication Bandwidth between Reinforcement Learning Agents through Causal Inference: An Innovative Approach to Designing a Smart Grid Power System." Sensors 22, no. 20 (October 13, 2022): 7785. http://dx.doi.org/10.3390/s22207785.
McMilin, Emily. "Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18778–88. http://dx.doi.org/10.1609/aaai.v38i17.29842.
Palacios Garay, Jessica Paola, Jorge Luis Escalante, Juan Carlos Chumacero Calle, Inocenta Marivel Cavarjal Bautista, Segundo Perez-Saavedra, and Jose Nieto-Gamboa. "Impact of Emotional Style on Academic Goals in Pandemic Times." International Journal of Higher Education 9, no. 9 (November 2, 2020): 21. http://dx.doi.org/10.5430/ijhe.v9n9p21.
Shen, Lingdong, Chunlei Huo, Nuo Xu, Chaowei Han, and Zichen Wang. "Learn How to See: Collaborative Embodied Learning for Object Detection and Camera Adjusting." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4793–801. http://dx.doi.org/10.1609/aaai.v38i5.28281.
van der Oord, Saskia, and Gail Tripp. "How to Improve Behavioral Parent and Teacher Training for Children with ADHD: Integrating Empirical Research on Learning and Motivation into Treatment." Clinical Child and Family Psychology Review 23, no. 4 (September 24, 2020): 577–604. http://dx.doi.org/10.1007/s10567-020-00327-z.