Journal articles on the topic "Causal reinforcement learning"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles for research on the topic "Causal reinforcement learning".
Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.); a short illustrative sketch of such formatting follows below.
You can also download the full text of the scholarly publication as a PDF and read its online abstract if the relevant parameters are available in the metadata.
Browse journal articles from a wide range of disciplines and compile your bibliography correctly.
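As an illustration only, and not this site's actual formatting code, the sketch below shows how a single entry from the list that follows (Madumal et al. 2020) could be rendered in two of the citation styles mentioned above. The field names and helper functions are hypothetical assumptions introduced for this example.

```python
# A minimal, hypothetical sketch (not this site's implementation) showing how one
# entry from the list below could be rendered in two of the citation styles
# mentioned above. Field names and helpers are assumptions for illustration.

entry = {
    "authors": ["Madumal, Prashan", "Miller, Tim", "Sonenberg, Liz", "Vetere, Frank"],
    "year": 2020,
    "title": "Explainable Reinforcement Learning through a Causal Lens",
    "journal": "Proceedings of the AAAI Conference on Artificial Intelligence",
    "volume": 34,
    "issue": "03",
    "pages": "2493-2500",
    "doi": "http://dx.doi.org/10.1609/aaai.v34i03.5631",
}

def apa(e):
    # Simplified APA 7 pattern: surnames with initials, ampersand before the last author.
    def initials(name):
        last, first = name.split(", ")
        return f"{last}, " + " ".join(part[0] + "." for part in first.split())
    names = [initials(a) for a in e["authors"]]
    authors = ", ".join(names[:-1]) + ", & " + names[-1]
    return (f"{authors} ({e['year']}). {e['title']}. {e['journal']}, "
            f"{e['volume']}({e['issue']}), {e['pages']}. {e['doi']}")

def chicago(e):
    # Simplified Chicago pattern: first author inverted, remaining authors in natural order.
    first = e["authors"][0]
    middle = ", ".join(f'{n.split(", ")[1]} {n.split(", ")[0]}' for n in e["authors"][1:-1])
    last_surname, last_given = e["authors"][-1].split(", ")
    others = (middle + ", and " if middle else "and ") + f"{last_given} {last_surname}"
    return (f'{first}, {others}. "{e["title"]}." {e["journal"]} {e["volume"]}, '
            f'no. {e["issue"]} ({e["year"]}): {e["pages"]}. {e["doi"]}')

print(apa(entry))
print(chicago(entry))
```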
Madumal, Prashan, Tim Miller, Liz Sonenberg, and Frank Vetere. "Explainable Reinforcement Learning through a Causal Lens." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2493–500. http://dx.doi.org/10.1609/aaai.v34i03.5631.
Li, Dezhi, Yunjun Lu, Jianping Wu, Wenlu Zhou, and Guangjun Zeng. "Causal Reinforcement Learning for Knowledge Graph Reasoning." Applied Sciences 14, no. 6 (March 15, 2024): 2498. http://dx.doi.org/10.3390/app14062498.
Yang, Dezhi, Guoxian Yu, Jun Wang, Zhengtian Wu, and Maozu Guo. "Reinforcement Causal Structure Learning on Order Graph." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10737–44. http://dx.doi.org/10.1609/aaai.v37i9.26274.
Madumal, Prashan. "Explainable Agency in Reinforcement Learning Agents." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13724–25. http://dx.doi.org/10.1609/aaai.v34i10.7134.
Herlau, Tue, and Rasmus Larsen. "Reinforcement Learning of Causal Variables Using Mediation Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6910–17. http://dx.doi.org/10.1609/aaai.v36i6.20648.
Duong, Tri Dung, Qian Li, and Guandong Xu. "Stochastic intervention for causal inference via reinforcement learning." Neurocomputing 482 (April 2022): 40–49. http://dx.doi.org/10.1016/j.neucom.2022.01.086.
Zhang, Wei, Xuesong Wang, Haoyu Wang, and Yuhu Cheng. "Causal Meta-Reinforcement Learning for Multimodal Remote Sensing Data Classification." Remote Sensing 16, no. 6 (March 16, 2024): 1055. http://dx.doi.org/10.3390/rs16061055.
Veselic, Sebastijan, Gerhard Jocham, Christian Gausterer, Bernhard Wagner, Miriam Ernhoefer-Reßler, Rupert Lanzenberger, Christoph Eisenegger, Claus Lamm, and Annabel Losecaat Vermeer. "A causal role of estradiol in human reinforcement learning." Hormones and Behavior 134 (August 2021): 105022. http://dx.doi.org/10.1016/j.yhbeh.2021.105022.
Zhou, Zhengyuan, Michael Bloem, and Nicholas Bambos. "Infinite Time Horizon Maximum Causal Entropy Inverse Reinforcement Learning." IEEE Transactions on Automatic Control 63, no. 9 (September 2018): 2787–802. http://dx.doi.org/10.1109/tac.2017.2775960.
Wang, Zizhao, Caroline Wang, Xuesu Xiao, Yuke Zhu, and Peter Stone. "Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15778–86. http://dx.doi.org/10.1609/aaai.v38i14.29507.
Du, Xiao, Yutong Ye, Pengyu Zhang, Yaning Yang, Mingsong Chen, and Ting Wang. "Situation-Dependent Causal Influence-Based Cooperative Multi-Agent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17362–70. http://dx.doi.org/10.1609/aaai.v38i16.29684.
Skalse, Joar, and Alessandro Abate. "Misspecification in Inverse Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15136–43. http://dx.doi.org/10.1609/aaai.v37i12.26766.
Buehner, Marc J., and Jon May. "Abolishing the effect of reinforcement delay on human causal learning." Quarterly Journal of Experimental Psychology Section B 57, no. 2b (April 2004): 179–91. http://dx.doi.org/10.1080/02724990344000123.
Yang, Shantian, Bo Yang, Zheng Zeng, and Zhongfeng Kang. "Causal inference multi-agent reinforcement learning for traffic signal control." Information Fusion 94 (June 2023): 243–56. http://dx.doi.org/10.1016/j.inffus.2023.02.009.
Mutti, Mirco, Riccardo De Santi, Emanuele Rossi, Juan Felipe Calderon, Michael Bronstein, and Marcello Restelli. "Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9251–59. http://dx.doi.org/10.1609/aaai.v37i8.26109.
Eka, Eka Madya, Yunyun Yudiana, and Komarudin. "Effect of reinceforcement on physical learning on motivation learning." Gladi : Jurnal Ilmu Keolahragaan 13, no. 1 (March 31, 2022): 41–46. http://dx.doi.org/10.21009/gjik.131.04.
Mehta, Neville, Soumya Ray, Prasad Tadepalli, and Thomas Dietterich. "Automatic Discovery and Transfer of Task Hierarchies in Reinforcement Learning." AI Magazine 32, no. 1 (March 16, 2011): 35. http://dx.doi.org/10.1609/aimag.v32i1.2342.
Valverde, Gabriel, David Quesada, Pedro Larrañaga, and Concha Bielza. "Causal reinforcement learning based on Bayesian networks applied to industrial settings." Engineering Applications of Artificial Intelligence 125 (October 2023): 106657. http://dx.doi.org/10.1016/j.engappai.2023.106657.
Sun, Yuewen, Erli Wang, Biwei Huang, Chaochao Lu, Lu Feng, Changyin Sun, and Kun Zhang. "ACAMDA: Improving Data Efficiency in Reinforcement Learning through Guided Counterfactual Data Augmentation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15193–201. http://dx.doi.org/10.1609/aaai.v38i14.29442.
Liefeng Zhu, Liefeng Zhu, and Yongbiao Luo Liefeng Zhu. "Application of Bayesian Networks and Reinforcement Learning in Intelligent Control Systems in Uncertain Environments." 電腦學刊 35, no. 2 (April 2024): 001–16. http://dx.doi.org/10.53106/199115992024043502001.
Buehner, Marc J., and Jon May. "Rethinking Temporal Contiguity and the Judgement of Causality: Effects of Prior Knowledge, Experience, and Reinforcement Procedure." Quarterly Journal of Experimental Psychology Section A 56, no. 5 (July 2003): 865–90. http://dx.doi.org/10.1080/02724980244000675.
Sanghvi, Navyata, Shinnosuke Usami, Mohit Sharma, Joachim Groeger, and Kris Kitani. "Inverse Reinforcement Learning with Explicit Policy Estimates." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9472–80. http://dx.doi.org/10.1609/aaai.v35i11.17141.
Agarwal, Anish. "Causal Inference for Social and Engineering Systems." ACM SIGMETRICS Performance Evaluation Review 50, no. 3 (December 30, 2022): 7–11. http://dx.doi.org/10.1145/3579342.3579345.
Gao, Haichuan, Tianren Zhang, Zhile Yang, Yuqing Guo, Jinsheng Ren, Shangqi Guo, and Feng Chen. "Fast Counterfactual Inference for History-Based Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7613–23. http://dx.doi.org/10.1609/aaai.v37i6.25924.
Martinez-Gil, Francisco, Miguel Lozano, Ignacio García-Fernández, Pau Romero, Dolors Serra, and Rafael Sebastián. "Using Inverse Reinforcement Learning with Real Trajectories to Get More Trustworthy Pedestrian Simulations." Mathematics 8, no. 9 (September 2, 2020): 1479. http://dx.doi.org/10.3390/math8091479.
Lee, Kyungjae, Sungjoon Choi, and Songhwai Oh. "Sparse Markov Decision Processes With Causal Sparse Tsallis Entropy Regularization for Reinforcement Learning." IEEE Robotics and Automation Letters 3, no. 3 (July 2018): 1466–73. http://dx.doi.org/10.1109/lra.2018.2800085.
Ghorbel, N., S.-A. Addouche, and A. El Mhamedi. "Forward management of spare parts stock shortages via causal reasoning using reinforcement learning." IFAC-PapersOnLine 48, no. 3 (2015): 1061–66. http://dx.doi.org/10.1016/j.ifacol.2015.06.224.
Nadim, Karim, Mohamed-Salah Ouali, Hakim Ghezzaz, and Ahmed Ragab. "Learn-to-supervise: Causal reinforcement learning for high-level control in industrial processes." Engineering Applications of Artificial Intelligence 126 (November 2023): 106853. http://dx.doi.org/10.1016/j.engappai.2023.106853.
Zhu, Zheng-Mao, Shengyi Jiang, Yu-Ren Liu, Yang Yu, and Kun Zhang. "Invariant Action Effect Model for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9260–68. http://dx.doi.org/10.1609/aaai.v36i8.20913.
Zhou, Haoran, Junliang Lu, Ziyu Li, and Xinyi Zhang. "Study on whether marriage affects depression based on causal inference." Applied and Computational Engineering 6, no. 1 (June 14, 2023): 1661–72. http://dx.doi.org/10.54254/2755-2721/6/20230827.
Djeumou, Franck, Murat Cubuktepe, Craig Lennon, and Ufuk Topcu. "Task-Guided Inverse Reinforcement Learning under Partial Information." Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 53–61. http://dx.doi.org/10.1609/icaps.v32i1.19785.
Edmonds, Mark, Xiaojian Ma, Siyuan Qi, Yixin Zhu, Hongjing Lu, and Song-Chun Zhu. "Theory-Based Causal Transfer: Integrating Instance-Level Induction and Abstract-Level Structure Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1283–91. http://dx.doi.org/10.1609/aaai.v34i02.5483.
Wang, Yuchen, Mitsuhiro Hayashibe, and Dai Owaki. "Data-Driven Policy Learning Methods from Biological Behavior: A Systematic Review." Applied Sciences 14, no. 10 (May 9, 2024): 4038. http://dx.doi.org/10.3390/app14104038.
Barnby, Joseph M., Mitul A. Mehta, and Michael Moutoussis. "The computational relationship between reinforcement learning, social inference, and paranoia." PLOS Computational Biology 18, no. 7 (July 25, 2022): e1010326. http://dx.doi.org/10.1371/journal.pcbi.1010326.
Mokhtarian, Ehsan, Mohmmadsadegh Khorasani, Jalal Etesami, and Negar Kiyavash. "Novel Ordering-Based Approaches for Causal Structure Learning in the Presence of Unobserved Variables." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 12260–68. http://dx.doi.org/10.1609/aaai.v37i10.26445.
Yang, Chao-Han Huck, I.-Te Danny Hung, Yi Ouyang, and Pin-Yu Chen. "Training a Resilient Q-network against Observational Interference." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8814–22. http://dx.doi.org/10.1609/aaai.v36i8.20862.
Hasanah, Uswatun, Luluk Salimah Oktavia, and Putri Silaturrahmi. "INCREASING STUDENTS’ LEARNING INTEREST THROUGH BLENDED LEARNING IN THE EDUCATIONAL PSYCHOLOGY COURSE." JURNAL PAJAR (Pendidikan dan Pengajaran) 7, no. 1 (January 31, 2023): 181. http://dx.doi.org/10.33578/pjr.v7i1.9069.
Weissengruber, Sebastian, Sang Wan Lee, John P. O’Doherty, and Christian C. Ruff. "Neurostimulation Reveals Context-Dependent Arbitration Between Model-Based and Model-Free Reinforcement Learning." Cerebral Cortex 29, no. 11 (March 19, 2019): 4850–62. http://dx.doi.org/10.1093/cercor/bhz019.
Zhang, Yuzhu, and Hao Xu. "Reconfigurable-Intelligent-Surface-Enhanced Dynamic Resource Allocation for the Social Internet of Electric Vehicle Charging Networks with Causal-Structure-Based Reinforcement Learning." Future Internet 16, no. 5 (May 11, 2024): 165. http://dx.doi.org/10.3390/fi16050165.
Elder, Jacob, Tyler Davis, and Brent L. Hughes. "Learning About the Self: Motives for Coherence and Positivity Constrain Learning From Self-Relevant Social Feedback." Psychological Science 33, no. 4 (March 28, 2022): 629–47. http://dx.doi.org/10.1177/09567976211045934.
NISHINA, Kyosuke, and Shigeru FUJITA. "A World Model Reinforcement Learning Method That Is Not Distracted by Background Information by Using Representation Learning via Invariant Causal Mechanisms for Non-Contrastive Learning." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 36, no. 1 (February 15, 2024): 571–81. http://dx.doi.org/10.3156/jsoft.36.1_571.
Kawato, Mitsuo, and Aurelio Cortese. "From internal models toward metacognitive AI." Biological Cybernetics 115, no. 5 (October 2021): 415–30. http://dx.doi.org/10.1007/s00422-021-00904-7.
Liu, Xiuwen, Xinghua Lei, Xin Li, and Sirui Chen. "Self-Interested Coalitional Crowdsensing for Multi-Agent Interactive Environment Monitoring." Sensors 24, no. 2 (January 14, 2024): 509. http://dx.doi.org/10.3390/s24020509.
Syarah, Evi, Asdar Asdar, and Mas'ud Muhamadiyah. "Pengaruh Pemberian Penguatan Terhadap Motivasi Belajar Siswa Pada Mata Pelajaran Bahasa Indonesia Kelas V SDN Se-Kecamatan Suppa Kabupaten Pinrang." Bosowa Journal of Education 2, no. 1 (December 24, 2021): 33–39. http://dx.doi.org/10.35965/bje.v2i1.1178.
Wang, Zhicheng, Biwei Huang, Shikui Tu, Kun Zhang, and Lei Xu. "DeepTrader: A Deep Reinforcement Learning Approach for Risk-Return Balanced Portfolio Management with Market Conditions Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 643–50. http://dx.doi.org/10.1609/aaai.v35i1.16144.
Zhang, Xianjie, Yu Liu, Wenjun Li, and Chen Gong. "Pruning the Communication Bandwidth between Reinforcement Learning Agents through Causal Inference: An Innovative Approach to Designing a Smart Grid Power System." Sensors 22, no. 20 (October 13, 2022): 7785. http://dx.doi.org/10.3390/s22207785.
McMilin, Emily. "Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18778–88. http://dx.doi.org/10.1609/aaai.v38i17.29842.
Palacios Garay, Jessica Paola, Jorge Luis Escalante, Juan Carlos Chumacero Calle, Inocenta Marivel Cavarjal Bautista, Segundo Perez-Saavedra, and Jose Nieto-Gamboa. "Impact of Emotional Style on Academic Goals in Pandemic Times." International Journal of Higher Education 9, no. 9 (November 2, 2020): 21. http://dx.doi.org/10.5430/ijhe.v9n9p21.
Shen, Lingdong, Chunlei Huo, Nuo Xu, Chaowei Han, and Zichen Wang. "Learn How to See: Collaborative Embodied Learning for Object Detection and Camera Adjusting." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4793–801. http://dx.doi.org/10.1609/aaai.v38i5.28281.
van der Oord, Saskia, and Gail Tripp. "How to Improve Behavioral Parent and Teacher Training for Children with ADHD: Integrating Empirical Research on Learning and Motivation into Treatment." Clinical Child and Family Psychology Review 23, no. 4 (September 24, 2020): 577–604. http://dx.doi.org/10.1007/s10567-020-00327-z.