Journal articles on the topic "Causal reinforcement learning"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles on the topic "Causal reinforcement learning".
Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf file and read the online annotation of the work, if the relevant parameters are provided in its metadata.
Browse journal articles from a wide range of disciplines and compile an accurate bibliography.
Madumal, Prashan, Tim Miller, Liz Sonenberg, and Frank Vetere. "Explainable Reinforcement Learning through a Causal Lens". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2493–500. http://dx.doi.org/10.1609/aaai.v34i03.5631.
Li, Dezhi, Yunjun Lu, Jianping Wu, Wenlu Zhou, and Guangjun Zeng. "Causal Reinforcement Learning for Knowledge Graph Reasoning". Applied Sciences 14, no. 6 (March 15, 2024): 2498. http://dx.doi.org/10.3390/app14062498.
Yang, Dezhi, Guoxian Yu, Jun Wang, Zhengtian Wu, and Maozu Guo. "Reinforcement Causal Structure Learning on Order Graph". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10737–44. http://dx.doi.org/10.1609/aaai.v37i9.26274.
Madumal, Prashan. "Explainable Agency in Reinforcement Learning Agents". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13724–25. http://dx.doi.org/10.1609/aaai.v34i10.7134.
Herlau, Tue, and Rasmus Larsen. "Reinforcement Learning of Causal Variables Using Mediation Analysis". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6910–17. http://dx.doi.org/10.1609/aaai.v36i6.20648.
Duong, Tri Dung, Qian Li, and Guandong Xu. "Stochastic intervention for causal inference via reinforcement learning". Neurocomputing 482 (April 2022): 40–49. http://dx.doi.org/10.1016/j.neucom.2022.01.086.
Zhang, Wei, Xuesong Wang, Haoyu Wang, and Yuhu Cheng. "Causal Meta-Reinforcement Learning for Multimodal Remote Sensing Data Classification". Remote Sensing 16, no. 6 (March 16, 2024): 1055. http://dx.doi.org/10.3390/rs16061055.
Veselic, Sebastijan, Gerhard Jocham, Christian Gausterer, Bernhard Wagner, Miriam Ernhoefer-Reßler, Rupert Lanzenberger, Christoph Eisenegger, Claus Lamm, and Annabel Losecaat Vermeer. "A causal role of estradiol in human reinforcement learning". Hormones and Behavior 134 (August 2021): 105022. http://dx.doi.org/10.1016/j.yhbeh.2021.105022.
Zhou, Zhengyuan, Michael Bloem, and Nicholas Bambos. "Infinite Time Horizon Maximum Causal Entropy Inverse Reinforcement Learning". IEEE Transactions on Automatic Control 63, no. 9 (September 2018): 2787–802. http://dx.doi.org/10.1109/tac.2017.2775960.
Wang, Zizhao, Caroline Wang, Xuesu Xiao, Yuke Zhu, and Peter Stone. "Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15778–86. http://dx.doi.org/10.1609/aaai.v38i14.29507.
Du, Xiao, Yutong Ye, Pengyu Zhang, Yaning Yang, Mingsong Chen, and Ting Wang. "Situation-Dependent Causal Influence-Based Cooperative Multi-Agent Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17362–70. http://dx.doi.org/10.1609/aaai.v38i16.29684.
Skalse, Joar, and Alessandro Abate. "Misspecification in Inverse Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15136–43. http://dx.doi.org/10.1609/aaai.v37i12.26766.
Buehner, Marc J., and Jon May. "Abolishing the effect of reinforcement delay on human causal learning". Quarterly Journal of Experimental Psychology Section B 57, no. 2b (April 2004): 179–91. http://dx.doi.org/10.1080/02724990344000123.
Yang, Shantian, Bo Yang, Zheng Zeng, and Zhongfeng Kang. "Causal inference multi-agent reinforcement learning for traffic signal control". Information Fusion 94 (June 2023): 243–56. http://dx.doi.org/10.1016/j.inffus.2023.02.009.
Mutti, Mirco, Riccardo De Santi, Emanuele Rossi, Juan Felipe Calderon, Michael Bronstein, and Marcello Restelli. "Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9251–59. http://dx.doi.org/10.1609/aaai.v37i8.26109.
Eka, Eka Madya, Yunyun Yudiana, and Komarudin. "Effect of reinceforcement on physical learning on motivation learning". Gladi: Jurnal Ilmu Keolahragaan 13, no. 1 (March 31, 2022): 41–46. http://dx.doi.org/10.21009/gjik.131.04.
Mehta, Neville, Soumya Ray, Prasad Tadepalli, and Thomas Dietterich. "Automatic Discovery and Transfer of Task Hierarchies in Reinforcement Learning". AI Magazine 32, no. 1 (March 16, 2011): 35. http://dx.doi.org/10.1609/aimag.v32i1.2342.
Valverde, Gabriel, David Quesada, Pedro Larrañaga, and Concha Bielza. "Causal reinforcement learning based on Bayesian networks applied to industrial settings". Engineering Applications of Artificial Intelligence 125 (October 2023): 106657. http://dx.doi.org/10.1016/j.engappai.2023.106657.
Sun, Yuewen, Erli Wang, Biwei Huang, Chaochao Lu, Lu Feng, Changyin Sun, and Kun Zhang. "ACAMDA: Improving Data Efficiency in Reinforcement Learning through Guided Counterfactual Data Augmentation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15193–201. http://dx.doi.org/10.1609/aaai.v38i14.29442.
Zhu, Liefeng, and Yongbiao Luo. "Application of Bayesian Networks and Reinforcement Learning in Intelligent Control Systems in Uncertain Environments". 電腦學刊 35, no. 2 (April 2024): 001–16. http://dx.doi.org/10.53106/199115992024043502001.
Buehner, Marc J., and Jon May. "Rethinking Temporal Contiguity and the Judgement of Causality: Effects of Prior Knowledge, Experience, and Reinforcement Procedure". Quarterly Journal of Experimental Psychology Section A 56, no. 5 (July 2003): 865–90. http://dx.doi.org/10.1080/02724980244000675.
Sanghvi, Navyata, Shinnosuke Usami, Mohit Sharma, Joachim Groeger, and Kris Kitani. "Inverse Reinforcement Learning with Explicit Policy Estimates". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9472–80. http://dx.doi.org/10.1609/aaai.v35i11.17141.
Agarwal, Anish. "Causal Inference for Social and Engineering Systems". ACM SIGMETRICS Performance Evaluation Review 50, no. 3 (December 30, 2022): 7–11. http://dx.doi.org/10.1145/3579342.3579345.
Gao, Haichuan, Tianren Zhang, Zhile Yang, Yuqing Guo, Jinsheng Ren, Shangqi Guo, and Feng Chen. "Fast Counterfactual Inference for History-Based Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7613–23. http://dx.doi.org/10.1609/aaai.v37i6.25924.
Martinez-Gil, Francisco, Miguel Lozano, Ignacio García-Fernández, Pau Romero, Dolors Serra, and Rafael Sebastián. "Using Inverse Reinforcement Learning with Real Trajectories to Get More Trustworthy Pedestrian Simulations". Mathematics 8, no. 9 (September 2, 2020): 1479. http://dx.doi.org/10.3390/math8091479.
Lee, Kyungjae, Sungjoon Choi, and Songhwai Oh. "Sparse Markov Decision Processes With Causal Sparse Tsallis Entropy Regularization for Reinforcement Learning". IEEE Robotics and Automation Letters 3, no. 3 (July 2018): 1466–73. http://dx.doi.org/10.1109/lra.2018.2800085.
Ghorbel, N., S.-A. Addouche, and A. El Mhamedi. "Forward management of spare parts stock shortages via causal reasoning using reinforcement learning". IFAC-PapersOnLine 48, no. 3 (2015): 1061–66. http://dx.doi.org/10.1016/j.ifacol.2015.06.224.
Nadim, Karim, Mohamed-Salah Ouali, Hakim Ghezzaz, and Ahmed Ragab. "Learn-to-supervise: Causal reinforcement learning for high-level control in industrial processes". Engineering Applications of Artificial Intelligence 126 (November 2023): 106853. http://dx.doi.org/10.1016/j.engappai.2023.106853.
Zhu, Zheng-Mao, Shengyi Jiang, Yu-Ren Liu, Yang Yu, and Kun Zhang. "Invariant Action Effect Model for Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9260–68. http://dx.doi.org/10.1609/aaai.v36i8.20913.
Zhou, Haoran, Junliang Lu, Ziyu Li, and Xinyi Zhang. "Study on whether marriage affects depression based on causal inference". Applied and Computational Engineering 6, no. 1 (June 14, 2023): 1661–72. http://dx.doi.org/10.54254/2755-2721/6/20230827.
Djeumou, Franck, Murat Cubuktepe, Craig Lennon, and Ufuk Topcu. "Task-Guided Inverse Reinforcement Learning under Partial Information". Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 53–61. http://dx.doi.org/10.1609/icaps.v32i1.19785.
Edmonds, Mark, Xiaojian Ma, Siyuan Qi, Yixin Zhu, Hongjing Lu, and Song-Chun Zhu. "Theory-Based Causal Transfer: Integrating Instance-Level Induction and Abstract-Level Structure Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1283–91. http://dx.doi.org/10.1609/aaai.v34i02.5483.
Wang, Yuchen, Mitsuhiro Hayashibe, and Dai Owaki. "Data-Driven Policy Learning Methods from Biological Behavior: A Systematic Review". Applied Sciences 14, no. 10 (May 9, 2024): 4038. http://dx.doi.org/10.3390/app14104038.
Barnby, Joseph M., Mitul A. Mehta, and Michael Moutoussis. "The computational relationship between reinforcement learning, social inference, and paranoia". PLOS Computational Biology 18, no. 7 (July 25, 2022): e1010326. http://dx.doi.org/10.1371/journal.pcbi.1010326.
Mokhtarian, Ehsan, Mohmmadsadegh Khorasani, Jalal Etesami, and Negar Kiyavash. "Novel Ordering-Based Approaches for Causal Structure Learning in the Presence of Unobserved Variables". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 12260–68. http://dx.doi.org/10.1609/aaai.v37i10.26445.
Yang, Chao-Han Huck, I.-Te Danny Hung, Yi Ouyang, and Pin-Yu Chen. "Training a Resilient Q-network against Observational Interference". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8814–22. http://dx.doi.org/10.1609/aaai.v36i8.20862.
Hasanah, Uswatun, Luluk Salimah Oktavia, and Putri Silaturrahmi. "INCREASING STUDENTS’ LEARNING INTEREST THROUGH BLENDED LEARNING IN THE EDUCATIONAL PSYCHOLOGY COURSE". JURNAL PAJAR (Pendidikan dan Pengajaran) 7, no. 1 (January 31, 2023): 181. http://dx.doi.org/10.33578/pjr.v7i1.9069.
Weissengruber, Sebastian, Sang Wan Lee, John P. O’Doherty, and Christian C. Ruff. "Neurostimulation Reveals Context-Dependent Arbitration Between Model-Based and Model-Free Reinforcement Learning". Cerebral Cortex 29, no. 11 (March 19, 2019): 4850–62. http://dx.doi.org/10.1093/cercor/bhz019.
Zhang, Yuzhu, and Hao Xu. "Reconfigurable-Intelligent-Surface-Enhanced Dynamic Resource Allocation for the Social Internet of Electric Vehicle Charging Networks with Causal-Structure-Based Reinforcement Learning". Future Internet 16, no. 5 (May 11, 2024): 165. http://dx.doi.org/10.3390/fi16050165.
Elder, Jacob, Tyler Davis, and Brent L. Hughes. "Learning About the Self: Motives for Coherence and Positivity Constrain Learning From Self-Relevant Social Feedback". Psychological Science 33, no. 4 (March 28, 2022): 629–47. http://dx.doi.org/10.1177/09567976211045934.
NISHINA, Kyosuke, and Shigeru FUJITA. "A World Model Reinforcement Learning Method That Is Not Distracted by Background Information by Using Representation Learning via Invariant Causal Mechanisms for Non-Contrastive Learning". Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 36, no. 1 (February 15, 2024): 571–81. http://dx.doi.org/10.3156/jsoft.36.1_571.
Kawato, Mitsuo, and Aurelio Cortese. "From internal models toward metacognitive AI". Biological Cybernetics 115, no. 5 (October 2021): 415–30. http://dx.doi.org/10.1007/s00422-021-00904-7.
Liu, Xiuwen, Xinghua Lei, Xin Li, and Sirui Chen. "Self-Interested Coalitional Crowdsensing for Multi-Agent Interactive Environment Monitoring". Sensors 24, no. 2 (January 14, 2024): 509. http://dx.doi.org/10.3390/s24020509.
Syarah, Evi, Asdar Asdar, and Mas'ud Muhamadiyah. "Pengaruh Pemberian Penguatan Terhadap Motivasi Belajar Siswa Pada Mata Pelajaran Bahasa Indonesia Kelas V SDN Se-Kecamatan Suppa Kabupaten Pinrang". Bosowa Journal of Education 2, no. 1 (December 24, 2021): 33–39. http://dx.doi.org/10.35965/bje.v2i1.1178.
Wang, Zhicheng, Biwei Huang, Shikui Tu, Kun Zhang, and Lei Xu. "DeepTrader: A Deep Reinforcement Learning Approach for Risk-Return Balanced Portfolio Management with Market Conditions Embedding". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 643–50. http://dx.doi.org/10.1609/aaai.v35i1.16144.
Zhang, Xianjie, Yu Liu, Wenjun Li, and Chen Gong. "Pruning the Communication Bandwidth between Reinforcement Learning Agents through Causal Inference: An Innovative Approach to Designing a Smart Grid Power System". Sensors 22, no. 20 (October 13, 2022): 7785. http://dx.doi.org/10.3390/s22207785.
McMilin, Emily. "Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18778–88. http://dx.doi.org/10.1609/aaai.v38i17.29842.
Palacios Garay, Jessica Paola, Jorge Luis Escalante, Juan Carlos Chumacero Calle, Inocenta Marivel Cavarjal Bautista, Segundo Perez-Saavedra, and Jose Nieto-Gamboa. "Impact of Emotional Style on Academic Goals in Pandemic Times". International Journal of Higher Education 9, no. 9 (November 2, 2020): 21. http://dx.doi.org/10.5430/ijhe.v9n9p21.
Shen, Lingdong, Chunlei Huo, Nuo Xu, Chaowei Han, and Zichen Wang. "Learn How to See: Collaborative Embodied Learning for Object Detection and Camera Adjusting". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4793–801. http://dx.doi.org/10.1609/aaai.v38i5.28281.
van der Oord, Saskia, and Gail Tripp. "How to Improve Behavioral Parent and Teacher Training for Children with ADHD: Integrating Empirical Research on Learning and Motivation into Treatment". Clinical Child and Family Psychology Review 23, no. 4 (September 24, 2020): 577–604. http://dx.doi.org/10.1007/s10567-020-00327-z.