Journal articles on the topic "Causal reinforcement learning"
Create a precise reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Causal reinforcement learning".
Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.
Madumal, Prashan, Tim Miller, Liz Sonenberg, and Frank Vetere. "Explainable Reinforcement Learning through a Causal Lens". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2493–500. http://dx.doi.org/10.1609/aaai.v34i03.5631.
Li, Dezhi, Yunjun Lu, Jianping Wu, Wenlu Zhou, and Guangjun Zeng. "Causal Reinforcement Learning for Knowledge Graph Reasoning". Applied Sciences 14, no. 6 (March 15, 2024): 2498. http://dx.doi.org/10.3390/app14062498.
Yang, Dezhi, Guoxian Yu, Jun Wang, Zhengtian Wu, and Maozu Guo. "Reinforcement Causal Structure Learning on Order Graph". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10737–44. http://dx.doi.org/10.1609/aaai.v37i9.26274.
Madumal, Prashan. "Explainable Agency in Reinforcement Learning Agents". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13724–25. http://dx.doi.org/10.1609/aaai.v34i10.7134.
Herlau, Tue, and Rasmus Larsen. "Reinforcement Learning of Causal Variables Using Mediation Analysis". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6910–17. http://dx.doi.org/10.1609/aaai.v36i6.20648.
Duong, Tri Dung, Qian Li, and Guandong Xu. "Stochastic intervention for causal inference via reinforcement learning". Neurocomputing 482 (April 2022): 40–49. http://dx.doi.org/10.1016/j.neucom.2022.01.086.
Zhang, Wei, Xuesong Wang, Haoyu Wang, and Yuhu Cheng. "Causal Meta-Reinforcement Learning for Multimodal Remote Sensing Data Classification". Remote Sensing 16, no. 6 (March 16, 2024): 1055. http://dx.doi.org/10.3390/rs16061055.
Veselic, Sebastijan, Gerhard Jocham, Christian Gausterer, Bernhard Wagner, Miriam Ernhoefer-Reßler, Rupert Lanzenberger, Christoph Eisenegger, Claus Lamm, and Annabel Losecaat Vermeer. "A causal role of estradiol in human reinforcement learning". Hormones and Behavior 134 (August 2021): 105022. http://dx.doi.org/10.1016/j.yhbeh.2021.105022.
Zhou, Zhengyuan, Michael Bloem, and Nicholas Bambos. "Infinite Time Horizon Maximum Causal Entropy Inverse Reinforcement Learning". IEEE Transactions on Automatic Control 63, no. 9 (September 2018): 2787–802. http://dx.doi.org/10.1109/tac.2017.2775960.
Wang, Zizhao, Caroline Wang, Xuesu Xiao, Yuke Zhu, and Peter Stone. "Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15778–86. http://dx.doi.org/10.1609/aaai.v38i14.29507.
Du, Xiao, Yutong Ye, Pengyu Zhang, Yaning Yang, Mingsong Chen, and Ting Wang. "Situation-Dependent Causal Influence-Based Cooperative Multi-Agent Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17362–70. http://dx.doi.org/10.1609/aaai.v38i16.29684.
Skalse, Joar, and Alessandro Abate. "Misspecification in Inverse Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15136–43. http://dx.doi.org/10.1609/aaai.v37i12.26766.
Buehner, Marc J., and Jon May. "Abolishing the effect of reinforcement delay on human causal learning". Quarterly Journal of Experimental Psychology Section B 57, no. 2b (April 2004): 179–91. http://dx.doi.org/10.1080/02724990344000123.
Yang, Shantian, Bo Yang, Zheng Zeng, and Zhongfeng Kang. "Causal inference multi-agent reinforcement learning for traffic signal control". Information Fusion 94 (June 2023): 243–56. http://dx.doi.org/10.1016/j.inffus.2023.02.009.
Mutti, Mirco, Riccardo De Santi, Emanuele Rossi, Juan Felipe Calderon, Michael Bronstein, and Marcello Restelli. "Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9251–59. http://dx.doi.org/10.1609/aaai.v37i8.26109.
Eka, Eka Madya, Yunyun Yudiana, and Komarudin. "Effect of reinceforcement on physical learning on motivation learning". Gladi: Jurnal Ilmu Keolahragaan 13, no. 1 (March 31, 2022): 41–46. http://dx.doi.org/10.21009/gjik.131.04.
Mehta, Neville, Soumya Ray, Prasad Tadepalli, and Thomas Dietterich. "Automatic Discovery and Transfer of Task Hierarchies in Reinforcement Learning". AI Magazine 32, no. 1 (March 16, 2011): 35. http://dx.doi.org/10.1609/aimag.v32i1.2342.
Valverde, Gabriel, David Quesada, Pedro Larrañaga, and Concha Bielza. "Causal reinforcement learning based on Bayesian networks applied to industrial settings". Engineering Applications of Artificial Intelligence 125 (October 2023): 106657. http://dx.doi.org/10.1016/j.engappai.2023.106657.
Sun, Yuewen, Erli Wang, Biwei Huang, Chaochao Lu, Lu Feng, Changyin Sun, and Kun Zhang. "ACAMDA: Improving Data Efficiency in Reinforcement Learning through Guided Counterfactual Data Augmentation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15193–201. http://dx.doi.org/10.1609/aaai.v38i14.29442.
Zhu, Liefeng, and Yongbiao Luo. "Application of Bayesian Networks and Reinforcement Learning in Intelligent Control Systems in Uncertain Environments". 電腦學刊 35, no. 2 (April 2024): 001–16. http://dx.doi.org/10.53106/199115992024043502001.
Buehner, Marc J., and Jon May. "Rethinking Temporal Contiguity and the Judgement of Causality: Effects of Prior Knowledge, Experience, and Reinforcement Procedure". Quarterly Journal of Experimental Psychology Section A 56, no. 5 (July 2003): 865–90. http://dx.doi.org/10.1080/02724980244000675.
Sanghvi, Navyata, Shinnosuke Usami, Mohit Sharma, Joachim Groeger, and Kris Kitani. "Inverse Reinforcement Learning with Explicit Policy Estimates". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9472–80. http://dx.doi.org/10.1609/aaai.v35i11.17141.
Agarwal, Anish. "Causal Inference for Social and Engineering Systems". ACM SIGMETRICS Performance Evaluation Review 50, no. 3 (December 30, 2022): 7–11. http://dx.doi.org/10.1145/3579342.3579345.
Gao, Haichuan, Tianren Zhang, Zhile Yang, Yuqing Guo, Jinsheng Ren, Shangqi Guo, and Feng Chen. "Fast Counterfactual Inference for History-Based Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7613–23. http://dx.doi.org/10.1609/aaai.v37i6.25924.
Martinez-Gil, Francisco, Miguel Lozano, Ignacio García-Fernández, Pau Romero, Dolors Serra, and Rafael Sebastián. "Using Inverse Reinforcement Learning with Real Trajectories to Get More Trustworthy Pedestrian Simulations". Mathematics 8, no. 9 (September 2, 2020): 1479. http://dx.doi.org/10.3390/math8091479.
Lee, Kyungjae, Sungjoon Choi, and Songhwai Oh. "Sparse Markov Decision Processes With Causal Sparse Tsallis Entropy Regularization for Reinforcement Learning". IEEE Robotics and Automation Letters 3, no. 3 (July 2018): 1466–73. http://dx.doi.org/10.1109/lra.2018.2800085.
Ghorbel, N., S.-A. Addouche, and A. El Mhamedi. "Forward management of spare parts stock shortages via causal reasoning using reinforcement learning". IFAC-PapersOnLine 48, no. 3 (2015): 1061–66. http://dx.doi.org/10.1016/j.ifacol.2015.06.224.
Nadim, Karim, Mohamed-Salah Ouali, Hakim Ghezzaz, and Ahmed Ragab. "Learn-to-supervise: Causal reinforcement learning for high-level control in industrial processes". Engineering Applications of Artificial Intelligence 126 (November 2023): 106853. http://dx.doi.org/10.1016/j.engappai.2023.106853.
Zhu, Zheng-Mao, Shengyi Jiang, Yu-Ren Liu, Yang Yu, and Kun Zhang. "Invariant Action Effect Model for Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9260–68. http://dx.doi.org/10.1609/aaai.v36i8.20913.
Zhou, Haoran, Junliang Lu, Ziyu Li, and Xinyi Zhang. "Study on whether marriage affects depression based on causal inference". Applied and Computational Engineering 6, no. 1 (June 14, 2023): 1661–72. http://dx.doi.org/10.54254/2755-2721/6/20230827.
Djeumou, Franck, Murat Cubuktepe, Craig Lennon, and Ufuk Topcu. "Task-Guided Inverse Reinforcement Learning under Partial Information". Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 53–61. http://dx.doi.org/10.1609/icaps.v32i1.19785.
Edmonds, Mark, Xiaojian Ma, Siyuan Qi, Yixin Zhu, Hongjing Lu, and Song-Chun Zhu. "Theory-Based Causal Transfer: Integrating Instance-Level Induction and Abstract-Level Structure Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1283–91. http://dx.doi.org/10.1609/aaai.v34i02.5483.
Wang, Yuchen, Mitsuhiro Hayashibe, and Dai Owaki. "Data-Driven Policy Learning Methods from Biological Behavior: A Systematic Review". Applied Sciences 14, no. 10 (May 9, 2024): 4038. http://dx.doi.org/10.3390/app14104038.
Barnby, Joseph M., Mitul A. Mehta, and Michael Moutoussis. "The computational relationship between reinforcement learning, social inference, and paranoia". PLOS Computational Biology 18, no. 7 (July 25, 2022): e1010326. http://dx.doi.org/10.1371/journal.pcbi.1010326.
Mokhtarian, Ehsan, Mohmmadsadegh Khorasani, Jalal Etesami, and Negar Kiyavash. "Novel Ordering-Based Approaches for Causal Structure Learning in the Presence of Unobserved Variables". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 12260–68. http://dx.doi.org/10.1609/aaai.v37i10.26445.
Yang, Chao-Han Huck, I.-Te Danny Hung, Yi Ouyang, and Pin-Yu Chen. "Training a Resilient Q-network against Observational Interference". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8814–22. http://dx.doi.org/10.1609/aaai.v36i8.20862.
Hasanah, Uswatun, Luluk Salimah Oktavia, and Putri Silaturrahmi. "INCREASING STUDENTS’ LEARNING INTEREST THROUGH BLENDED LEARNING IN THE EDUCATIONAL PSYCHOLOGY COURSE". JURNAL PAJAR (Pendidikan dan Pengajaran) 7, no. 1 (January 31, 2023): 181. http://dx.doi.org/10.33578/pjr.v7i1.9069.
Weissengruber, Sebastian, Sang Wan Lee, John P. O’Doherty, and Christian C. Ruff. "Neurostimulation Reveals Context-Dependent Arbitration Between Model-Based and Model-Free Reinforcement Learning". Cerebral Cortex 29, no. 11 (March 19, 2019): 4850–62. http://dx.doi.org/10.1093/cercor/bhz019.
Zhang, Yuzhu, and Hao Xu. "Reconfigurable-Intelligent-Surface-Enhanced Dynamic Resource Allocation for the Social Internet of Electric Vehicle Charging Networks with Causal-Structure-Based Reinforcement Learning". Future Internet 16, no. 5 (May 11, 2024): 165. http://dx.doi.org/10.3390/fi16050165.
Elder, Jacob, Tyler Davis, and Brent L. Hughes. "Learning About the Self: Motives for Coherence and Positivity Constrain Learning From Self-Relevant Social Feedback". Psychological Science 33, no. 4 (March 28, 2022): 629–47. http://dx.doi.org/10.1177/09567976211045934.
NISHINA, Kyosuke, and Shigeru FUJITA. "A World Model Reinforcement Learning Method That Is Not Distracted by Background Information by Using Representation Learning via Invariant Causal Mechanisms for Non-Contrastive Learning". Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 36, no. 1 (February 15, 2024): 571–81. http://dx.doi.org/10.3156/jsoft.36.1_571.
Kawato, Mitsuo, and Aurelio Cortese. "From internal models toward metacognitive AI". Biological Cybernetics 115, no. 5 (October 2021): 415–30. http://dx.doi.org/10.1007/s00422-021-00904-7.
Liu, Xiuwen, Xinghua Lei, Xin Li, and Sirui Chen. "Self-Interested Coalitional Crowdsensing for Multi-Agent Interactive Environment Monitoring". Sensors 24, no. 2 (January 14, 2024): 509. http://dx.doi.org/10.3390/s24020509.
Syarah, Evi, Asdar Asdar, and Mas'ud Muhamadiyah. "Pengaruh Pemberian Penguatan Terhadap Motivasi Belajar Siswa Pada Mata Pelajaran Bahasa Indonesia Kelas V SDN Se-Kecamatan Suppa Kabupaten Pinrang". Bosowa Journal of Education 2, no. 1 (December 24, 2021): 33–39. http://dx.doi.org/10.35965/bje.v2i1.1178.
Wang, Zhicheng, Biwei Huang, Shikui Tu, Kun Zhang, and Lei Xu. "DeepTrader: A Deep Reinforcement Learning Approach for Risk-Return Balanced Portfolio Management with Market Conditions Embedding". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 643–50. http://dx.doi.org/10.1609/aaai.v35i1.16144.
Zhang, Xianjie, Yu Liu, Wenjun Li, and Chen Gong. "Pruning the Communication Bandwidth between Reinforcement Learning Agents through Causal Inference: An Innovative Approach to Designing a Smart Grid Power System". Sensors 22, no. 20 (October 13, 2022): 7785. http://dx.doi.org/10.3390/s22207785.
McMilin, Emily. "Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18778–88. http://dx.doi.org/10.1609/aaai.v38i17.29842.
Palacios Garay, Jessica Paola, Jorge Luis Escalante, Juan Carlos Chumacero Calle, Inocenta Marivel Cavarjal Bautista, Segundo Perez-Saavedra, and Jose Nieto-Gamboa. "Impact of Emotional Style on Academic Goals in Pandemic Times". International Journal of Higher Education 9, no. 9 (November 2, 2020): 21. http://dx.doi.org/10.5430/ijhe.v9n9p21.
Shen, Lingdong, Chunlei Huo, Nuo Xu, Chaowei Han, and Zichen Wang. "Learn How to See: Collaborative Embodied Learning for Object Detection and Camera Adjusting". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4793–801. http://dx.doi.org/10.1609/aaai.v38i5.28281.
van der Oord, Saskia, and Gail Tripp. "How to Improve Behavioral Parent and Teacher Training for Children with ADHD: Integrating Empirical Research on Learning and Motivation into Treatment". Clinical Child and Family Psychology Review 23, no. 4 (September 24, 2020): 577–604. http://dx.doi.org/10.1007/s10567-020-00327-z.