Academic literature on the topic "State representation learning"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "State representation learning".
Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "State representation learning"
Xu, Cai, Wei Zhao, Jinglong Zhao, Ziyu Guan, Yaming Yang, Long Chen, and Xiangyu Song. "Progressive Deep Multi-View Comprehensive Representation Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10557–65. http://dx.doi.org/10.1609/aaai.v37i9.26254.
Yue, Yang, Bingyi Kang, Zhongwen Xu, Gao Huang, and Shuicheng Yan. "Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11069–77. http://dx.doi.org/10.1609/aaai.v37i9.26311.
de Bruin, Tim, Jens Kober, Karl Tuyls, and Robert Babuska. "Integrating State Representation Learning Into Deep Reinforcement Learning". IEEE Robotics and Automation Letters 3, no. 3 (July 2018): 1394–401. http://dx.doi.org/10.1109/lra.2018.2800101.
Chen, Haoqiang, Yadong Liu, Zongtan Zhou, and Ming Zhang. "A2C: Attention-Augmented Contrastive Learning for State Representation Extraction". Applied Sciences 10, no. 17 (August 26, 2020): 5902. http://dx.doi.org/10.3390/app10175902.
Ong, Sylvie, Yuri Grinberg, and Joelle Pineau. "Mixed Observability Predictive State Representations". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 746–52. http://dx.doi.org/10.1609/aaai.v27i1.8680.
Maier, Marc, Brian Taylor, Huseyin Oktay, and David Jensen. "Learning Causal Models of Relational Domains". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 531–38. http://dx.doi.org/10.1609/aaai.v24i1.7695.
Lesort, Timothée, Natalia Díaz-Rodríguez, Jean-François Goudou, and David Filliat. "State representation learning for control: An overview". Neural Networks 108 (December 2018): 379–92. http://dx.doi.org/10.1016/j.neunet.2018.07.006.
Chornozhuk, S. "The New Geometric “State-Action” Space Representation for Q-Learning Algorithm for Protein Structure Folding Problem". Cybernetics and Computer Technologies, no. 3 (October 27, 2020): 59–73. http://dx.doi.org/10.34229/2707-451x.20.3.6.
Zhang, Yujia, Lai-Man Po, Xuyuan Xu, Mengyang Liu, Yexin Wang, Weifeng Ou, Yuzhi Zhao, and Wing-Yin Yu. "Contrastive Spatio-Temporal Pretext Learning for Self-Supervised Video Representation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3380–89. http://dx.doi.org/10.1609/aaai.v36i3.20248.
Li, Dongfen, Lichao Meng, Jingjing Li, Ke Lu, and Yang Yang. "Domain adaptive state representation alignment for reinforcement learning". Information Sciences 609 (September 2022): 1353–68. http://dx.doi.org/10.1016/j.ins.2022.07.156.
Texto completoTesis sobre el tema "State representation learning"
Nuzzo, Francesco. "Unsupervised state representation pretraining in Reinforcement Learning applied to Atari games". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288189.
State representation learning is about extracting useful features from the observations received by an agent that interacts with an environment in reinforcement learning. These features enable the agent to exploit a low-dimensional, informative representation to solve tasks more efficiently. In this work, we study unsupervised learning in Atari games. We use an RNN architecture to learn features that depend on sequences of observations, and pretrain a single-frame encoder architecture with different methods on randomly collected frames. Finally, we evaluate empirically how pretrained state representations perform compared with a randomly initialized architecture. To this end, we train an RL agent on 22 different Atari 2600 games, initializing the encoder either randomly or with one of the following unsupervised methods: VAE, CPC, and ST-DIM. Promising results are achieved in most games when ST-DIM is chosen as the pretraining method, while VAE often performs worse than random initialization.
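To make the pipeline described in this abstract concrete, the sketch below shows how a frame encoder might be pretrained without supervision on randomly collected Atari frames and then reused to initialize an RL agent's feature extractor. It is a minimal PyTorch sketch using a VAE objective (one of the three methods compared above); the architecture, class names, and hyperparameters are illustrative assumptions, not code from the thesis.

```python
# Sketch: unsupervised VAE pretraining of a single-frame encoder whose weights
# can later initialize an RL agent's feature extractor. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameEncoder(nn.Module):
    """Maps an 84x84 grayscale frame to a latent Gaussian (mu, logvar)."""
    def __init__(self, in_channels=1, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 7 * 7, latent_dim)
        self.logvar = nn.Linear(64 * 7 * 7, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.mu(h), self.logvar(h)

class FrameDecoder(nn.Module):
    """Reconstructs frames from latent codes (used only during pretraining)."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 7 * 7)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 8, stride=4), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 7, 7)
        return self.deconv(h)

def vae_loss(x, x_rec, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.binary_cross_entropy(x_rec, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

encoder, decoder = FrameEncoder(), FrameDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

def pretrain_step(frames):  # frames: (B, 1, 84, 84) tensor with values in [0, 1]
    mu, logvar = encoder(frames)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
    loss = vae_loss(frames, decoder(z), mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# After pretraining on randomly collected frames, the encoder weights (rather than a
# random initialization) can be copied into the RL agent's convolutional feature extractor.
```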
Sadeghi, Mohsen. "Representation and interaction of sensorimotor learning processes". Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278611.
Merckling, Astrid. "Unsupervised pretraining of state representations in a rewardless environment". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS141.
Texto completoThis thesis seeks to extend the capabilities of state representation learning (SRL) to help scale deep reinforcement learning (DRL) algorithms to continuous control tasks with high-dimensional sensory observations (such as images). SRL allows to improve the performance of DRL by providing it with better inputs than the input embeddings learned from scratch with end-to-end strategies. Specifically, this thesis addresses the problem of performing state estimation in the manner of deep unsupervised pretraining of state representations without reward. These representations must verify certain properties to allow for the correct application of bootstrapping and other decision making mechanisms common to supervised learning, such as being low-dimensional and guaranteeing the local consistency and topology (or connectivity) of the environment, which we will seek to achieve through the models pretrained with the two SRL algorithms proposed in this thesis
Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks". BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.
Boots, Byron. "Spectral Approaches to Learning Predictive Representations". Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/131.
Gabriel, Florence. "Mental representations of fractions: development, stable state, learning difficulties and intervention". Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209933.
Based on some recent research questions and intense debates in the literature, a first behavioural study examined the mental representations of the magnitude of fractions in educated adults. Behavioural observations from adults can indeed provide a first clue to explain the paradox raised by fractions. Contrary perhaps to most educated adults’ intuition, finding the value of a given fraction is not an easy operation. Fractions are complex symbols, and there is an on-going debate in the literature about how their magnitude (i.e. value) is processed. In a first study, we asked adult volunteers to decide as quickly as possible whether two fractions represent the same magnitude or not. Equivalent fractions (e.g. 1/4 and 2/8) were identified as representing the same number only about half of the time. In another experiment, adults were also asked to decide which of two fractions was larger. This paradigm offered different results, suggesting that participants relied on both the global magnitude of the fraction and the magnitude of the components. Our results showed that fraction processing depends on experimental conditions. Adults appear to use the global magnitude only in restricted circumstances, mostly with easy and familiar fractions.
In another study, we investigated the development of the mental representations of the magnitude of fractions. Previous studies in adults showed that fraction processing can be either based on the magnitude of the numerators and denominators or based on the global magnitude of fractions and the magnitude of their components. The type of processing depends on experimental conditions. In this experiment, 5th, 6th, 7th-graders, and adults were tested with two paradigms. First, they performed a same/different task. Second, they carried out a numerical comparison task in which they had to decide which of two fractions was larger. Results showed that 5th-graders do not rely on the representations of the global magnitude of fractions in the Numerical Comparison task, but those representations develop from grade 6 until grade 7. In the Same/Different task, participants only relied on componential strategies. From grade 6 on, pupils apply the same heuristics as adults in fraction magnitude comparison tasks. Moreover, we have shown that correlations between the global distance effect and children’s general fraction achievement were significant.
Fractions are well known to represent a stumbling block for primary school children. In a third study, we tried to identify the difficulties encountered by primary school pupils. We observed that most 4th and 5th-graders had only a very limited notion of the meaning of fractions, basically referring to pieces of cakes or pizzas. The fraction as a notation for numbers appeared particularly hard to grasp.
Building upon these results, we designed an intervention programme. The intervention “From Pies to Numbers” aimed at improving children’s understanding of fractions as numbers. The intervention was based on various games in which children had to estimate, compare, and combine fractions represented either symbolically or as figures. 20 game sessions distributed over 3 months led to 15-20% improvement in tests assessing children's capacity to estimate and compare fractions; conversely, children in the control group who received traditional lessons improved more in procedural skills such as simplification of fractions and arithmetic operations with fractions. Thus, a short classroom intervention inducing children to play with fractions improved their conceptual understanding.
The results are discussed in light of recent research on the mental representation of the magnitude of fractions and educational theories. The importance of multidisciplinary approaches in psychology and education is also discussed.
In sum, by combining behavioural experiments in adults and children with intervention studies, we hope to have improved the understanding of how the brain processes mathematical symbols, while helping teachers get a better grasp of pupils’ difficulties and develop classroom activities that suit the needs of learners.
Doctorat en Sciences Psychologiques et de l'éducation
Shi, Fangzhou. "Towards Molecule Generation with Heterogeneous States via Reinforcement Learning". Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/22335.
Ford, Shelton J. "The effect of graphing calculators and a three-core representation curriculum on college students' learning of exponential and logarithmic functions". 2008. http://www.lib.ncsu.edu/theses/available/etd-11072008-135009/unrestricted/etd.pdf.
Allen, Heather. "Experiencing literature – learning from experience: the application of neuroscience to literary analysis by example of representations of German colonialism in Uwe Timm’s Morenga". 2011. http://hdl.handle.net/1993/4862.
Texto completoStasko, Carly. "A Pedagogy of Holistic Media Literacy: Reflections on Culture Jamming as Transformative Learning and Healing". Thesis, 2009. http://hdl.handle.net/1807/18109.
Texto completoLibros sobre el tema "State representation learning"
Bositis, David A., and Joint Center for Political and Economic Studies (U.S.), eds. Redistricting and minority representation: Learning from the past, preparing for the future. Washington, D.C.: Joint Center for Political and Economic Studies, 1998.
McBride, Kecia Driver, 1966-, ed. Visual media and the humanities: A pedagogy of representation. Knoxville: University of Tennessee Press, 2004.
Buscar texto completoBurge, Tyler. Perception: First Form of Mind. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198871002.001.0001.
Texto completoBoden, Margaret A. 2. General intelligence as the Holy Grail. Oxford University Press, 2018. http://dx.doi.org/10.1093/actrade/9780199602919.003.0002.
Texto completoAlden, John, Alexander H. Cohen y Jonathan J. Ring. Gaming the System: Nine Games to Teach American Government Through Active Learning. Taylor & Francis Group, 2019.
Buscar texto completoGinsburg, Herbert P., Rachael Labrecque, Kara Carpenter y Dana Pagar. New Possibilities for Early Mathematics Education. Editado por Roi Cohen Kadosh y Ann Dowker. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199642342.013.029.
Texto completoRueschemeyer, Shirley-Ann y M. Gareth Gaskell, eds. The Oxford Handbook of Psycholinguistics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198786825.001.0001.
Texto completoPapafragou, Anna, John C. Trueswell y Lila R. Gleitman, eds. The Oxford Handbook of the Mental Lexicon. Oxford University Press, 2022. http://dx.doi.org/10.1093/oxfordhb/9780198845003.001.0001.
Texto completoCaselli, Tommaso, Eduard Hovy, Martha Palmer y Piek Vossen, eds. Computational Analysis of Storylines. Cambridge University Press, 2021. http://dx.doi.org/10.1017/9781108854221.
Texto completoCapítulos de libros sobre el tema "State representation learning"
Merckling, Astrid, Alexandre Coninx, Loic Cressot, Stephane Doncieux, and Nicolas Perrin. "State Representation Learning from Demonstration". In Machine Learning, Optimization, and Data Science, 304–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_26.
Steccanella, Lorenzo, and Anders Jonsson. "State Representation Learning for Goal-Conditioned Reinforcement Learning". In Machine Learning and Knowledge Discovery in Databases, 84–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_6.
Schestakov, Stefan, Paul Heinemeyer, and Elena Demidova. "Road Network Representation Learning with Vehicle Trajectories". In Advances in Knowledge Discovery and Data Mining, 57–69. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33383-5_5.
Sychev, Oleg. "Visualizing Program State as a Clustered Graph for Learning Programming". In Diagrammatic Representation and Inference, 404–7. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_41.
Hu, Dapeng, Xuesong Jiang, Xiumei Wei, and Jian Wang. "State Representation Learning for Minimax Deep Deterministic Policy Gradient". In Knowledge Science, Engineering and Management, 481–87. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29551-6_43.
Meden, Blaž, Abraham Prieto, Peter Peer, and Francisco Bellas. "First Steps Towards State Representation Learning for Cognitive Robotics". In Lecture Notes in Computer Science, 499–510. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61705-9_41.
Meng, Li, Morten Goodwin, Anis Yazidi, and Paal Engelstad. "Unsupervised State Representation Learning in Partially Observable Atari Games". In Computer Analysis of Images and Patterns, 212–22. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44240-7_21.
Servan-Schreiber, David, Axel Cleeremans, and James L. McClelland. "Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks". In Connectionist Approaches to Language Learning, 57–89. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-4008-3_4.
Ding, Ning, Weize Chen, Zhengyan Zhang, Shengding Hu, Ganqu Cui, Yuan Yao, Yujia Qin, et al. "Ten Key Problems of Pre-trained Models: An Outlook of Representation Learning". In Representation Learning for Natural Language Processing, 491–521. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_14.
Li, Zhipeng, and Xuesong Jiang. "State Representation Learning for Multi-agent Deep Deterministic Policy Gradient". In Proceedings of the Fifth Euro-China Conference on Intelligent Data Analysis and Applications, 667–75. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03766-6_75.
Texto completoActas de conferencias sobre el tema "State representation learning"
Zhao, Jian, Wengang Zhou, Tianyu Zhao, Yun Zhou, and Houqiang Li. "State Representation Learning For Effective Deep Reinforcement Learning". In 2020 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2020. http://dx.doi.org/10.1109/icme46284.2020.9102924.
Nozawa, Kento, and Issei Sato. "Evaluation Methods for Representation Learning: A Survey". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/776.
Zhu, Hanhua. "Generalized Representation Learning Methods for Deep Reinforcement Learning". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/748.
Bai, Yang, Min Cao, Daming Gao, Ziqiang Cao, Chen Chen, Zhenfeng Fan, Liqiang Nie, and Min Zhang. "RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/62.
Stork, Johannes A., Carl Henrik Ek, Yasemin Bekiroglu, and Danica Kragic. "Learning Predictive State Representation for in-hand manipulation". In 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015. http://dx.doi.org/10.1109/icra.2015.7139641.
Munk, Jelle, Jens Kober, and Robert Babuska. "Learning state representation for deep actor-critic control". In 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016. http://dx.doi.org/10.1109/cdc.2016.7798980.
Duarte, Valquiria Aparecida Rosa, and Rita Maria Silva Julia. "Improving the State Space Representation through Association Rules". In 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2016. http://dx.doi.org/10.1109/icmla.2016.0167.
Wang, Hai, Takeshi Onishi, Kevin Gimpel, and David McAllester. "Emergent Predication Structure in Hidden State Vectors of Neural Readers". In Proceedings of the 2nd Workshop on Representation Learning for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-2604.
Bhatt, Shreyansh, Jinjin Zhao, Candace Thille, Dawn Zimmaro, and Neelesh Gattani. "A Novel Approach for Knowledge State Representation and Prediction". In L@S '20: Seventh (2020) ACM Conference on Learning @ Scale. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3386527.3406745.
Zhao, Han, Xu Yang, Zhenru Wang, Erkun Yang, and Cheng Deng. "Graph Debiased Contrastive Learning with Joint Representation Clustering". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/473.
Texto completoInformes sobre el tema "State representation learning"
Babu M.G., Sarath, Debjani Ghosh, Jaideep Gupte, Md Asif Raza, Eric Kasper, and Priyanka Mehra. Kerala’s Grass-roots-led Pandemic Response: Deciphering the Strength of Decentralisation. Institute of Development Studies (IDS), June 2021. http://dx.doi.org/10.19088/ids.2021.049.
Iatsyshyn, Anna V., Valeriia O. Kovach, Yevhen O. Romanenko, Iryna I. Deinega, Andrii V. Iatsyshyn, Oleksandr O. Popov, Yulii G. Kutsan, Volodymyr O. Artemchuk, Oleksandr Yu Burov, and Svitlana H. Lytvynova. Application of augmented reality technologies for preparation of specialists of new technological era. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3749.
Singh, Abhijeet, Mauricio Romero, and Karthik Muralidharan. COVID-19 Learning Loss and Recovery: Panel Data Evidence from India. Research on Improving Systems of Education (RISE), September 2022. http://dx.doi.org/10.35489/bsg-risewp_2022/112.
Lalisse, Matthias. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach. Institute for New Economic Thinking Working Paper Series, February 2022. http://dx.doi.org/10.36687/inetwp178.
Tarasenko, Rostyslav O., Svitlana M. Amelina, Yuliya M. Kazhan, and Olga V. Bondarenko. The use of AR elements in the study of foreign languages at the university. CEUR Workshop Proceedings, November 2020. http://dx.doi.org/10.31812/123456789/4421.
Texto completoState Legislator Representation: A Data-Driven Learning Guide. Ann Arbor, MI: Inter-university Consortium for Political and Social Research, abril de 2009. http://dx.doi.org/10.3886/stateleg.