Academic literature on the topic "State representation learning"
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles
Contents
Browse the thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "State representation learning".
Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "State representation learning"
Xu, Cai, Wei Zhao, Jinglong Zhao, Ziyu Guan, Yaming Yang, Long Chen, and Xiangyu Song. "Progressive Deep Multi-View Comprehensive Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10557–65. http://dx.doi.org/10.1609/aaai.v37i9.26254.
Yue, Yang, Bingyi Kang, Zhongwen Xu, Gao Huang, and Shuicheng Yan. "Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11069–77. http://dx.doi.org/10.1609/aaai.v37i9.26311.
de Bruin, Tim, Jens Kober, Karl Tuyls, and Robert Babuska. "Integrating State Representation Learning Into Deep Reinforcement Learning." IEEE Robotics and Automation Letters 3, no. 3 (July 2018): 1394–401. http://dx.doi.org/10.1109/lra.2018.2800101.
Chen, Haoqiang, Yadong Liu, Zongtan Zhou, and Ming Zhang. "A2C: Attention-Augmented Contrastive Learning for State Representation Extraction." Applied Sciences 10, no. 17 (August 26, 2020): 5902. http://dx.doi.org/10.3390/app10175902.
Ong, Sylvie, Yuri Grinberg, and Joelle Pineau. "Mixed Observability Predictive State Representations." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 746–52. http://dx.doi.org/10.1609/aaai.v27i1.8680.
Maier, Marc, Brian Taylor, Huseyin Oktay, and David Jensen. "Learning Causal Models of Relational Domains." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 531–38. http://dx.doi.org/10.1609/aaai.v24i1.7695.
Lesort, Timothée, Natalia Díaz-Rodríguez, Jean-François Goudou, and David Filliat. "State representation learning for control: An overview." Neural Networks 108 (December 2018): 379–92. http://dx.doi.org/10.1016/j.neunet.2018.07.006.
Chornozhuk, S. "The New Geometric 'State-Action' Space Representation for Q-Learning Algorithm for Protein Structure Folding Problem." Cybernetics and Computer Technologies, no. 3 (October 27, 2020): 59–73. http://dx.doi.org/10.34229/2707-451x.20.3.6.
Zhang, Yujia, Lai-Man Po, Xuyuan Xu, Mengyang Liu, Yexin Wang, Weifeng Ou, Yuzhi Zhao, and Wing-Yin Yu. "Contrastive Spatio-Temporal Pretext Learning for Self-Supervised Video Representation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3380–89. http://dx.doi.org/10.1609/aaai.v36i3.20248.
Li, Dongfen, Lichao Meng, Jingjing Li, Ke Lu, and Yang Yang. "Domain adaptive state representation alignment for reinforcement learning." Information Sciences 609 (September 2022): 1353–68. http://dx.doi.org/10.1016/j.ins.2022.07.156.
Texte intégralThèses sur le sujet "State representation learning"
Nuzzo, Francesco. "Unsupervised state representation pretraining in Reinforcement Learning applied to Atari games." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288189.
State representation learning is about extracting useful features from the observations received by an agent interacting with an environment in reinforcement learning. These features allow the agent to exploit a low-dimensional and informative representation to solve tasks more efficiently. In this work, we study unsupervised learning in Atari games. We use an RNN architecture to learn features that depend on sequences of observations, and pretrain a single-frame encoder architecture with different methods on randomly collected frames. Finally, we empirically evaluate how pretrained state representations perform compared with a randomly initialized architecture. To this end, we let an RL agent train on 22 different Atari 2600 games, initializing the encoder either randomly or with one of the following unsupervised methods: VAE, CPC, and ST-DIM. Promising results are achieved in most games when ST-DIM is chosen as the pretraining method, while VAE often performs worse than a random initialization.
Sadeghi, Mohsen. "Representation and interaction of sensorimotor learning processes." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278611.
Merckling, Astrid. "Unsupervised pretraining of state representations in a rewardless environment." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS141.
This thesis seeks to extend the capabilities of state representation learning (SRL) to help scale deep reinforcement learning (DRL) algorithms to continuous control tasks with high-dimensional sensory observations (such as images). SRL improves the performance of DRL by providing it with better inputs than the embeddings learned from scratch with end-to-end strategies. Specifically, this thesis addresses the problem of performing state estimation through deep unsupervised pretraining of state representations without reward. These representations must satisfy certain properties to allow the correct application of bootstrapping and other decision-making mechanisms common to supervised learning, such as being low-dimensional and guaranteeing the local consistency and topology (or connectivity) of the environment, which we seek to achieve through the models pretrained with the two SRL algorithms proposed in this thesis.
Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.
Boots, Byron. "Spectral Approaches to Learning Predictive Representations." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/131.
Gabriel, Florence. "Mental representations of fractions: development, stable state, learning difficulties and intervention." Doctoral thesis, Université Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209933.
Based on recent research questions and intense debates in the literature, a first behavioural study examined the mental representations of the magnitude of fractions in educated adults. Behavioural observations from adults can indeed provide a first clue to explaining the paradox raised by fractions. Contrary perhaps to most educated adults' intuition, finding the value of a given fraction is not an easy operation. Fractions are complex symbols, and there is an ongoing debate in the literature about how their magnitude (i.e. value) is processed. In a first study, we asked adult volunteers to decide as quickly as possible whether two fractions represent the same magnitude or not. Equivalent fractions (e.g. 1/4 and 2/8) were identified as representing the same number only about half of the time. In another experiment, adults were also asked to decide which of two fractions was larger. This paradigm yielded different results, suggesting that participants relied on both the global magnitude of the fraction and the magnitude of its components. Our results showed that fraction processing depends on experimental conditions. Adults appear to use the global magnitude only in restricted circumstances, mostly with easy and familiar fractions.
In another study, we investigated the development of the mental representations of the magnitude of fractions. Previous studies in adults showed that fraction processing can be either based on the magnitude of the numerators and denominators or based on the global magnitude of fractions and the magnitude of their components. The type of processing depends on experimental conditions. In this experiment, 5th, 6th, 7th-graders, and adults were tested with two paradigms. First, they performed a same/different task. Second, they carried out a numerical comparison task in which they had to decide which of two fractions was larger. Results showed that 5th-graders do not rely on the representations of the global magnitude of fractions in the Numerical Comparison task, but those representations develop from grade 6 until grade 7. In the Same/Different task, participants only relied on componential strategies. From grade 6 on, pupils apply the same heuristics as adults in fraction magnitude comparison tasks. Moreover, we have shown that correlations between global distance effect and children’s general fraction achievement were significant.
Fractions are well known to represent a stumbling block for primary school children. In a third study, we tried to identify the difficulties encountered by primary school pupils. We observed that most 4th and 5th-graders had only a very limited notion of the meaning of fractions, basically referring to pieces of cakes or pizzas. The fraction as a notation for numbers appeared particularly hard to grasp.
Building upon these results, we designed an intervention programme. The intervention “From Pies to Numbers” aimed at improving children’s understanding of fractions as numbers. The intervention was based on various games in which children had to estimate, compare, and combine fractions represented either symbolically or as figures. 20 game sessions distributed over 3 months led to 15-20% improvement in tests assessing children's capacity to estimate and compare fractions; conversely, children in the control group who received traditional lessons improved more in procedural skills such as simplification of fractions and arithmetic operations with fractions. Thus, a short classroom intervention inducing children to play with fractions improved their conceptual understanding.
The results are discussed in light of recent research on the mental representation of the magnitude of fractions and educational theories. The importance of multidisciplinary approaches in psychology and education is also discussed.
In sum, by combining behavioural experiments in adults and children with intervention studies, we hope to have improved the understanding of how the brain processes mathematical symbols, while helping teachers get a better grasp of pupils' difficulties and develop classroom activities that suit the needs of learners.
Doctorate in Psychological and Educational Sciences
Shi, Fangzhou. "Towards Molecule Generation with Heterogeneous States via Reinforcement Learning." Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/22335.
Ford, Shelton J. "The effect of graphing calculators and a three-core representation curriculum on college students' learning of exponential and logarithmic functions." 2008. http://www.lib.ncsu.edu/theses/available/etd-11072008-135009/unrestricted/etd.pdf.
Allen, Heather. "Experiencing literature – learning from experience: the application of neuroscience to literary analysis by example of representations of German colonialism in Uwe Timm's Morenga." 2011. http://hdl.handle.net/1993/4862.
Stasko, Carly. "A Pedagogy of Holistic Media Literacy: Reflections on Culture Jamming as Transformative Learning and Healing." Thesis, 2009. http://hdl.handle.net/1807/18109.
Texte intégralLivres sur le sujet "State representation learning"
Bositis, David A., and Joint Center for Political and Economic Studies (U.S.), eds. Redistricting and minority representation: Learning from the past, preparing for the future. Washington, D.C.: Joint Center for Political and Economic Studies, 1998.
McBride, Kecia Driver, ed. Visual media and the humanities: A pedagogy of representation. Knoxville: University of Tennessee Press, 2004.
Burge, Tyler. Perception: First Form of Mind. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198871002.001.0001.
Boden, Margaret A. 2. General intelligence as the Holy Grail. Oxford University Press, 2018. http://dx.doi.org/10.1093/actrade/9780199602919.003.0002.
Alden, John, Alexander H. Cohen, and Jonathan J. Ring. Gaming the System: Nine Games to Teach American Government Through Active Learning. Taylor & Francis Group, 2019.
Ginsburg, Herbert P., Rachael Labrecque, Kara Carpenter, and Dana Pagar. New Possibilities for Early Mathematics Education. Edited by Roi Cohen Kadosh and Ann Dowker. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199642342.013.029.
Rueschemeyer, Shirley-Ann, and M. Gareth Gaskell, eds. The Oxford Handbook of Psycholinguistics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198786825.001.0001.
Papafragou, Anna, John C. Trueswell, and Lila R. Gleitman, eds. The Oxford Handbook of the Mental Lexicon. Oxford University Press, 2022. http://dx.doi.org/10.1093/oxfordhb/9780198845003.001.0001.
Caselli, Tommaso, Eduard Hovy, Martha Palmer, and Piek Vossen, eds. Computational Analysis of Storylines. Cambridge University Press, 2021. http://dx.doi.org/10.1017/9781108854221.
Texte intégralChapitres de livres sur le sujet "State representation learning"
Merckling, Astrid, Alexandre Coninx, Loic Cressot, Stephane Doncieux, and Nicolas Perrin. "State Representation Learning from Demonstration." In Machine Learning, Optimization, and Data Science, 304–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_26.
Steccanella, Lorenzo, and Anders Jonsson. "State Representation Learning for Goal-Conditioned Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases, 84–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_6.
Schestakov, Stefan, Paul Heinemeyer, and Elena Demidova. "Road Network Representation Learning with Vehicle Trajectories." In Advances in Knowledge Discovery and Data Mining, 57–69. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33383-5_5.
Sychev, Oleg. "Visualizing Program State as a Clustered Graph for Learning Programming." In Diagrammatic Representation and Inference, 404–7. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_41.
Hu, Dapeng, Xuesong Jiang, Xiumei Wei, and Jian Wang. "State Representation Learning for Minimax Deep Deterministic Policy Gradient." In Knowledge Science, Engineering and Management, 481–87. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29551-6_43.
Meden, Blaž, Abraham Prieto, Peter Peer, and Francisco Bellas. "First Steps Towards State Representation Learning for Cognitive Robotics." In Lecture Notes in Computer Science, 499–510. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61705-9_41.
Meng, Li, Morten Goodwin, Anis Yazidi, and Paal Engelstad. "Unsupervised State Representation Learning in Partially Observable Atari Games." In Computer Analysis of Images and Patterns, 212–22. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44240-7_21.
Servan-Schreiber, David, Axel Cleeremans, and James L. McClelland. "Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks." In Connectionist Approaches to Language Learning, 57–89. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-4008-3_4.
Ding, Ning, Weize Chen, Zhengyan Zhang, Shengding Hu, Ganqu Cui, Yuan Yao, Yujia Qin, et al. "Ten Key Problems of Pre-trained Models: An Outlook of Representation Learning." In Representation Learning for Natural Language Processing, 491–521. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_14.
Li, Zhipeng, and Xuesong Jiang. "State Representation Learning for Multi-agent Deep Deterministic Policy Gradient." In Proceedings of the Fifth Euro-China Conference on Intelligent Data Analysis and Applications, 667–75. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03766-6_75.
Texte intégralActes de conférences sur le sujet "State representation learning"
Zhao, Jian, Wengang Zhou, Tianyu Zhao, Yun Zhou, and Houqiang Li. "State Representation Learning For Effective Deep Reinforcement Learning." In 2020 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2020. http://dx.doi.org/10.1109/icme46284.2020.9102924.
Nozawa, Kento, and Issei Sato. "Evaluation Methods for Representation Learning: A Survey." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/776.
Zhu, Hanhua. "Generalized Representation Learning Methods for Deep Reinforcement Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/748.
Bai, Yang, Min Cao, Daming Gao, Ziqiang Cao, Chen Chen, Zhenfeng Fan, Liqiang Nie, and Min Zhang. "RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/62.
Stork, Johannes A., Carl Henrik Ek, Yasemin Bekiroglu, and Danica Kragic. "Learning Predictive State Representation for in-hand manipulation." In 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015. http://dx.doi.org/10.1109/icra.2015.7139641.
Munk, Jelle, Jens Kober, and Robert Babuska. "Learning state representation for deep actor-critic control." In 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016. http://dx.doi.org/10.1109/cdc.2016.7798980.
Duarte, Valquiria Aparecida Rosa, and Rita Maria Silva Julia. "Improving the State Space Representation through Association Rules." In 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2016. http://dx.doi.org/10.1109/icmla.2016.0167.
Wang, Hai, Takeshi Onishi, Kevin Gimpel, and David McAllester. "Emergent Predication Structure in Hidden State Vectors of Neural Readers." In Proceedings of the 2nd Workshop on Representation Learning for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-2604.
Bhatt, Shreyansh, Jinjin Zhao, Candace Thille, Dawn Zimmaro, and Neelesh Gattani. "A Novel Approach for Knowledge State Representation and Prediction." In L@S '20: Seventh (2020) ACM Conference on Learning @ Scale. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3386527.3406745.
Zhao, Han, Xu Yang, Zhenru Wang, Erkun Yang, and Cheng Deng. "Graph Debiased Contrastive Learning with Joint Representation Clustering." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/473.
Texte intégralRapports d'organisations sur le sujet "State representation learning"
Babu M.G., Sarath, Debjani Ghosh, Jaideep Gupte, Md Asif Raza, Eric Kasper, and Priyanka Mehra. Kerala's Grass-roots-led Pandemic Response: Deciphering the Strength of Decentralisation. Institute of Development Studies (IDS), June 2021. http://dx.doi.org/10.19088/ids.2021.049.
Iatsyshyn, Anna V., Valeriia O. Kovach, Yevhen O. Romanenko, Iryna I. Deinega, Andrii V. Iatsyshyn, Oleksandr O. Popov, Yulii G. Kutsan, Volodymyr O. Artemchuk, Oleksandr Yu Burov, and Svitlana H. Lytvynova. Application of augmented reality technologies for preparation of specialists of new technological era. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3749.
Singh, Abhijeet, Mauricio Romero, and Karthik Muralidharan. COVID-19 Learning Loss and Recovery: Panel Data Evidence from India. Research on Improving Systems of Education (RISE), September 2022. http://dx.doi.org/10.35489/bsg-risewp_2022/112.
Lalisse, Matthias. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach. Institute for New Economic Thinking Working Paper Series, February 2022. http://dx.doi.org/10.36687/inetwp178.
Tarasenko, Rostyslav O., Svitlana M. Amelina, Yuliya M. Kazhan, and Olga V. Bondarenko. The use of AR elements in the study of foreign languages at the university. CEUR Workshop Proceedings, November 2020. http://dx.doi.org/10.31812/123456789/4421.
State Legislator Representation: A Data-Driven Learning Guide. Ann Arbor, MI: Inter-university Consortium for Political and Social Research, April 2009. http://dx.doi.org/10.3886/stateleg.