Academic literature on the topic 'State representation learning'
Below are lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'State representation learning.'
Journal articles on the topic "State representation learning"
Xu, Cai, Wei Zhao, Jinglong Zhao, Ziyu Guan, Yaming Yang, Long Chen, and Xiangyu Song. "Progressive Deep Multi-View Comprehensive Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10557–65. http://dx.doi.org/10.1609/aaai.v37i9.26254.
Yue, Yang, Bingyi Kang, Zhongwen Xu, Gao Huang, and Shuicheng Yan. "Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11069–77. http://dx.doi.org/10.1609/aaai.v37i9.26311.
de Bruin, Tim, Jens Kober, Karl Tuyls, and Robert Babuska. "Integrating State Representation Learning Into Deep Reinforcement Learning." IEEE Robotics and Automation Letters 3, no. 3 (July 2018): 1394–401. http://dx.doi.org/10.1109/lra.2018.2800101.
Chen, Haoqiang, Yadong Liu, Zongtan Zhou, and Ming Zhang. "A2C: Attention-Augmented Contrastive Learning for State Representation Extraction." Applied Sciences 10, no. 17 (August 26, 2020): 5902. http://dx.doi.org/10.3390/app10175902.
Ong, Sylvie, Yuri Grinberg, and Joelle Pineau. "Mixed Observability Predictive State Representations." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 746–52. http://dx.doi.org/10.1609/aaai.v27i1.8680.
Maier, Marc, Brian Taylor, Huseyin Oktay, and David Jensen. "Learning Causal Models of Relational Domains." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 531–38. http://dx.doi.org/10.1609/aaai.v24i1.7695.
Lesort, Timothée, Natalia Díaz-Rodríguez, Jean-François Goudou, and David Filliat. "State representation learning for control: An overview." Neural Networks 108 (December 2018): 379–92. http://dx.doi.org/10.1016/j.neunet.2018.07.006.
Chornozhuk, S. "The New Geometric “State-Action” Space Representation for Q-Learning Algorithm for Protein Structure Folding Problem." Cybernetics and Computer Technologies, no. 3 (October 27, 2020): 59–73. http://dx.doi.org/10.34229/2707-451x.20.3.6.
Zhang, Yujia, Lai-Man Po, Xuyuan Xu, Mengyang Liu, Yexin Wang, Weifeng Ou, Yuzhi Zhao, and Wing-Yin Yu. "Contrastive Spatio-Temporal Pretext Learning for Self-Supervised Video Representation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3380–89. http://dx.doi.org/10.1609/aaai.v36i3.20248.
Liu, Qiyuan, Qi Zhou, Rui Yang, and Jie Wang. "Robust Representation Learning by Clustering with Bisimulation Metrics for Visual Reinforcement Learning with Distractions." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8843–51. http://dx.doi.org/10.1609/aaai.v37i7.26063.
Dissertations / Theses on the topic "State representation learning"
Nuzzo, Francesco. "Unsupervised state representation pretraining in Reinforcement Learning applied to Atari games." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288189.
State representation learning is about extracting useful features from the observations received by an agent interacting with an environment in reinforcement learning. These features allow the agent to exploit a low-dimensional and informative representation to solve tasks more efficiently. In this work, we study unsupervised learning in Atari games. We use an RNN architecture to learn features that depend on sequences of observations, and pretrain a single-frame encoder architecture with different methods on randomly collected frames. Finally, we empirically evaluate how pretrained state representations perform compared with a randomly initialized architecture. To this end, we let an RL agent train on 22 different Atari 2600 games, initializing the encoder either randomly or with one of the following unsupervised methods: VAE, CPC, and ST-DIM. Promising results are obtained in most games when ST-DIM is chosen as the pretraining method, while VAE often performs worse than random initialization.
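The pretrained-versus-random comparison described in the abstract above can be illustrated with a toy sketch. This is a minimal illustration, not the thesis's actual method: a plain linear autoencoder stands in for the VAE/CPC/ST-DIM objectives, and all function and parameter names here are hypothetical.

```python
import random

def pretrain_linear_encoder(observations, latent_dim, epochs=200, lr=0.01, seed=0):
    """Toy stand-in for unsupervised state-encoder pretraining.

    Trains a linear encoder/decoder pair by gradient descent to reconstruct
    raw observations, so the encoder maps each observation to a compact code.
    Returns (encode, loss), where loss() is the mean reconstruction error.
    """
    obs_dim = len(observations[0])
    rng = random.Random(seed)
    w_enc = [[rng.uniform(-0.1, 0.1) for _ in range(obs_dim)] for _ in range(latent_dim)]
    w_dec = [[rng.uniform(-0.1, 0.1) for _ in range(latent_dim)] for _ in range(obs_dim)]

    def encode(x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in w_enc]

    def decode(z):
        return [sum(w * zk for w, zk in zip(row, z)) for row in w_dec]

    def loss():
        total = 0.0
        for x in observations:
            err = [xh - xi for xh, xi in zip(decode(encode(x)), x)]
            total += sum(e * e for e in err)
        return total / len(observations)

    for _ in range(epochs):
        for x in observations:
            z = encode(x)
            err = [xh - xi for xh, xi in zip(decode(z), x)]
            # gradient step on the decoder weights, then the encoder weights
            for i in range(obs_dim):
                for k in range(latent_dim):
                    w_dec[i][k] -= lr * 2 * err[i] * z[k]
            for k in range(latent_dim):
                g = sum(2 * err[i] * w_dec[i][k] for i in range(obs_dim))
                for j in range(obs_dim):
                    w_enc[k][j] -= lr * g * x[j]
    return encode, loss
```

Calling the helper with `epochs=0` yields the randomly initialized baseline; comparing the reconstruction loss of the two mirrors, in miniature, the pretrained-versus-random evaluation the thesis performs with a full RL agent.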
Sadeghi, Mohsen. "Representation and interaction of sensorimotor learning processes." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278611.
Merckling, Astrid. "Unsupervised pretraining of state representations in a rewardless environment." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS141.
This thesis seeks to extend the capabilities of state representation learning (SRL) to help scale deep reinforcement learning (DRL) algorithms to continuous control tasks with high-dimensional sensory observations (such as images). SRL improves the performance of DRL by providing it with better inputs than the embeddings learned from scratch with end-to-end strategies. Specifically, this thesis addresses the problem of performing state estimation through deep unsupervised pretraining of state representations without reward. These representations must satisfy certain properties to allow the correct application of bootstrapping and other decision-making mechanisms common to supervised learning, such as being low-dimensional and guaranteeing the local consistency and topology (or connectivity) of the environment, which we seek to achieve through the models pretrained with the two SRL algorithms proposed in this thesis.
Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.
Hautot, Julien. "Représentation à base radiale pour l'apprentissage par renforcement visuel." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0093.
This thesis work falls within the context of Reinforcement Learning (RL) from image data. Unlike supervised learning, which enables performing various tasks such as classification, regression, or segmentation from an annotated database, RL allows learning without a database, through interactions with an environment. In these methods, an agent, such as a robot, performs different actions to explore its environment and gather training data. Training such an agent involves trial and error: the agent is penalized when it fails at its task and rewarded when it succeeds. The goal of the agent is to improve its behavior to obtain the greatest long-term reward. We focus on visual extraction in RL scenarios using first-person-view images. The use of visual data often involves deep convolutional networks that work directly on images. However, these networks have significant computational complexity, lack interpretability, and sometimes suffer from instability. To overcome these difficulties, we investigated a network based on radial basis functions, which enable sparse and localized activations in the input space. Radial basis function networks (RBFNs) peaked in the 1990s but were later supplanted by convolutional networks due to their high computational cost on images. In this thesis, we developed a visual feature extractor inspired by RBFNs that reduces this computational cost on images. We used our network to solve first-person visual tasks and compared its results with various state-of-the-art methods, including end-to-end learning, state representation learning, and extreme learning machine methods. Different scenarios were tested in the VizDoom simulator and the PyBullet robotics physics simulator.
In addition to comparing the rewards obtained after learning, we conducted various tests on noise robustness, parameter generation for our network, and task transfer to reality. The proposed network achieves the best performance in reinforcement learning on the tested scenarios while being easier to use and interpret. Additionally, our network is robust to various types of noise, paving the way for the effective transfer of knowledge acquired in simulation to reality.
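The radial-basis-function idea at the core of the abstract above can be sketched in a few lines. This is a minimal illustration of the general mechanism, not the author's actual architecture, and the function and parameter names are hypothetical: each feature is a Gaussian bump centred on a prototype, so activations are sparse and localised in the input space.

```python
import math

def rbf_features(x, centers, sigma=1.0):
    """Map an input vector to Gaussian radial-basis activations.

    Each feature responds strongly only when x lies near its centre,
    which is what gives RBF networks their sparse, localised codes.
    """
    feats = []
    for c in centers:
        sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        feats.append(math.exp(-sq_dist / (2.0 * sigma ** 2)))
    return feats
```

An input sitting exactly on a centre activates that unit at 1.0, while units whose centres are far away stay near zero, so only a few features fire for any given input.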
Boots, Byron. "Spectral Approaches to Learning Predictive Representations." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/131.
Gabriel, Florence. "Mental representations of fractions: development, stable state, learning difficulties and intervention." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209933.
Based on some recent research questions and intense debates in the literature, a first behavioural study examined the mental representations of the magnitude of fractions in educated adults. Behavioural observations from adults can indeed provide a first clue to explain the paradox raised by fractions. Contrary perhaps to most educated adults’ intuition, finding the value of a given fraction is not an easy operation. Fractions are complex symbols, and there is an on-going debate in the literature about how their magnitude (i.e. value) is processed. In a first study, we asked adult volunteers to decide as quickly as possible whether two fractions represent the same magnitude or not. Equivalent fractions (e.g. 1/4 and 2/8) were identified as representing the same number only about half of the time. In another experiment, adults were also asked to decide which of two fractions was larger. This paradigm offered different results, suggesting that participants relied on both the global magnitude of the fraction and the magnitude of the components. Our results showed that fraction processing depends on experimental conditions. Adults appear to use the global magnitude only in restricted circumstances, mostly with easy and familiar fractions.
In another study, we investigated the development of the mental representations of the magnitude of fractions. Previous studies in adults showed that fraction processing can be either based on the magnitude of the numerators and denominators or based on the global magnitude of fractions and the magnitude of their components. The type of processing depends on experimental conditions. In this experiment, 5th, 6th, 7th-graders, and adults were tested with two paradigms. First, they performed a same/different task. Second, they carried out a numerical comparison task in which they had to decide which of two fractions was larger. Results showed that 5th-graders do not rely on the representations of the global magnitude of fractions in the Numerical Comparison task, but those representations develop from grade 6 until grade 7. In the Same/Different task, participants only relied on componential strategies. From grade 6 on, pupils apply the same heuristics as adults in fraction magnitude comparison tasks. Moreover, we have shown that correlations between global distance effect and children’s general fraction achievement were significant.
Fractions are well known to represent a stumbling block for primary school children. In a third study, we tried to identify the difficulties encountered by primary school pupils. We observed that most 4th and 5th-graders had only a very limited notion of the meaning of fractions, basically referring to pieces of cakes or pizzas. The fraction as a notation for numbers appeared particularly hard to grasp.
Building upon these results, we designed an intervention programme. The intervention “From Pies to Numbers” aimed at improving children’s understanding of fractions as numbers. The intervention was based on various games in which children had to estimate, compare, and combine fractions represented either symbolically or as figures. 20 game sessions distributed over 3 months led to 15-20% improvement in tests assessing children's capacity to estimate and compare fractions; conversely, children in the control group who received traditional lessons improved more in procedural skills such as simplification of fractions and arithmetic operations with fractions. Thus, a short classroom intervention inducing children to play with fractions improved their conceptual understanding.
The results are discussed in light of recent research on the mental representation of the magnitude of fractions and educational theories. The importance of multidisciplinary approaches in psychology and education was also discussed.
In sum, by combining behavioural experiments in adults and children with intervention studies, we hope to have improved the understanding of how the brain processes mathematical symbols, while helping teachers get a better grasp of pupils’ difficulties and develop classroom activities that suit the needs of learners.
Doctorat en Sciences Psychologiques et de l'éducation
Shi, Fangzhou. "Towards Molecule Generation with Heterogeneous States via Reinforcement Learning." Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/22335.
Ford, Shelton J. "The effect of graphing calculators and a three-core representation curriculum on college students' learning of exponential and logarithmic functions." 2008. http://www.lib.ncsu.edu/theses/available/etd-11072008-135009/unrestricted/etd.pdf.
Allen, Heather. "Experiencing literature – learning from experience: the application of neuroscience to literary analysis by example of representations of German colonialism in Uwe Timm’s Morenga." 2011. http://hdl.handle.net/1993/4862.
Books on the topic "State representation learning"
McBride, Kecia Driver, 1966-, ed. Visual media and the humanities: A pedagogy of representation. Knoxville: University of Tennessee Press, 2004.
Burge, Tyler. Perception: First Form of Mind. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198871002.001.0001.
Boden, Margaret A. 2. General intelligence as the Holy Grail. Oxford University Press, 2018. http://dx.doi.org/10.1093/actrade/9780199602919.003.0002.
Alden, John, Alexander H. Cohen, and Jonathan J. Ring. Gaming the System: Nine Games to Teach American Government Through Active Learning. Taylor & Francis Group, 2019.
Ginsburg, Herbert P., Rachael Labrecque, Kara Carpenter, and Dana Pagar. New Possibilities for Early Mathematics Education. Edited by Roi Cohen Kadosh and Ann Dowker. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199642342.013.029.
Rueschemeyer, Shirley-Ann, and M. Gareth Gaskell, eds. The Oxford Handbook of Psycholinguistics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198786825.001.0001.
Papafragou, Anna, John C. Trueswell, and Lila R. Gleitman, eds. The Oxford Handbook of the Mental Lexicon. Oxford University Press, 2022. http://dx.doi.org/10.1093/oxfordhb/9780198845003.001.0001.
Caselli, Tommaso, Eduard Hovy, Martha Palmer, and Piek Vossen, eds. Computational Analysis of Storylines. Cambridge University Press, 2021. http://dx.doi.org/10.1017/9781108854221.
Fox, Roy F. MediaSpeak. Praeger Publishers, 2000. http://dx.doi.org/10.5040/9798400684258.
Book chapters on the topic "State representation learning"
Merckling, Astrid, Alexandre Coninx, Loic Cressot, Stephane Doncieux, and Nicolas Perrin. "State Representation Learning from Demonstration." In Machine Learning, Optimization, and Data Science, 304–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_26.
Steccanella, Lorenzo, and Anders Jonsson. "State Representation Learning for Goal-Conditioned Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases, 84–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_6.
Schestakov, Stefan, Paul Heinemeyer, and Elena Demidova. "Road Network Representation Learning with Vehicle Trajectories." In Advances in Knowledge Discovery and Data Mining, 57–69. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33383-5_5.
Sychev, Oleg. "Visualizing Program State as a Clustered Graph for Learning Programming." In Diagrammatic Representation and Inference, 404–7. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_41.
Hu, Dapeng, Xuesong Jiang, Xiumei Wei, and Jian Wang. "State Representation Learning for Minimax Deep Deterministic Policy Gradient." In Knowledge Science, Engineering and Management, 481–87. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29551-6_43.
Meden, Blaž, Abraham Prieto, Peter Peer, and Francisco Bellas. "First Steps Towards State Representation Learning for Cognitive Robotics." In Lecture Notes in Computer Science, 499–510. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61705-9_41.
Meng, Li, Morten Goodwin, Anis Yazidi, and Paal Engelstad. "Unsupervised State Representation Learning in Partially Observable Atari Games." In Computer Analysis of Images and Patterns, 212–22. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44240-7_21.
Servan-Schreiber, David, Axel Cleeremans, and James L. McClelland. "Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks." In Connectionist Approaches to Language Learning, 57–89. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-4008-3_4.
Ding, Ning, Weize Chen, Zhengyan Zhang, Shengding Hu, Ganqu Cui, Yuan Yao, Yujia Qin, et al. "Ten Key Problems of Pre-trained Models: An Outlook of Representation Learning." In Representation Learning for Natural Language Processing, 491–521. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_14.
Li, Zhipeng, and Xuesong Jiang. "State Representation Learning for Multi-agent Deep Deterministic Policy Gradient." In Proceedings of the Fifth Euro-China Conference on Intelligent Data Analysis and Applications, 667–75. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03766-6_75.
Conference papers on the topic "State representation learning"
Li, Ziyi, Xiangtao Hu, Yongle Zhang, and Fujie Zhou. "Task-Oriented Reinforcement Learning with Interest State Representation." In 2024 International Conference on Advanced Robotics and Mechatronics (ICARM), 721–28. IEEE, 2024. http://dx.doi.org/10.1109/icarm62033.2024.10715850.
Drexler, Dominik, Simon Ståhlberg, Blai Bonet, and Hector Geffner. "Symmetries and Expressive Requirements for Learning General Policies." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 845–55. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/79.
Zhang, Yuanjing, Tao Shang, Chenyi Zhang, and Xueyi Guo. "Quantum Gate Control with State Representation for Deep Reinforcement Learning." In 2024 International Conference on Quantum Communications, Networking, and Computing (QCNC), 119–26. IEEE, 2024. http://dx.doi.org/10.1109/qcnc62729.2024.00028.
Nikolich, Aleksandr, Konstantin Korolev, Sergei Bratchikov, Igor Kiselev, and Artem Shelmanov. "Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for Russian." In Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024), 189–99. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.mrl-1.15.
Tang, Zhan-Yun, Ya-Rong Liu, and Pan Qin. "Visual Reinforcement Learning Using Dynamic State Representation for Continuous Motion Control." In 2024 43rd Chinese Control Conference (CCC), 8435–40. IEEE, 2024. http://dx.doi.org/10.23919/ccc63176.2024.10662669.
Liu, Junhua, Justin Albrethsen, Lincoln Goh, David Yau, and Kwan Hui Lim. "Spatial-Temporal Graph Representation Learning for Tactical Networks Future State Prediction." In 2024 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650266.
Shams, Siavash, Sukru Samet Dindar, Xilin Jiang, and Nima Mesgarani. "SSAMBA: Self-Supervised Audio Representation Learning With Mamba State Space Model." In 2024 IEEE Spoken Language Technology Workshop (SLT), 1053–59. IEEE, 2024. https://doi.org/10.1109/slt61566.2024.10832304.
Balyo, Tomáš, Martin Suda, Lukáš Chrpa, Dominik Šafránek, Stephan Gocht, Filip Dvořák, Roman Barták, and G. Michael Youngblood. "Planning Domain Model Acquisition from State Traces without Action Parameters." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 812–22. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/76.
Parać, Roko, Lorenzo Nodari, Leo Ardon, Daniel Furelos-Blanco, Federico Cerutti, and Alessandra Russo. "Learning Robust Reward Machines from Noisy Labels." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 909–19. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/85.
Zhou, Guichun, and Xiangdong Zhou. "Multivariate Time Series Representation Learning for Electrophysiology Classification with Procedure State Cross-domain Embedding." In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 1771–74. IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10822707.
Reports on the topic "State representation learning"
Babu M.G., Sarath, Debjani Ghosh, Jaideep Gupte, Md Asif Raza, Eric Kasper, and Priyanka Mehra. Kerala’s Grass-roots-led Pandemic Response: Deciphering the Strength of Decentralisation. Institute of Development Studies (IDS), June 2021. http://dx.doi.org/10.19088/ids.2021.049.
Iatsyshyn, Anna V., Valeriia O. Kovach, Yevhen O. Romanenko, Iryna I. Deinega, Andrii V. Iatsyshyn, Oleksandr O. Popov, Yulii G. Kutsan, Volodymyr O. Artemchuk, Oleksandr Yu Burov, and Svitlana H. Lytvynova. Application of augmented reality technologies for preparation of specialists of new technological era. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3749.
Singh, Abhijeet, Mauricio Romero, and Karthik Muralidharan. COVID-19 Learning Loss and Recovery: Panel Data Evidence from India. Research on Improving Systems of Education (RISE), September 2022. http://dx.doi.org/10.35489/bsg-risewp_2022/112.
Lalisse, Matthias. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach. Institute for New Economic Thinking Working Paper Series, February 2022. http://dx.doi.org/10.36687/inetwp178.
Ferdaus, Md Meftahul, Mahdi Abdelguerfi, Kendall Niles, Ken Pathak, and Joe Tom. Widened attention-enhanced atrous convolutional network for efficient embedded vision applications under resource constraints. Engineer Research and Development Center (U.S.), November 2024. http://dx.doi.org/10.21079/11681/49459.
Tarasenko, Rostyslav O., Svitlana M. Amelina, Yuliya M. Kazhan, and Olga V. Bondarenko. The use of AR elements in the study of foreign languages at the university. CEUR Workshop Proceedings, November 2020. http://dx.doi.org/10.31812/123456789/4421.
Full textState Legislator Representation: A Data-Driven Learning Guide. Ann Arbor, MI: Inter-university Consortium for Political and Social Research, April 2009. http://dx.doi.org/10.3886/stateleg.