Selected scholarly literature on the topic "State representation learning"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Contents
Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "State representation learning."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication in PDF format and read its online abstract, provided the relevant parameters are present in the work's metadata.
Journal articles on the topic "State representation learning"
Xu, Cai, Wei Zhao, Jinglong Zhao, Ziyu Guan, Yaming Yang, Long Chen, and Xiangyu Song. "Progressive Deep Multi-View Comprehensive Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10557–65. http://dx.doi.org/10.1609/aaai.v37i9.26254.
Yue, Yang, Bingyi Kang, Zhongwen Xu, Gao Huang, and Shuicheng Yan. "Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11069–77. http://dx.doi.org/10.1609/aaai.v37i9.26311.
de Bruin, Tim, Jens Kober, Karl Tuyls, and Robert Babuska. "Integrating State Representation Learning Into Deep Reinforcement Learning." IEEE Robotics and Automation Letters 3, no. 3 (July 2018): 1394–401. http://dx.doi.org/10.1109/lra.2018.2800101.
Chen, Haoqiang, Yadong Liu, Zongtan Zhou, and Ming Zhang. "A2C: Attention-Augmented Contrastive Learning for State Representation Extraction." Applied Sciences 10, no. 17 (August 26, 2020): 5902. http://dx.doi.org/10.3390/app10175902.
Ong, Sylvie, Yuri Grinberg, and Joelle Pineau. "Mixed Observability Predictive State Representations." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 746–52. http://dx.doi.org/10.1609/aaai.v27i1.8680.
Maier, Marc, Brian Taylor, Huseyin Oktay, and David Jensen. "Learning Causal Models of Relational Domains." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 531–38. http://dx.doi.org/10.1609/aaai.v24i1.7695.
Lesort, Timothée, Natalia Díaz-Rodríguez, Jean-François Goudou, and David Filliat. "State representation learning for control: An overview." Neural Networks 108 (December 2018): 379–92. http://dx.doi.org/10.1016/j.neunet.2018.07.006.
Chornozhuk, S. "The New Geometric 'State-Action' Space Representation for Q-Learning Algorithm for Protein Structure Folding Problem." Cybernetics and Computer Technologies, no. 3 (October 27, 2020): 59–73. http://dx.doi.org/10.34229/2707-451x.20.3.6.
Zhang, Yujia, Lai-Man Po, Xuyuan Xu, Mengyang Liu, Yexin Wang, Weifeng Ou, Yuzhi Zhao, and Wing-Yin Yu. "Contrastive Spatio-Temporal Pretext Learning for Self-Supervised Video Representation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3380–89. http://dx.doi.org/10.1609/aaai.v36i3.20248.
Li, Dongfen, Lichao Meng, Jingjing Li, Ke Lu, and Yang Yang. "Domain adaptive state representation alignment for reinforcement learning." Information Sciences 609 (September 2022): 1353–68. http://dx.doi.org/10.1016/j.ins.2022.07.156.
Dissertations and theses on the topic "State representation learning"
Nuzzo, Francesco. "Unsupervised state representation pretraining in Reinforcement Learning applied to Atari games." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288189.
State representation learning aims to extract useful features from the observations an agent receives while interacting with an environment in reinforcement learning. These features let the agent exploit a low-dimensional, informative representation to solve tasks more efficiently. In this work, we study unsupervised learning in Atari games. We use an RNN architecture to learn features that depend on sequences of observations, and pretrain a single-frame encoder architecture with different methods on randomly collected frames. Finally, we empirically evaluate how pretrained state representations perform compared with a randomly initialized architecture. To this end, we train an RL agent on 22 different Atari 2600 games, initializing the encoder either randomly or with one of the following unsupervised methods: VAE, CPC, and ST-DIM. Promising results are achieved in most games when ST-DIM is chosen as the pretraining method, while VAE often performs worse than random initialization.
Sadeghi, Mohsen. "Representation and interaction of sensorimotor learning processes." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278611.
Merckling, Astrid. "Unsupervised pretraining of state representations in a rewardless environment." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS141.
This thesis seeks to extend the capabilities of state representation learning (SRL) to help scale deep reinforcement learning (DRL) algorithms to continuous control tasks with high-dimensional sensory observations (such as images). SRL improves the performance of DRL by providing it with better inputs than the embeddings learned from scratch with end-to-end strategies. Specifically, this thesis addresses the problem of performing state estimation through deep unsupervised pretraining of state representations without reward. These representations must satisfy certain properties to allow the correct application of bootstrapping and other decision-making mechanisms common to supervised learning, such as being low-dimensional and preserving the local consistency and topology (or connectivity) of the environment, which we seek to achieve through the models pretrained with the two SRL algorithms proposed in this thesis.
Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.
Boots, Byron. "Spectral Approaches to Learning Predictive Representations." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/131.
Gabriel, Florence. "Mental representations of fractions: development, stable state, learning difficulties and intervention." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209933.
Based on recent research questions and intense debates in the literature, a first behavioural study examined the mental representations of the magnitude of fractions in educated adults. Behavioural observations from adults can indeed provide a first clue to explaining the paradox raised by fractions. Contrary perhaps to most educated adults' intuition, finding the value of a given fraction is not an easy operation. Fractions are complex symbols, and there is an ongoing debate in the literature about how their magnitude (i.e. value) is processed. In a first study, we asked adult volunteers to decide as quickly as possible whether two fractions represent the same magnitude or not. Equivalent fractions (e.g. 1/4 and 2/8) were identified as representing the same number only about half of the time. In another experiment, adults were also asked to decide which of two fractions was larger. This paradigm yielded different results, suggesting that participants relied on both the global magnitude of the fraction and the magnitude of its components. Our results showed that fraction processing depends on experimental conditions. Adults appear to use the global magnitude only in restricted circumstances, mostly with easy and familiar fractions.
In another study, we investigated the development of the mental representations of the magnitude of fractions. Previous studies in adults showed that fraction processing can be either based on the magnitude of the numerators and denominators or based on the global magnitude of fractions and the magnitude of their components. The type of processing depends on experimental conditions. In this experiment, 5th, 6th, 7th-graders, and adults were tested with two paradigms. First, they performed a same/different task. Second, they carried out a numerical comparison task in which they had to decide which of two fractions was larger. Results showed that 5th-graders do not rely on the representations of the global magnitude of fractions in the Numerical Comparison task, but those representations develop from grade 6 until grade 7. In the Same/Different task, participants only relied on componential strategies. From grade 6 on, pupils apply the same heuristics as adults in fraction magnitude comparison tasks. Moreover, we have shown that correlations between global distance effect and children’s general fraction achievement were significant.
Fractions are well known to represent a stumbling block for primary school children. In a third study, we tried to identify the difficulties encountered by primary school pupils. We observed that most 4th and 5th-graders had only a very limited notion of the meaning of fractions, basically referring to pieces of cakes or pizzas. The fraction as a notation for numbers appeared particularly hard to grasp.
Building upon these results, we designed an intervention programme. The intervention “From Pies to Numbers” aimed at improving children’s understanding of fractions as numbers. The intervention was based on various games in which children had to estimate, compare, and combine fractions represented either symbolically or as figures. 20 game sessions distributed over 3 months led to 15-20% improvement in tests assessing children's capacity to estimate and compare fractions; conversely, children in the control group who received traditional lessons improved more in procedural skills such as simplification of fractions and arithmetic operations with fractions. Thus, a short classroom intervention inducing children to play with fractions improved their conceptual understanding.
The results are discussed in light of recent research on the mental representation of the magnitude of fractions and educational theories. The importance of multidisciplinary approaches in psychology and education was also discussed.
In sum, by combining behavioural experiments in adults and children with intervention studies, we hope to have improved the understanding of how the brain processes mathematical symbols, while helping teachers get a better grasp of pupils' difficulties and develop classroom activities that suit the needs of learners.
Doctorate in Psychological and Educational Sciences
Shi, Fangzhou. "Towards Molecule Generation with Heterogeneous States via Reinforcement Learning." Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/22335.
Ford, Shelton J. "The effect of graphing calculators and a three-core representation curriculum on college students' learning of exponential and logarithmic functions." 2008. http://www.lib.ncsu.edu/theses/available/etd-11072008-135009/unrestricted/etd.pdf.
Allen, Heather. "Experiencing literature – learning from experience: the application of neuroscience to literary analysis by example of representations of German colonialism in Uwe Timm's Morenga." 2011. http://hdl.handle.net/1993/4862.
Stasko, Carly. "A Pedagogy of Holistic Media Literacy: Reflections on Culture Jamming as Transformative Learning and Healing." Thesis, 2009. http://hdl.handle.net/1807/18109.
Books on the topic "State representation learning"
Bositis, David A., and Joint Center for Political and Economic Studies (U.S.), eds. Redistricting and minority representation: Learning from the past, preparing for the future. Washington, D.C.: Joint Center for Political and Economic Studies, 1998.
McBride, Kecia Driver, ed. Visual media and the humanities: A pedagogy of representation. Knoxville: University of Tennessee Press, 2004.
Burge, Tyler. Perception: First Form of Mind. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198871002.001.0001.
Boden, Margaret A. "2. General intelligence as the Holy Grail." Oxford University Press, 2018. http://dx.doi.org/10.1093/actrade/9780199602919.003.0002.
Alden, John, Alexander H. Cohen, and Jonathan J. Ring. Gaming the System: Nine Games to Teach American Government Through Active Learning. Taylor & Francis Group, 2019.
Ginsburg, Herbert P., Rachael Labrecque, Kara Carpenter, and Dana Pagar. New Possibilities for Early Mathematics Education. Edited by Roi Cohen Kadosh and Ann Dowker. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199642342.013.029.
Rueschemeyer, Shirley-Ann, and M. Gareth Gaskell, eds. The Oxford Handbook of Psycholinguistics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198786825.001.0001.
Papafragou, Anna, John C. Trueswell, and Lila R. Gleitman, eds. The Oxford Handbook of the Mental Lexicon. Oxford University Press, 2022. http://dx.doi.org/10.1093/oxfordhb/9780198845003.001.0001.
Caselli, Tommaso, Eduard Hovy, Martha Palmer, and Piek Vossen, eds. Computational Analysis of Storylines. Cambridge University Press, 2021. http://dx.doi.org/10.1017/9781108854221.
Book chapters on the topic "State representation learning"
Merckling, Astrid, Alexandre Coninx, Loic Cressot, Stephane Doncieux, and Nicolas Perrin. "State Representation Learning from Demonstration." In Machine Learning, Optimization, and Data Science, 304–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_26.
Steccanella, Lorenzo, and Anders Jonsson. "State Representation Learning for Goal-Conditioned Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases, 84–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_6.
Schestakov, Stefan, Paul Heinemeyer, and Elena Demidova. "Road Network Representation Learning with Vehicle Trajectories." In Advances in Knowledge Discovery and Data Mining, 57–69. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33383-5_5.
Sychev, Oleg. "Visualizing Program State as a Clustered Graph for Learning Programming." In Diagrammatic Representation and Inference, 404–7. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_41.
Hu, Dapeng, Xuesong Jiang, Xiumei Wei, and Jian Wang. "State Representation Learning for Minimax Deep Deterministic Policy Gradient." In Knowledge Science, Engineering and Management, 481–87. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29551-6_43.
Meden, Blaž, Abraham Prieto, Peter Peer, and Francisco Bellas. "First Steps Towards State Representation Learning for Cognitive Robotics." In Lecture Notes in Computer Science, 499–510. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61705-9_41.
Meng, Li, Morten Goodwin, Anis Yazidi, and Paal Engelstad. "Unsupervised State Representation Learning in Partially Observable Atari Games." In Computer Analysis of Images and Patterns, 212–22. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44240-7_21.
Servan-Schreiber, David, Axel Cleeremans, and James L. McClelland. "Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks." In Connectionist Approaches to Language Learning, 57–89. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-4008-3_4.
Ding, Ning, Weize Chen, Zhengyan Zhang, Shengding Hu, Ganqu Cui, Yuan Yao, Yujia Qin, et al. "Ten Key Problems of Pre-trained Models: An Outlook of Representation Learning." In Representation Learning for Natural Language Processing, 491–521. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_14.
Li, Zhipeng, and Xuesong Jiang. "State Representation Learning for Multi-agent Deep Deterministic Policy Gradient." In Proceedings of the Fifth Euro-China Conference on Intelligent Data Analysis and Applications, 667–75. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03766-6_75.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "State representation learning"
Zhao, Jian, Wengang Zhou, Tianyu Zhao, Yun Zhou, and Houqiang Li. "State Representation Learning For Effective Deep Reinforcement Learning." In 2020 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2020. http://dx.doi.org/10.1109/icme46284.2020.9102924.
Nozawa, Kento, and Issei Sato. "Evaluation Methods for Representation Learning: A Survey." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/776.
Zhu, Hanhua. "Generalized Representation Learning Methods for Deep Reinforcement Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/748.
Bai, Yang, Min Cao, Daming Gao, Ziqiang Cao, Chen Chen, Zhenfeng Fan, Liqiang Nie, and Min Zhang. "RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/62.
Stork, Johannes A., Carl Henrik Ek, Yasemin Bekiroglu, and Danica Kragic. "Learning Predictive State Representation for in-hand manipulation." In 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015. http://dx.doi.org/10.1109/icra.2015.7139641.
Munk, Jelle, Jens Kober, and Robert Babuska. "Learning state representation for deep actor-critic control." In 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016. http://dx.doi.org/10.1109/cdc.2016.7798980.
Duarte, Valquiria Aparecida Rosa, and Rita Maria Silva Julia. "Improving the State Space Representation through Association Rules." In 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2016. http://dx.doi.org/10.1109/icmla.2016.0167.
Wang, Hai, Takeshi Onishi, Kevin Gimpel, and David McAllester. "Emergent Predication Structure in Hidden State Vectors of Neural Readers." In Proceedings of the 2nd Workshop on Representation Learning for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-2604.
Bhatt, Shreyansh, Jinjin Zhao, Candace Thille, Dawn Zimmaro, and Neelesh Gattani. "A Novel Approach for Knowledge State Representation and Prediction." In L@S '20: Seventh (2020) ACM Conference on Learning @ Scale. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3386527.3406745.
Zhao, Han, Xu Yang, Zhenru Wang, Erkun Yang, and Cheng Deng. "Graph Debiased Contrastive Learning with Joint Representation Clustering." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/473.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "State representation learning"
Babu M.G., Sarath, Debjani Ghosh, Jaideep Gupte, Md Asif Raza, Eric Kasper, and Priyanka Mehra. Kerala's Grass-roots-led Pandemic Response: Deciphering the Strength of Decentralisation. Institute of Development Studies (IDS), June 2021. http://dx.doi.org/10.19088/ids.2021.049.
Iatsyshyn, Anna V., Valeriia O. Kovach, Yevhen O. Romanenko, Iryna I. Deinega, Andrii V. Iatsyshyn, Oleksandr O. Popov, Yulii G. Kutsan, Volodymyr O. Artemchuk, Oleksandr Yu Burov, and Svitlana H. Lytvynova. Application of augmented reality technologies for preparation of specialists of new technological era. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3749.
Singh, Abhijeet, Mauricio Romero, and Karthik Muralidharan. COVID-19 Learning Loss and Recovery: Panel Data Evidence from India. Research on Improving Systems of Education (RISE), September 2022. http://dx.doi.org/10.35489/bsg-risewp_2022/112.
Lalisse, Matthias. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach. Institute for New Economic Thinking Working Paper Series, February 2022. http://dx.doi.org/10.36687/inetwp178.
Tarasenko, Rostyslav O., Svitlana M. Amelina, Yuliya M. Kazhan, and Olga V. Bondarenko. The use of AR elements in the study of foreign languages at the university. CEUR Workshop Proceedings, November 2020. http://dx.doi.org/10.31812/123456789/4421.
Der volle Inhalt der QuelleTarasenko, Rostyslav O., Svitlana M. Amelina, Yuliya M. Kazhan und Olga V. Bondarenko. The use of AR elements in the study of foreign languages at the university. CEUR Workshop Proceedings, November 2020. http://dx.doi.org/10.31812/123456789/4421.
Der volle Inhalt der QuelleState Legislator Representation: A Data-Driven Learning Guide. Ann Arbor, MI: Inter-university Consortium for Political and Social Research, April 2009. http://dx.doi.org/10.3886/stateleg.