Academic literature on the topic "States representation learning"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference proceedings, and other academic sources on the topic "States representation learning".
Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication in pdf format and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "States representation learning"
Konidaris, George, Leslie Pack Kaelbling, and Tomas Lozano-Perez. "From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning". Journal of Artificial Intelligence Research 61 (January 31, 2018): 215–89. http://dx.doi.org/10.1613/jair.5575.
SCARPETTA, SILVIA, ZHAOPING LI, and JOHN HERTZ. "LEARNING IN AN OSCILLATORY CORTICAL MODEL". Fractals 11, supp01 (February 2003): 291–300. http://dx.doi.org/10.1142/s0218348x03001951.
Zhu, Zheng-Mao, Shengyi Jiang, Yu-Ren Liu, Yang Yu, and Kun Zhang. "Invariant Action Effect Model for Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9260–68. http://dx.doi.org/10.1609/aaai.v36i8.20913.
Yue, Yang, Bingyi Kang, Zhongwen Xu, Gao Huang, and Shuicheng Yan. "Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11069–77. http://dx.doi.org/10.1609/aaai.v37i9.26311.
Chornozhuk, S. "The New Geometric “State-Action” Space Representation for Q-Learning Algorithm for Protein Structure Folding Problem". Cybernetics and Computer Technologies, no. 3 (October 27, 2020): 59–73. http://dx.doi.org/10.34229/2707-451x.20.3.6.
Lamanna, Leonardo, Alfonso Emilio Gerevini, Alessandro Saetti, Luciano Serafini, and Paolo Traverso. "On-line Learning of Planning Domains from Sensor Data in PAL: Scaling up to Large State Spaces". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11862–69. http://dx.doi.org/10.1609/aaai.v35i13.17409.
Sapena, Oscar, Eva Onaindia, and Eliseo Marzal. "Automated feature extraction for planning state representation". Inteligencia Artificial 27, no. 74 (October 10, 2024): 227–42. http://dx.doi.org/10.4114/intartif.vol27iss74pp227-242.
O’Donnell, Ryan, and John Wright. "Learning and testing quantum states via probabilistic combinatorics and representation theory". Current Developments in Mathematics 2021, no. 1 (2021): 43–94. http://dx.doi.org/10.4310/cdm.2021.v2021.n1.a2.
Zhang, Hengyuan, Suyao Zhao, Ruiheng Liu, Wenlong Wang, Yixin Hong, and Runjiu Hu. "Automatic Traffic Anomaly Detection on the Road Network with Spatial-Temporal Graph Neural Network Representation Learning". Wireless Communications and Mobile Computing 2022 (June 20, 2022): 1–12. http://dx.doi.org/10.1155/2022/4222827.
Dayan, Peter. "Improving Generalization for Temporal Difference Learning: The Successor Representation". Neural Computation 5, no. 4 (July 1993): 613–24. http://dx.doi.org/10.1162/neco.1993.5.4.613.
Texto completoTesis sobre el tema "States representation learning"
Shi, Fangzhou. "Towards Molecule Generation with Heterogeneous States via Reinforcement Learning". Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/22335.
Texto completoCastanet, Nicolas. "Automatic state representation and goal selection in unsupervised reinforcement learning". Electronic Thesis or Diss., Sorbonne université, 2025. http://www.theses.fr/2025SORUS005.
Texto completoIn the past few years, Reinforcement Learning (RL) achieved tremendous success by training specialized agents owning the ability to drastically exceed human performance in complex games like Chess or Go, or in robotics applications. These agents often lack versatility, requiring human engineering to design their behavior for specific tasks with predefined reward signal, limiting their ability to handle new circumstances. This agent's specialization results in poor generalization capabilities, which make them vulnerable to small variations of external factors and adversarial attacks. A long term objective in artificial intelligence research is to move beyond today's specialized RL agents toward more generalist systems endowed with the capability to adapt in real time to unpredictable external factors and to new downstream tasks. This work aims in this direction, tackling unsupervised reinforcement learning problems, a framework where agents are not provided with external rewards, and thus must autonomously learn new tasks throughout their lifespan, guided by intrinsic motivations. The concept of intrinsic motivation arise from our understanding of humans ability to exhibit certain self-sufficient behaviors during their development, such as playing or having curiosity. This ability allows individuals to design and solve their own tasks, and to build inner physical and social representations of their environments, acquiring an open-ended set of skills throughout their lifespan as a result. This thesis is part of the research effort to incorporate these essential features in artificial agents, leveraging goal-conditioned reinforcement learning to design agents able to discover and master every feasible goals in complex environments. In our first contribution, we investigate autonomous intrinsic goal setting, as a versatile agent should be able to determine its own goals and the order in which to learn these goals to enhance its performances. 
By leveraging a learned model of the agent's current goal-reaching abilities, we show that we can shape a goal distribution of optimal difficulty, enabling the agent to sample goals in its Zone of Proximal Development (ZPD), a psychological concept referring to the frontier between what a learner knows and what it does not: the space of knowledge that is not yet mastered but has the potential to be acquired. We demonstrate that targeting the agent's ZPD results in a significant increase in performance on a great variety of goal-reaching tasks. Another core competence is to extract, from the observations coming from any available sensors, a relevant representation of what matters in the environment. We address this question in our second contribution, highlighting the difficulty of learning a correct representation of the environment in an online setting, where the agent acquires knowledge incrementally as it makes progress. In this context, recently achieved goals are outliers, as there are very few occurrences of such new skills in the agent's experience, making their representations brittle. We leverage the adversarial setting of Distributionally Robust Optimization so that the agent's representations of such outliers become reliable. We show that our method leads to a virtuous circle, as learning accurate representations for new goals fosters the exploration of the environment.
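The goal-selection idea in this abstract (sample goals whose predicted success probability is intermediate, i.e. in the agent's ZPD) can be sketched in a few lines. This is a minimal illustration under assumed details: the band thresholds, the uniform weighting scheme, and the function name are hypothetical, not the thesis's actual algorithm.

```python
import numpy as np

def zpd_goal_weights(success_prob, low=0.25, high=0.75):
    """Weight candidate goals by whether their predicted success
    probability falls in an intermediate band (a stand-in for the ZPD):
    goals neither already mastered nor currently out of reach."""
    p = np.asarray(success_prob, dtype=float)
    in_zpd = (p >= low) & (p <= high)
    # Uniform weight inside the band; near-zero elsewhere so trivial
    # and infeasible goals are rarely sampled.
    w = np.where(in_zpd, 1.0, 1e-3)
    return w / w.sum()

rng = np.random.default_rng(0)
candidate_goals = rng.uniform(-1, 1, size=(100, 2))   # e.g. 2-D target positions
success_prob = rng.uniform(0, 1, size=100)            # from a learned success model
weights = zpd_goal_weights(success_prob)
goal = candidate_goals[rng.choice(len(candidate_goals), p=weights)]
```

In the actual method, `success_prob` would come from a model trained online on the agent's own goal-reaching outcomes, so the sampled distribution shifts as the agent improves.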
Boots, Byron. "Spectral Approaches to Learning Predictive Representations". Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/131.
Nuzzo, Francesco. "Unsupervised state representation pretraining in Reinforcement Learning applied to Atari games". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288189.
Texto completoTillståndsrepresentationsinlärning handlar om att extrahera användbara egenskaper från de observationer som mottagits av en agent som interagerar med en miljö i förstärkningsinlärning. Dessa egenskaper gör det möjligt för agenten att dra nytta av den lågdimensionella och informativa representationen för att förbättra effektiviteten vid lösning av uppgifter. I det här arbetet studerar vi icke-väglett lärande i Atari-spel. Vi använder en RNN-arkitektur för inlärning av egenskaper som är beroende av observationssekvenser, och förtränar en kodararkitektur för enskild bild med olika metoder på slumpmässigt samlade bilder. Slutligen utvärderar vi empiriskt hur förtränade tillståndsrepresentationer fungerar jämfört med en slumpmässigt initierad arkitektur. För detta ändamål låter vi en RL-agent träna på 22 olika Atari 2600-spel som initierar kodaren antingen slumpmässigt eller med en av följande metoder utan tillsyn: VAE, CPC och ST-DIM. Lovande resultat uppnås i de flesta spel när ST-DIM väljs som metod för träning, medan VAE ofta fungerar sämre än en slumpmässig initialisering.
Sadeghi, Mohsen. "Representation and interaction of sensorimotor learning processes". Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278611.
Texto completoGabriel, Florence. "Mental representations of fractions: development, stable state, learning difficulties and intervention". Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209933.
Texto completoBased on some recent research questions and intense debates in the literature, a first behavioural study examined the mental representations of the magnitude of fractions in educated adults. Behavioural observations from adults can indeed provide a first clue to explain the paradox raised by fractions. Contrary perhaps to most educated adults’ intuition, finding the value of a given fraction is not an easy operation. Fractions are complex symbols, and there is an on-going debate in the literature about how their magnitude (i.e. value) is processed. In a first study, we asked adult volunteers to decide as quickly as possible whether two fractions represent the same magnitude or not. Equivalent fractions (e.g. 1/4 and 2/8) were identified as representing the same number only about half of the time. In another experiment, adults were also asked to decide which of two fractions was larger. This paradigm offered different results, suggesting that participants relied on both the global magnitude of the fraction and the magnitude of the components. Our results showed that fraction processing depends on experimental conditions. Adults appear to use the global magnitude only in restricted circumstances, mostly with easy and familiar fractions.
In another study, we investigated the development of the mental representations of the magnitude of fractions. Previous studies in adults showed that fraction processing can be either based on the magnitude of the numerators and denominators or based on the global magnitude of fractions and the magnitude of their components. The type of processing depends on experimental conditions. In this experiment, 5th, 6th, 7th-graders, and adults were tested with two paradigms. First, they performed a same/different task. Second, they carried out a numerical comparison task in which they had to decide which of two fractions was larger. Results showed that 5th-graders do not rely on the representations of the global magnitude of fractions in the Numerical Comparison task, but those representations develop from grade 6 until grade 7. In the Same/Different task, participants only relied on componential strategies. From grade 6 on, pupils apply the same heuristics as adults in fraction magnitude comparison tasks. Moreover, we have shown that correlations between global distance effect and children’s general fraction achievement were significant.
Fractions are well known to represent a stumbling block for primary school children. In a third study, we tried to identify the difficulties encountered by primary school pupils. We observed that most 4th and 5th-graders had only a very limited notion of the meaning of fractions, basically referring to pieces of cakes or pizzas. The fraction as a notation for numbers appeared particularly hard to grasp.
Building upon these results, we designed an intervention programme. The intervention “From Pies to Numbers” aimed at improving children’s understanding of fractions as numbers. The intervention was based on various games in which children had to estimate, compare, and combine fractions represented either symbolically or as figures. 20 game sessions distributed over 3 months led to 15-20% improvement in tests assessing children's capacity to estimate and compare fractions; conversely, children in the control group who received traditional lessons improved more in procedural skills such as simplification of fractions and arithmetic operations with fractions. Thus, a short classroom intervention inducing children to play with fractions improved their conceptual understanding.
The results are discussed in light of recent research on the mental representation of the magnitude of fractions and educational theories. The importance of multidisciplinary approaches in psychology and education was also discussed.
In sum, by combining behavioural experiments in adults and children with intervention studies, we hope to have improved the understanding of how the brain processes mathematical symbols, while helping teachers get a better grasp of pupils’ difficulties and develop classroom activities that suit the needs of learners.
Doctorate in Psychological and Educational Sciences
Merckling, Astrid. "Unsupervised pretraining of state representations in a rewardless environment". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS141.
This thesis seeks to extend the capabilities of state representation learning (SRL) to help scale deep reinforcement learning (DRL) algorithms to continuous control tasks with high-dimensional sensory observations (such as images). SRL improves the performance of DRL by providing it with better inputs than the embeddings learned from scratch with end-to-end strategies. Specifically, this thesis addresses the problem of performing state estimation through deep unsupervised pretraining of state representations without reward. These representations must satisfy certain properties to allow the correct application of bootstrapping and other decision-making mechanisms common to supervised learning, such as being low-dimensional and guaranteeing the local consistency and topology (or connectivity) of the environment, which we seek to achieve through the models pretrained with the two SRL algorithms proposed in this thesis.
Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks". BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.
Texto completoHautot, Julien. "Représentation à base radiale pour l'apprentissage par renforcement visuel". Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0093.
This thesis work falls within the context of Reinforcement Learning (RL) from image data. Unlike supervised learning, which enables performing various tasks such as classification, regression, or segmentation from an annotated database, RL allows learning without a database, through interactions with an environment. In these methods, an agent, such as a robot, performs different actions to explore its environment and gather training data. Training such an agent involves trial and error; the agent is penalized when it fails at its task and rewarded when it succeeds. The goal for the agent is to improve its behavior to obtain the most long-term reward. We focus on visual feature extraction in RL scenarios using first-person-view images. The use of visual data often involves deep convolutional networks that work directly on images. However, these networks have significant computational complexity, lack interpretability, and sometimes suffer from instability. To overcome these difficulties, we investigated the development of a network based on radial basis functions, which enable sparse and localized activations in the input space. Radial basis function networks (RBFNs) peaked in the 1990s but were later supplanted by convolutional networks because of their high computational cost on images. In this thesis, we developed a visual feature extractor inspired by RBFNs that reduces the computational cost on images. We used our network to solve first-person visual tasks and compared its results with various state-of-the-art methods, including end-to-end learning methods, state representation learning methods, and extreme learning machine methods. Different scenarios were tested in the VizDoom simulator and the Pybullet robotics physics simulator.
In addition to comparing the rewards obtained after learning, we conducted various tests on noise robustness, on the parameter generation of our network, and on task transfer to reality. The proposed network achieves the best performance in reinforcement learning on the tested scenarios while being easier to use and interpret. Additionally, our network is robust to various types of noise, paving the way for the effective transfer of knowledge acquired in simulation to reality.
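The radial-basis extractor described above can be sketched independently of the thesis's exact architecture: each unit fires strongly only when the (flattened) observation lies near its center, which yields the sparse, localized activations the abstract mentions. A toy sketch with made-up sizes; the Gaussian form, `centers`, and `widths` are assumptions for illustration, not the actual network.

```python
import numpy as np

def rbf_features(obs, centers, widths):
    """Map a flattened observation to one Gaussian radial basis
    activation per center: near-1 close to a center, near-0 far away,
    giving sparse and localized responses in the input space."""
    d2 = ((obs[None, :] - centers) ** 2).sum(axis=1)  # squared distance to each center
    return np.exp(-d2 / (2.0 * widths ** 2))

rng = np.random.default_rng(0)
obs = rng.uniform(0, 1, size=64)            # e.g. a tiny flattened grayscale patch
centers = rng.uniform(0, 1, size=(16, 64))  # 16 radial basis units
widths = np.full(16, 0.5)
phi = rbf_features(obs, centers, widths)    # 16-D state representation
```

In the RL setting, `phi` would replace the convolutional embedding as the input to the policy, which is what makes the representation cheap to compute and easy to inspect unit by unit.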
Ford, Shelton J. "The effect of graphing calculators and a three-core representation curriculum on college students' learning of exponential and logarithmic functions". 2008. http://www.lib.ncsu.edu/theses/available/etd-11072008-135009/unrestricted/etd.pdf.
Books on the topic "States representation learning"
McBride, Kecia Driver, ed. Visual media and the humanities: A pedagogy of representation. Knoxville: University of Tennessee Press, 2004.
Alden, John, Alexander H. Cohen, and Jonathan J. Ring. Gaming the System: Nine Games to Teach American Government Through Active Learning. Taylor & Francis Group, 2019.
Burge, Tyler. Perception: First Form of Mind. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198871002.001.0001.
Goldman, Alvin I. Theory of Mind. Edited by Eric Margolis, Richard Samuels, and Stephen P. Stich. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780195309799.013.0017.
Boden, Margaret A. 2. General intelligence as the Holy Grail. Oxford University Press, 2018. http://dx.doi.org/10.1093/actrade/9780199602919.003.0002.
Kenny, Neil, ed. Literature, Learning, and Social Hierarchy in Early Modern Europe. British Academy, 2022. http://dx.doi.org/10.5871/bacad/9780197267332.001.0001.
Fox, Roy F. MediaSpeak. Praeger Publishers, 2000. http://dx.doi.org/10.5040/9798400684258.
Caselli, Tommaso, Eduard Hovy, Martha Palmer, and Piek Vossen, eds. Computational Analysis of Storylines. Cambridge University Press, 2021. http://dx.doi.org/10.1017/9781108854221.
Haney, Craig, and Shirin Bakhshay. Contexts of Ill-Treatment. Edited by Metin Başoğlu. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199374625.003.0006.
Book chapters on the topic "States representation learning"
Balagopalan, Sarada. "Children’s Participation in Their Right to Education: Learning from the Delhi High Court Cases, 1998–2001". In The Politics of Children’s Rights and Representation, 81–103. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-04480-9_4.
Bouajjani, Ahmed, Wael-Amine Boutglay, and Peter Habermehl. "Data-driven Numerical Invariant Synthesis with Automatic Generation of Attributes". In Computer Aided Verification, 282–303. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13185-1_14.
Schestakov, Stefan, Paul Heinemeyer, and Elena Demidova. "Road Network Representation Learning with Vehicle Trajectories". In Advances in Knowledge Discovery and Data Mining, 57–69. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33383-5_5.
Merckling, Astrid, Alexandre Coninx, Loic Cressot, Stephane Doncieux, and Nicolas Perrin. "State Representation Learning from Demonstration". In Machine Learning, Optimization, and Data Science, 304–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_26.
Wingate, David. "Predictively Defined Representations of State". In Adaptation, Learning, and Optimization, 415–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27645-3_13.
Stoffl, Lucas, Andy Bonnetto, Stéphane d’Ascoli, and Alexander Mathis. "Elucidating the Hierarchical Nature of Behavior with Masked Autoencoders". In Lecture Notes in Computer Science, 106–25. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73039-9_7.
Steccanella, Lorenzo, and Anders Jonsson. "State Representation Learning for Goal-Conditioned Reinforcement Learning". In Machine Learning and Knowledge Discovery in Databases, 84–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_6.
Sychev, Oleg. "Visualizing Program State as a Clustered Graph for Learning Programming". In Diagrammatic Representation and Inference, 404–7. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_41.
Howard, Eric, Iftekher S. Chowdhury, and Ian Nagle. "Matrix Product State Representations for Machine Learning". In Artificial Intelligence in Intelligent Systems, 455–68. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77445-5_43.
Ding, Ning, Weize Chen, Zhengyan Zhang, Shengding Hu, Ganqu Cui, Yuan Yao, Yujia Qin et al. "Ten Key Problems of Pre-trained Models: An Outlook of Representation Learning". In Representation Learning for Natural Language Processing, 491–521. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_14.
Conference papers on the topic "States representation learning"
Drexler, Dominik, Simon Ståhlberg, Blai Bonet, and Hector Geffner. "Symmetries and Expressive Requirements for Learning General Policies". In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}, 845–55. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/79.
Nikolich, Aleksandr, Konstantin Korolev, Sergei Bratchikov, Igor Kiselev, and Artem Shelmanov. "Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for Russian". In Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024), 189–99. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.mrl-1.15.
Li, Ziyi, Xiangtao Hu, Yongle Zhang, and Fujie Zhou. "Task-Oriented Reinforcement Learning with Interest State Representation". In 2024 International Conference on Advanced Robotics and Mechatronics (ICARM), 721–28. IEEE, 2024. http://dx.doi.org/10.1109/icarm62033.2024.10715850.
Balyo, Tomáš, Martin Suda, Lukáš Chrpa, Dominik Šafránek, Stephan Gocht, Filip Dvořák, Roman Barták, and G. Michael Youngblood. "Planning Domain Model Acquisition from State Traces without Action Parameters". In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}, 812–22. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/76.
Rodriguez, Ivan D., Blai Bonet, Javier Romero, and Hector Geffner. "Learning First-Order Representations for Planning from Black Box States: New Results". In 18th International Conference on Principles of Knowledge Representation and Reasoning {KR-2021}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/kr.2021/51.
Li, Chao, Yujing Hu, Shangdong Yang, Tangjie Lv, Changjie Fan, Wenbin Li, Chongjie Zhang, and Yang Gao. "STAR: Spatio-Temporal State Compression for Multi-Agent Tasks with Rich Observations". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/14.
Li, Zhengwei, Zhenyang Lin, Yurou Chen, and Zhiyong Liu. "Efficient Offline Meta-Reinforcement Learning via Robust Task Representations and Adaptive Policy Generation". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/500.
Ståhlberg, Simon, Blai Bonet, and Hector Geffner. "Learning General Policies with Policy Gradient Methods". In 20th International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/kr.2023/63.
De Giacomo, Giuseppe, Marco Favorito, Luca Iocchi, Fabio Patrizi, and Alessandro Ronca. "Temporal Logic Monitoring Rewards via Transducers". In 17th International Conference on Principles of Knowledge Representation and Reasoning {KR-2020}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/kr.2020/89.
Li, Huihui, and Lei Wei. "General purpose representation and association machine: Part 4: Improve learning for three-states and multi-tasks". In IEEE SOUTHEASTCON 2013. IEEE, 2013. http://dx.doi.org/10.1109/secon.2013.6567485.
Reports on the topic "States representation learning"
Singh, Abhijeet, Mauricio Romero, and Karthik Muralidharan. COVID-19 Learning Loss and Recovery: Panel Data Evidence from India. Research on Improving Systems of Education (RISE), September 2022. http://dx.doi.org/10.35489/bsg-risewp_2022/112.
Lalisse, Matthias. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach. Institute for New Economic Thinking Working Paper Series, February 2022. http://dx.doi.org/10.36687/inetwp178.
Babu M.G., Sarath, Debjani Ghosh, Jaideep Gupte, Md Asif Raza, Eric Kasper, and Priyanka Mehra. Kerala’s Grass-roots-led Pandemic Response: Deciphering the Strength of Decentralisation. Institute of Development Studies (IDS), June 2021. http://dx.doi.org/10.19088/ids.2021.049.
Tarasenko, Rostyslav O., Svitlana M. Amelina, Yuliya M. Kazhan, and Olga V. Bondarenko. The use of AR elements in the study of foreign languages at the university. CEUR Workshop Proceedings, November 2020. http://dx.doi.org/10.31812/123456789/4421.
Goodwin, Sarah, Yigal Attali, Geoffrey LaFlair, Yena Park, Andrew Runge, Alina von Davier, and Kevin Yancey. Duolingo English Test - Writing Construct. Duolingo, March 2023. http://dx.doi.org/10.46999/arxn5612.
Ferdaus, Md Meftahul, Mahdi Abdelguerfi, Kendall Niles, Ken Pathak, and Joe Tom. Widened attention-enhanced atrous convolutional network for efficient embedded vision applications under resource constraints. Engineer Research and Development Center (U.S.), November 2024. http://dx.doi.org/10.21079/11681/49459.
Iatsyshyn, Anna V., Valeriia O. Kovach, Yevhen O. Romanenko, Iryna I. Deinega, Andrii V. Iatsyshyn, Oleksandr O. Popov, Yulii G. Kutsan, Volodymyr O. Artemchuk, Oleksandr Yu Burov, and Svitlana H. Lytvynova. Application of augmented reality technologies for preparation of specialists of new technological era. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3749.
Texto completoState Legislator Representation: A Data-Driven Learning Guide. Ann Arbor, MI: Inter-university Consortium for Political and Social Research, abril de 2009. http://dx.doi.org/10.3886/stateleg.
Texto completo