A selection of scholarly literature on the topic "States representation learning"
Format your citation in APA, MLA, Chicago, Harvard, and other styles
Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "States representation learning".
Next to each work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, if these are available in the metadata.
Journal articles on the topic "States representation learning"
Konidaris, George, Leslie Pack Kaelbling, and Tomas Lozano-Perez. "From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning." Journal of Artificial Intelligence Research 61 (January 31, 2018): 215–89. http://dx.doi.org/10.1613/jair.5575.
Scarpetta, Silvia, Zhaoping Li, and John Hertz. "Learning in an Oscillatory Cortical Model." Fractals 11, supp01 (February 2003): 291–300. http://dx.doi.org/10.1142/s0218348x03001951.
Zhu, Zheng-Mao, Shengyi Jiang, Yu-Ren Liu, Yang Yu, and Kun Zhang. "Invariant Action Effect Model for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9260–68. http://dx.doi.org/10.1609/aaai.v36i8.20913.
Yue, Yang, Bingyi Kang, Zhongwen Xu, Gao Huang, and Shuicheng Yan. "Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11069–77. http://dx.doi.org/10.1609/aaai.v37i9.26311.
Chornozhuk, S. "The New Geometric “State-Action” Space Representation for Q-Learning Algorithm for Protein Structure Folding Problem." Cybernetics and Computer Technologies, no. 3 (October 27, 2020): 59–73. http://dx.doi.org/10.34229/2707-451x.20.3.6.
Lamanna, Leonardo, Alfonso Emilio Gerevini, Alessandro Saetti, Luciano Serafini, and Paolo Traverso. "On-line Learning of Planning Domains from Sensor Data in PAL: Scaling up to Large State Spaces." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11862–69. http://dx.doi.org/10.1609/aaai.v35i13.17409.
Sapena, Oscar, Eva Onaindia, and Eliseo Marzal. "Automated feature extraction for planning state representation." Inteligencia Artificial 27, no. 74 (October 10, 2024): 227–42. http://dx.doi.org/10.4114/intartif.vol27iss74pp227-242.
O’Donnell, Ryan, and John Wright. "Learning and testing quantum states via probabilistic combinatorics and representation theory." Current Developments in Mathematics 2021, no. 1 (2021): 43–94. http://dx.doi.org/10.4310/cdm.2021.v2021.n1.a2.
Zhang, Hengyuan, Suyao Zhao, Ruiheng Liu, Wenlong Wang, Yixin Hong, and Runjiu Hu. "Automatic Traffic Anomaly Detection on the Road Network with Spatial-Temporal Graph Neural Network Representation Learning." Wireless Communications and Mobile Computing 2022 (June 20, 2022): 1–12. http://dx.doi.org/10.1155/2022/4222827.
Dayan, Peter. "Improving Generalization for Temporal Difference Learning: The Successor Representation." Neural Computation 5, no. 4 (July 1993): 613–24. http://dx.doi.org/10.1162/neco.1993.5.4.613.
Повний текст джерелаДисертації з теми "States representation learning"
Shi, Fangzhou. "Towards Molecule Generation with Heterogeneous States via Reinforcement Learning." Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/22335.
Повний текст джерелаCastanet, Nicolas. "Automatic state representation and goal selection in unsupervised reinforcement learning." Electronic Thesis or Diss., Sorbonne université, 2025. http://www.theses.fr/2025SORUS005.
In the past few years, reinforcement learning (RL) has achieved tremendous success by training specialized agents able to drastically exceed human performance in complex games such as chess and Go, and in robotics applications. These agents often lack versatility: their behavior must be engineered for specific tasks with a predefined reward signal, which limits their ability to handle new circumstances. This specialization results in poor generalization, making agents vulnerable to small variations in external factors and to adversarial attacks. A long-term objective of artificial intelligence research is to move beyond today's specialized RL agents toward more generalist systems able to adapt in real time to unpredictable external factors and to new downstream tasks. This work moves in that direction by tackling unsupervised reinforcement learning, a framework in which agents receive no external reward and must therefore autonomously learn new tasks throughout their lifespan, guided by intrinsic motivation. The concept of intrinsic motivation arises from our understanding of humans' ability to exhibit self-sufficient behaviors during development, such as play and curiosity. This ability allows individuals to design and solve their own tasks and to build inner physical and social representations of their environment, thereby acquiring an open-ended set of skills over their lifespan. This thesis is part of the research effort to incorporate these essential features into artificial agents, leveraging goal-conditioned reinforcement learning to design agents able to discover and master every feasible goal in complex environments. In our first contribution, we investigate autonomous intrinsic goal setting, since a versatile agent should be able to determine its own goals and the order in which to learn them so as to improve its performance.
By leveraging a learned model of the agent's current goal-reaching abilities, we show that we can shape an optimal-difficulty goal distribution, enabling us to sample goals in the agent's Zone of Proximal Development (ZPD), a psychological concept referring to the frontier between what a learner knows and what it does not: the space of knowledge that is not yet mastered but has the potential to be acquired. We demonstrate that targeting the agent's ZPD yields a significant increase in performance across a wide variety of goal-reaching tasks. Another core competence is extracting a relevant representation of what matters in the environment from the observations produced by whatever sensors are available. We address this question in our second contribution by highlighting the difficulty of learning a correct representation of the environment in an online setting, where the agent acquires knowledge incrementally as it makes progress. In this context, recently achieved goals are outliers, since there are very few occurrences of each new skill in the agent's experience, making their representations brittle. We leverage the adversarial setting of distributionally robust optimization so that the agent's representations of such outliers remain reliable. We show that our method leads to a virtuous circle: learning accurate representations of new goals fosters further exploration of the environment.
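The goal-selection idea in the abstract above can be illustrated with a minimal sketch. This is not the thesis's actual algorithm: the `success_prob` callable stands in for the learned model of the agent's goal-reaching abilities, and the probability band used as a proxy for the ZPD is an illustrative assumption.

```python
import random

def sample_zpd_goals(candidate_goals, success_prob, low=0.3, high=0.7, n=16):
    """Sample training goals of intermediate difficulty (a ZPD stand-in).

    candidate_goals: list of goal descriptors.
    success_prob: callable mapping a goal to the agent's estimated
        probability of reaching it (the learned ability model).
    Goals that are almost always reached (too easy) or almost never
    reached (too hard) are filtered out before sampling.
    """
    zpd = [g for g in candidate_goals if low <= success_prob(g) <= high]
    if not zpd:  # fall back to uniform sampling when the band is empty
        zpd = list(candidate_goals)
    return [random.choice(zpd) for _ in range(n)]
```

In practice the ability model would be refit periodically as the agent improves, so the sampled frontier moves with the agent's competence.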
Boots, Byron. "Spectral Approaches to Learning Predictive Representations." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/131.
Повний текст джерелаNuzzo, Francesco. "Unsupervised state representation pretraining in Reinforcement Learning applied to Atari games." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288189.
State representation learning is about extracting useful features from the observations received by an agent interacting with an environment in reinforcement learning. These features allow the agent to exploit a low-dimensional, informative representation to solve tasks more efficiently. In this work, we study unsupervised learning in Atari games. We use an RNN architecture to learn features that depend on sequences of observations, and we pretrain a single-frame encoder architecture with different methods on randomly collected frames. Finally, we empirically evaluate how pretrained state representations perform compared with a randomly initialized architecture. To this end, we let an RL agent train on 22 different Atari 2600 games, initializing the encoder either randomly or with one of the following unsupervised methods: VAE, CPC, and ST-DIM. Promising results are obtained in most games when ST-DIM is chosen as the pretraining method, while VAE often performs worse than random initialization.
Sadeghi, Mohsen. "Representation and interaction of sensorimotor learning processes." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278611.
Повний текст джерелаGabriel, Florence. "Mental representations of fractions: development, stable state, learning difficulties and intervention." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209933.
Based on recent research questions and intense debates in the literature, a first behavioural study examined the mental representations of the magnitude of fractions in educated adults. Behavioural observations from adults can indeed provide a first clue to explain the paradox raised by fractions. Contrary perhaps to most educated adults' intuition, finding the value of a given fraction is not an easy operation. Fractions are complex symbols, and there is an ongoing debate in the literature about how their magnitude (i.e. value) is processed. In a first study, we asked adult volunteers to decide as quickly as possible whether two fractions represent the same magnitude or not. Equivalent fractions (e.g. 1/4 and 2/8) were identified as representing the same number only about half of the time. In another experiment, adults were also asked to decide which of two fractions was larger. This paradigm yielded different results, suggesting that participants relied on both the global magnitude of the fraction and the magnitude of its components. Our results showed that fraction processing depends on experimental conditions. Adults appear to use the global magnitude only in restricted circumstances, mostly with easy and familiar fractions.
In another study, we investigated the development of the mental representations of the magnitude of fractions. Previous studies in adults showed that fraction processing can be based either on the magnitude of the numerators and denominators or on the global magnitude of fractions together with the magnitude of their components; the type of processing depends on experimental conditions. In this experiment, 5th-, 6th-, and 7th-graders, and adults were tested with two paradigms. First, they performed a same/different task. Second, they carried out a numerical comparison task in which they had to decide which of two fractions was larger. Results showed that 5th-graders do not rely on representations of the global magnitude of fractions in the numerical comparison task, but those representations develop from grade 6 until grade 7. In the same/different task, participants relied only on componential strategies. From grade 6 on, pupils apply the same heuristics as adults in fraction magnitude comparison tasks. Moreover, we showed that correlations between the global distance effect and children's general fraction achievement were significant.
Fractions are well known to represent a stumbling block for primary school children. In a third study, we tried to identify the difficulties encountered by primary school pupils. We observed that most 4th and 5th-graders had only a very limited notion of the meaning of fractions, basically referring to pieces of cakes or pizzas. The fraction as a notation for numbers appeared particularly hard to grasp.
Building upon these results, we designed an intervention programme. The intervention “From Pies to Numbers” aimed at improving children’s understanding of fractions as numbers. The intervention was based on various games in which children had to estimate, compare, and combine fractions represented either symbolically or as figures. 20 game sessions distributed over 3 months led to 15-20% improvement in tests assessing children's capacity to estimate and compare fractions; conversely, children in the control group who received traditional lessons improved more in procedural skills such as simplification of fractions and arithmetic operations with fractions. Thus, a short classroom intervention inducing children to play with fractions improved their conceptual understanding.
The results are discussed in light of recent research on the mental representation of the magnitude of fractions and educational theories. The importance of multidisciplinary approaches in psychology and education was also discussed.
In sum, by combining behavioural experiments in adults and children with intervention studies, we hope to have improved the understanding of how the brain processes mathematical symbols, while helping teachers get a better grasp of pupils' difficulties and develop classroom activities that suit the needs of learners.
Doctorate in Psychological and Educational Sciences
Merckling, Astrid. "Unsupervised pretraining of state representations in a rewardless environment." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS141.
This thesis seeks to extend the capabilities of state representation learning (SRL) to help scale deep reinforcement learning (DRL) algorithms to continuous control tasks with high-dimensional sensory observations (such as images). SRL improves the performance of DRL by providing it with better inputs than the embeddings learned from scratch with end-to-end strategies. Specifically, this thesis addresses the problem of state estimation through deep unsupervised pretraining of state representations without reward. These representations must satisfy certain properties to allow the correct application of bootstrapping and other decision-making mechanisms common to supervised learning, such as being low-dimensional and preserving the local consistency and topology (or connectivity) of the environment, which we seek to achieve through models pretrained with the two SRL algorithms proposed in this thesis.
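As a rough illustration of what unsupervised pretraining of a low-dimensional state representation involves (the two SRL algorithms proposed in the thesis are not reproduced here), a linear autoencoder trained by gradient descent compresses observations into a small code without any reward signal. All names and hyperparameters below are illustrative.

```python
import numpy as np

def pretrain_linear_encoder(obs, dim=2, lr=0.01, epochs=200, seed=0):
    """Minimal SRL-style sketch: fit a linear autoencoder so that the
    encoder W maps high-dimensional observations (n_samples, n_features)
    to a low-dimensional state code. Returns (W, loss_history)."""
    rng = np.random.default_rng(seed)
    n, d = obs.shape
    W = rng.normal(scale=0.1, size=(d, dim))  # encoder weights
    V = rng.normal(scale=0.1, size=(dim, d))  # decoder weights
    losses = []
    for _ in range(epochs):
        z = obs @ W              # low-dimensional state codes
        err = z @ V - obs        # reconstruction error
        losses.append(float((err ** 2).mean()))
        V -= lr * z.T @ err / n            # gradient step on the decoder
        W -= lr * obs.T @ (err @ V.T) / n  # gradient step on the encoder
    return W, losses
```

A DRL agent would then consume the codes `obs @ W` instead of raw observations; real SRL methods add further constraints (e.g. transition consistency) beyond plain reconstruction.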
Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.
Повний текст джерелаHautot, Julien. "Représentation à base radiale pour l'apprentissage par renforcement visuel." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0093.
This thesis falls within the context of reinforcement learning (RL) from image data. Unlike supervised learning, which enables tasks such as classification, regression, or segmentation to be performed from an annotated database, RL allows learning without a database, through interactions with an environment. In these methods, an agent such as a robot performs different actions to explore its environment and gather training data. Training such an agent involves trial and error: the agent is penalized when it fails at its task and rewarded when it succeeds, and its goal is to improve its behavior so as to obtain the most long-term reward. We focus on visual feature extraction in RL scenarios using first-person-view images. The use of visual data often involves deep convolutional networks that work directly on images. However, these networks have significant computational complexity, lack interpretability, and sometimes suffer from instability. To overcome these difficulties, we investigated the development of a network based on radial basis functions, which enable sparse and localized activations in the input space. Radial basis function networks (RBFNs) peaked in the 1990s but were later supplanted by convolutional networks because of their high computational cost on images. In this thesis, we developed a visual feature extractor inspired by RBFNs that reduces the computational cost on images. We used our network to solve first-person visual tasks and compared its results with various state-of-the-art methods, including end-to-end learning methods, state representation learning methods, and extreme learning machine methods. Different scenarios were tested in the VizDoom simulator and the PyBullet robotics physics simulator.
In addition to comparing the rewards obtained after learning, we conducted various tests of noise robustness, of the parameter generation of our network, and of task transfer to reality. The proposed network achieves the best performance in reinforcement learning on the tested scenarios while being easier to use and interpret. Additionally, our network is robust to various types of noise, paving the way for the effective transfer of knowledge acquired in simulation to reality.
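The sparse, localized activations mentioned above can be sketched generically: each radial-basis unit responds according to how close the input is to a reference pattern. This is not the thesis's actual extractor; the centers and widths below are illustrative assumptions.

```python
import numpy as np

def rbf_features(image, centers, widths):
    """Generic radial-basis feature layer for a flattened observation.

    image: flattened input, shape (d,)
    centers: reference patterns, shape (k, d)
    widths: per-unit Gaussian widths, shape (k,)
    Returns k activations in (0, 1]; a unit fires strongly only when the
    input lies near its center, giving sparse, localized responses.
    """
    sq_dist = ((centers - image) ** 2).sum(axis=1)
    return np.exp(-sq_dist / (2.0 * widths ** 2))
```

An RL policy would then be trained on these k activations rather than on raw pixels, which is where the computational savings over deep convolutional extractors come from.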
Ford, Shelton J. "The effect of graphing calculators and a three-core representation curriculum on college students' learning of exponential and logarithmic functions." 2008. http://www.lib.ncsu.edu/theses/available/etd-11072008-135009/unrestricted/etd.pdf.
Books on the topic "States representation learning"
McBride, Kecia Driver, ed. Visual media and the humanities: A pedagogy of representation. Knoxville: University of Tennessee Press, 2004.
Alden, John, Alexander H. Cohen, and Jonathan J. Ring. Gaming the System: Nine Games to Teach American Government Through Active Learning. Taylor & Francis Group, 2019.
Burge, Tyler. Perception: First Form of Mind. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198871002.001.0001.
Goldman, Alvin I. Theory of Mind. Edited by Eric Margolis, Richard Samuels, and Stephen P. Stich. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780195309799.013.0017.
Boden, Margaret A. 2. General intelligence as the Holy Grail. Oxford University Press, 2018. http://dx.doi.org/10.1093/actrade/9780199602919.003.0002.
Kenny, Neil, ed. Literature, Learning, and Social Hierarchy in Early Modern Europe. British Academy, 2022. http://dx.doi.org/10.5871/bacad/9780197267332.001.0001.
Fox, Roy F. MediaSpeak. Praeger Publishers, 2000. http://dx.doi.org/10.5040/9798400684258.
Caselli, Tommaso, Eduard Hovy, Martha Palmer, and Piek Vossen, eds. Computational Analysis of Storylines. Cambridge University Press, 2021. http://dx.doi.org/10.1017/9781108854221.
Haney, Craig, and Shirin Bakhshay. Contexts of Ill-Treatment. Edited by Metin Başoğlu. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199374625.003.0006.
Book chapters on the topic "States representation learning"
Balagopalan, Sarada. "Children’s Participation in Their Right to Education: Learning from the Delhi High Court Cases, 1998–2001." In The Politics of Children’s Rights and Representation, 81–103. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-04480-9_4.
Bouajjani, Ahmed, Wael-Amine Boutglay, and Peter Habermehl. "Data-driven Numerical Invariant Synthesis with Automatic Generation of Attributes." In Computer Aided Verification, 282–303. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13185-1_14.
Schestakov, Stefan, Paul Heinemeyer, and Elena Demidova. "Road Network Representation Learning with Vehicle Trajectories." In Advances in Knowledge Discovery and Data Mining, 57–69. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33383-5_5.
Merckling, Astrid, Alexandre Coninx, Loic Cressot, Stephane Doncieux, and Nicolas Perrin. "State Representation Learning from Demonstration." In Machine Learning, Optimization, and Data Science, 304–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_26.
Wingate, David. "Predictively Defined Representations of State." In Adaptation, Learning, and Optimization, 415–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27645-3_13.
Stoffl, Lucas, Andy Bonnetto, Stéphane d’Ascoli, and Alexander Mathis. "Elucidating the Hierarchical Nature of Behavior with Masked Autoencoders." In Lecture Notes in Computer Science, 106–25. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73039-9_7.
Steccanella, Lorenzo, and Anders Jonsson. "State Representation Learning for Goal-Conditioned Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases, 84–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_6.
Sychev, Oleg. "Visualizing Program State as a Clustered Graph for Learning Programming." In Diagrammatic Representation and Inference, 404–7. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_41.
Howard, Eric, Iftekher S. Chowdhury, and Ian Nagle. "Matrix Product State Representations for Machine Learning." In Artificial Intelligence in Intelligent Systems, 455–68. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77445-5_43.
Ding, Ning, Weize Chen, Zhengyan Zhang, Shengding Hu, Ganqu Cui, Yuan Yao, Yujia Qin, et al. "Ten Key Problems of Pre-trained Models: An Outlook of Representation Learning." In Representation Learning for Natural Language Processing, 491–521. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_14.
Conference papers on the topic "States representation learning"
Drexler, Dominik, Simon Ståhlberg, Blai Bonet, and Hector Geffner. "Symmetries and Expressive Requirements for Learning General Policies." In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}, 845–55. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/79.
Nikolich, Aleksandr, Konstantin Korolev, Sergei Bratchikov, Igor Kiselev, and Artem Shelmanov. "Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for Russian." In Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024), 189–99. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.mrl-1.15.
Li, Ziyi, Xiangtao Hu, Yongle Zhang, and Fujie Zhou. "Task-Oriented Reinforcement Learning with Interest State Representation." In 2024 International Conference on Advanced Robotics and Mechatronics (ICARM), 721–28. IEEE, 2024. http://dx.doi.org/10.1109/icarm62033.2024.10715850.
Balyo, Tomáš, Martin Suda, Lukáš Chrpa, Dominik Šafránek, Stephan Gocht, Filip Dvořák, Roman Barták, and G. Michael Youngblood. "Planning Domain Model Acquisition from State Traces without Action Parameters." In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}, 812–22. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/76.
Rodriguez, Ivan D., Blai Bonet, Javier Romero, and Hector Geffner. "Learning First-Order Representations for Planning from Black Box States: New Results." In 18th International Conference on Principles of Knowledge Representation and Reasoning {KR-2021}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/kr.2021/51.
Li, Chao, Yujing Hu, Shangdong Yang, Tangjie Lv, Changjie Fan, Wenbin Li, Chongjie Zhang, and Yang Gao. "STAR: Spatio-Temporal State Compression for Multi-Agent Tasks with Rich Observations." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/14.
Li, Zhengwei, Zhenyang Lin, Yurou Chen, and Zhiyong Liu. "Efficient Offline Meta-Reinforcement Learning via Robust Task Representations and Adaptive Policy Generation." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/500.
Ståhlberg, Simon, Blai Bonet, and Hector Geffner. "Learning General Policies with Policy Gradient Methods." In 20th International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/kr.2023/63.
De Giacomo, Giuseppe, Marco Favorito, Luca Iocchi, Fabio Patrizi, and Alessandro Ronca. "Temporal Logic Monitoring Rewards via Transducers." In 17th International Conference on Principles of Knowledge Representation and Reasoning {KR-2020}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/kr.2020/89.
Li, Huihui, and Lei Wei. "General purpose representation and association machine: Part 4: Improve learning for three-states and multi-tasks." In IEEE SOUTHEASTCON 2013. IEEE, 2013. http://dx.doi.org/10.1109/secon.2013.6567485.
Reports of organizations on the topic "States representation learning"
Singh, Abhijeet, Mauricio Romero, and Karthik Muralidharan. COVID-19 Learning Loss and Recovery: Panel Data Evidence from India. Research on Improving Systems of Education (RISE), September 2022. http://dx.doi.org/10.35489/bsg-risewp_2022/112.
Lalisse, Matthias. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach. Institute for New Economic Thinking Working Paper Series, February 2022. http://dx.doi.org/10.36687/inetwp178.
Babu M.G., Sarath, Debjani Ghosh, Jaideep Gupte, Md Asif Raza, Eric Kasper, and Priyanka Mehra. Kerala’s Grass-roots-led Pandemic Response: Deciphering the Strength of Decentralisation. Institute of Development Studies (IDS), June 2021. http://dx.doi.org/10.19088/ids.2021.049.
Tarasenko, Rostyslav O., Svitlana M. Amelina, Yuliya M. Kazhan, and Olga V. Bondarenko. The use of AR elements in the study of foreign languages at the university. CEUR Workshop Proceedings, November 2020. http://dx.doi.org/10.31812/123456789/4421.
Goodwin, Sarah, Yigal Attali, Geoffrey LaFlair, Yena Park, Andrew Runge, Alina von Davier, and Kevin Yancey. Duolingo English Test - Writing Construct. Duolingo, March 2023. http://dx.doi.org/10.46999/arxn5612.
Ferdaus, Md Meftahul, Mahdi Abdelguerfi, Kendall Niles, Ken Pathak, and Joe Tom. Widened attention-enhanced atrous convolutional network for efficient embedded vision applications under resource constraints. Engineer Research and Development Center (U.S.), November 2024. http://dx.doi.org/10.21079/11681/49459.
Iatsyshyn, Anna V., Valeriia O. Kovach, Yevhen O. Romanenko, Iryna I. Deinega, Andrii V. Iatsyshyn, Oleksandr O. Popov, Yulii G. Kutsan, Volodymyr O. Artemchuk, Oleksandr Yu Burov, and Svitlana H. Lytvynova. Application of augmented reality technologies for preparation of specialists of new technological era. [s.n.], February 2020. http://dx.doi.org/10.31812/123456789/3749.
State Legislator Representation: A Data-Driven Learning Guide. Ann Arbor, MI: Inter-university Consortium for Political and Social Research, April 2009. http://dx.doi.org/10.3886/stateleg.