Scientific literature on the topic "State representation learning"

Create an accurate reference in the APA, MLA, Chicago, Harvard, and several other citation styles

Select a source type:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "State representation learning".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever that information is included in the metadata.

Journal articles on the topic "State representation learning"

1

Xu, Cai, Wei Zhao, Jinglong Zhao, Ziyu Guan, Yaming Yang, Long Chen, and Xiangyu Song. "Progressive Deep Multi-View Comprehensive Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10557–65. http://dx.doi.org/10.1609/aaai.v37i9.26254.

Abstract:
Multi-view Comprehensive Representation Learning (MCRL) aims to synthesize information from multiple views to learn comprehensive representations of data items. Prevalent deep MCRL methods typically concatenate synergistic view-specific representations or average aligned view-specific representations in the fusion stage. However, the performance of synergistic fusion methods inevitably degenerate or even fail when partial views are missing in real-world applications; the aligned based fusion methods usually cannot fully exploit the complementarity of multi-view data. To eliminate all these drawbacks, in this work we present a Progressive Deep Multi-view Fusion (PDMF) method. Considering the multi-view comprehensive representation should contain complete information and the view-specific data contain partial information, we deem that it is unstable to directly learn the mapping from partial information to complete information. Hence, PDMF employs a progressive learning strategy, which contains the pre-training and fine-tuning stages. In the pre-training stage, PDMF decodes the auxiliary comprehensive representation to the view-specific data. It also captures the consistency and complementarity by learning the relations between the dimensions of the auxiliary comprehensive representation and all views. In the fine-tuning stage, PDMF learns the mapping from the original data to the comprehensive representation with the help of the auxiliary comprehensive representation and relations. Experiments conducted on a synthetic toy dataset and 4 real-world datasets show that PDMF outperforms state-of-the-art baseline methods. The code is released at https://github.com/winterant/PDMF.
2

Yue, Yang, Bingyi Kang, Zhongwen Xu, Gao Huang, and Shuicheng Yan. "Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11069–77. http://dx.doi.org/10.1609/aaai.v37i9.26311.

Abstract:
Deep reinforcement learning (RL) algorithms suffer severe performance degradation when the interaction data is scarce, which limits their real-world application. Recently, visual representation learning has been shown to be effective and promising for boosting sample efficiency in RL. These methods usually rely on contrastive learning and data augmentation to train a transition model, which is different from how the model is used in RL---performing value-based planning. Accordingly, the learned representation by these visual methods may be good for recognition but not optimal for estimating state value and solving the decision problem. To address this issue, we propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making. More specifically, VCR trains a model to predict the future state (also referred to as the "imagined state'') based on the current one and a sequence of actions. Instead of aligning this imagined state with a real state returned by the environment, VCR applies a Q value head on both of the states and obtains two distributions of action values. Then a distance is computed and minimized to force the imagined state to produce a similar action value prediction as that by the real state. We develop two implementations of the above idea for the discrete and continuous action spaces respectively. We conduct experiments on Atari 100k and DeepMind Control Suite benchmarks to validate their effectiveness for improving sample efficiency. It has been demonstrated that our methods achieve new state-of-the-art performance for search-free RL algorithms.
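To make the value-consistency idea in this abstract concrete, here is a minimal, illustrative PyTorch-style sketch of such a loss. All module and variable names are hypothetical; this is not the authors' released implementation.

```python
# Minimal sketch of a value-consistent representation loss (assumed PyTorch API,
# hypothetical module names; not the authors' exact implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VCRSketch(nn.Module):
    def __init__(self, obs_dim, action_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU())
        # Transition model predicts the "imagined" next latent state from state + action.
        self.transition = nn.Sequential(nn.Linear(latent_dim + action_dim, latent_dim), nn.ReLU())
        self.q_head = nn.Linear(latent_dim, action_dim)  # shared Q-value head

    def value_consistency_loss(self, obs, action_onehot, next_obs):
        z = self.encoder(obs)
        z_imagined = self.transition(torch.cat([z, action_onehot], dim=-1))
        z_real = self.encoder(next_obs)
        # Compare action-value distributions rather than the latent states themselves.
        q_imagined = F.log_softmax(self.q_head(z_imagined), dim=-1)
        q_real = F.softmax(self.q_head(z_real), dim=-1).detach()
        return F.kl_div(q_imagined, q_real, reduction="batchmean")

# Usage: add this auxiliary loss to the ordinary RL objective.
model = VCRSketch(obs_dim=8, action_dim=4)
obs, next_obs = torch.randn(32, 8), torch.randn(32, 8)
action = F.one_hot(torch.randint(0, 4, (32,)), num_classes=4).float()
loss = model.value_consistency_loss(obs, action, next_obs)
loss.backward()
```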
3

de Bruin, Tim, Jens Kober, Karl Tuyls, and Robert Babuska. "Integrating State Representation Learning Into Deep Reinforcement Learning." IEEE Robotics and Automation Letters 3, no. 3 (July 2018): 1394–401. http://dx.doi.org/10.1109/lra.2018.2800101.
4

Chen, Haoqiang, Yadong Liu, Zongtan Zhou, and Ming Zhang. "A2C: Attention-Augmented Contrastive Learning for State Representation Extraction." Applied Sciences 10, no. 17 (August 26, 2020): 5902. http://dx.doi.org/10.3390/app10175902.

Abstract:
Reinforcement learning (RL) faces a series of challenges, including learning efficiency and generalization. The state representation used to train RL is one of the important factors causing these challenges. In this paper, we explore providing a more efficient state representation for RL. Contrastive learning is used as the representation extraction method in our work. We propose an attention mechanism implementation and extend an existing contrastive learning method by embedding the attention mechanism. Finally an attention-augmented contrastive learning method called A2C is obtained. As a result, using the state representation from A2C, the robot achieves better learning efficiency and generalization than those using state-of-the-art representations. Moreover, our attention mechanism is proven to be able to calculate the correlation of arbitrary distance among pixels, which is conducive to capturing more accurate obstacle information. What is more, we remove the attention mechanism from A2C. It is shown that the rewards available for the attention-removed A2C are reduced by more than 70%, which indicates the important role of the attention mechanism.
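The attention mechanism described here, which can relate pixels at arbitrary distances, is reminiscent of a non-local self-attention block over spatial feature maps. The following is a minimal illustrative sketch of such a block that could augment a contrastive encoder; it assumes PyTorch and uses hypothetical names, and is not the paper's A2C code.

```python
# Sketch of a non-local self-attention block over image features, of the kind that
# could augment a contrastive encoder (hypothetical; not the paper's exact A2C code).
import torch
import torch.nn as nn

class PixelSelfAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                              # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/2)
        k = self.key(x).flatten(2)                     # (B, C/2, HW)
        v = self.value(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        # Every pixel attends to every other pixel, regardless of distance.
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)  # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                 # residual connection

feats = torch.randn(4, 32, 21, 21)
augmented = PixelSelfAttention(32)(feats)              # same shape, attention-augmented
```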
5

Ong, Sylvie, Yuri Grinberg, and Joelle Pineau. "Mixed Observability Predictive State Representations." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 746–52. http://dx.doi.org/10.1609/aaai.v27i1.8680.

Abstract:
Learning accurate models of agent behaviours is crucial for the purpose of controlling systems where the agents' and environment's dynamics are unknown. This is a challenging problem, but structural assumptions can be leveraged to tackle it effectively. In particular, many systems exhibit mixed observability, when observations of some system components are essentially perfect and noiseless, while observations of other components are imperfect, aliased or noisy. In this paper we present a new model learning framework, the mixed observability predictive state representation (MO-PSR), which extends the previously known predictive state representations to the case of mixed observability systems. We present a learning algorithm that is scalable to large amounts of data and to large mixed observability domains, and show theoretical analysis of the learning consistency and computational complexity. Empirical results demonstrate that our algorithm is capable of learning accurate models, at a larger scale than with the generic predictive state representation, by leveraging the mixed observability properties.
6

Maier, Marc, Brian Taylor, Huseyin Oktay, and David Jensen. "Learning Causal Models of Relational Domains." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 531–38. http://dx.doi.org/10.1609/aaai.v24i1.7695.

Abstract:
Methods for discovering causal knowledge from observational data have been a persistent topic of AI research for several decades. Essentially all of this work focuses on knowledge representations for propositional domains. In this paper, we present several key algorithmic and theoretical innovations that extend causal discovery to relational domains. We provide strong evidence that effective learning of causal models is enhanced by relational representations. We present an algorithm, relational PC, that learns causal dependencies in a state-of-the-art relational representation, and we identify the key representational and algorithmic innovations that make the algorithm possible. Finally, we prove the algorithm's theoretical correctness and demonstrate its effectiveness on synthetic and real data sets.
7

Lesort, Timothée, Natalia Díaz-Rodríguez, Jean-François Goudou, and David Filliat. "State representation learning for control: An overview." Neural Networks 108 (December 2018): 379–92. http://dx.doi.org/10.1016/j.neunet.2018.07.006.
8

Chornozhuk, S. "The New Geometric “State-Action” Space Representation for Q-Learning Algorithm for Protein Structure Folding Problem." Cybernetics and Computer Technologies, no. 3 (October 27, 2020): 59–73. http://dx.doi.org/10.34229/2707-451x.20.3.6.

Abstract:
Introduction. The spatial protein structure folding is an important and actual problem in computational biology. Considering the mathematical model of the task, it can be easily concluded that finding an optimal protein conformation in a three dimensional grid is a NP-hard problem. Therefore some reinforcement learning techniques such as Q-learning approach can be used to solve the problem. The article proposes a new geometric “state-action” space representation which significantly differs from all alternative representations used for this problem. The purpose of the article is to analyze existing approaches of different states and actions spaces representations for Q-learning algorithm for protein structure folding problem, reveal their advantages and disadvantages and propose the new geometric “state-space” representation. Afterwards the goal is to compare existing and the proposed approaches, make conclusions with also describing possible future steps of further research. Result. The work of the proposed algorithm is compared with others on the basis of 10 known chains with a length of 48 first proposed in [16]. For each of the chains the Q-learning algorithm with the proposed “state-space” representation outperformed the same Q-learning algorithm with alternative existing “state-space” representations both in terms of average and minimal energy values of resulted conformations. Moreover, a plenty of existing representations are used for a 2D protein structure predictions. However, during the experiments both existing and proposed representations were slightly changed or developed to solve the problem in 3D, which is more computationally demanding task. Conclusion. The quality of the Q-learning algorithm with the proposed geometric “state-action” space representation has been experimentally confirmed. Consequently, it’s proved that the further research is promising. Moreover, several steps of possible future research such as combining the proposed approach with deep learning techniques has been already suggested. Keywords: Spatial protein structure, combinatorial optimization, relative coding, machine learning, Q-learning, Bellman equation, state space, action space, basis in 3D space.
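For readers unfamiliar with the update rule the abstract refers to, a generic tabular Q-learning step (the Bellman backup) is sketched below. The geometric protein-folding state and action encodings that the paper actually contributes are problem-specific and omitted; the action set here is a placeholder.

```python
# Generic tabular Q-learning update (Bellman backup); the geometric protein-folding
# state/action encoding discussed in the abstract is problem-specific and omitted.
import random
from collections import defaultdict

Q = defaultdict(float)            # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.95, 0.1
actions = ["left", "right", "up", "down", "forward", "backward"]  # placeholder action set

def choose_action(state):
    if random.random() < epsilon:                       # epsilon-greedy exploration
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```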
9

Zhang, Yujia, Lai-Man Po, Xuyuan Xu, Mengyang Liu, Yexin Wang, Weifeng Ou, Yuzhi Zhao, and Wing-Yin Yu. "Contrastive Spatio-Temporal Pretext Learning for Self-Supervised Video Representation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3380–89. http://dx.doi.org/10.1609/aaai.v36i3.20248.

Abstract:
Spatio-temporal representation learning is critical for video self-supervised representation. Recent approaches mainly use contrastive learning and pretext tasks. However, these approaches learn representation by discriminating sampled instances via feature similarity in the latent space while ignoring the intermediate state of the learned representations, which limits the overall performance. In this work, taking into account the degree of similarity of sampled instances as the intermediate state, we propose a novel pretext task - spatio-temporal overlap rate (STOR) prediction. It stems from the observation that humans are capable of discriminating the overlap rates of videos in space and time. This task encourages the model to discriminate the STOR of two generated samples to learn the representations. Moreover, we employ a joint optimization combining pretext tasks with contrastive learning to further enhance the spatio-temporal representation learning. We also study the mutual influence of each component in the proposed scheme. Extensive experiments demonstrate that our proposed STOR task can favor both contrastive learning and pretext tasks and the joint optimization scheme can significantly improve the spatio-temporal representation in video understanding. The code is available at https://github.com/Katou2/CSTP.
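One plausible way to compute a spatio-temporal overlap rate (STOR) target for two sampled video crops is sketched below; the exact definition used in the paper may differ, and all names and the formula itself are illustrative only.

```python
# Illustrative sketch of a spatio-temporal overlap rate (STOR) pretext target for two
# video crops, each described by (t_start, t_end, x0, y0, x1, y1); hypothetical code.
def interval_overlap(a0, a1, b0, b1):
    return max(0.0, min(a1, b1) - max(a0, b0))

def stor(crop_a, crop_b):
    ta0, ta1, xa0, ya0, xa1, ya1 = crop_a
    tb0, tb1, xb0, yb0, xb1, yb1 = crop_b
    t_overlap = interval_overlap(ta0, ta1, tb0, tb1) / max(ta1 - ta0, tb1 - tb0)
    s_overlap = (interval_overlap(xa0, xa1, xb0, xb1) *
                 interval_overlap(ya0, ya1, yb0, yb1)) / max(
                     (xa1 - xa0) * (ya1 - ya0), (xb1 - xb0) * (yb1 - yb0))
    return t_overlap * s_overlap      # the model is trained to predict this scalar

# Example: two crops of the same video with partial overlap in space and time.
print(stor((0, 16, 0, 0, 112, 112), (8, 24, 56, 56, 168, 168)))
```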
10

Li, Dongfen, Lichao Meng, Jingjing Li, Ke Lu, and Yang Yang. "Domain adaptive state representation alignment for reinforcement learning." Information Sciences 609 (September 2022): 1353–68. http://dx.doi.org/10.1016/j.ins.2022.07.156.

Theses on the topic "State representation learning"

1

Nuzzo, Francesco. "Unsupervised state representation pretraining in Reinforcement Learning applied to Atari games." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288189.

Abstract:
State representation learning aims to extract useful features from the observations received by a Reinforcement Learning agent interacting with an environment. These features allow the agent to take advantage of the low-dimensional and informative representation to improve the efficiency in solving tasks. In this work, we study unsupervised state representation learning in Atari games. We use a RNN architecture for learning features that depend on sequences of observations, and pretrain a single-frame encoder architecture with different methods on randomly collected frames. Finally, we empirically evaluate how pretrained state representations perform compared with a randomly initialized architecture. For this purpose, we let a RL agent train on 22 different Atari 2600 games initializing the encoder either randomly or with one of the following unsupervised methods: VAE, CPC and ST-DIM. Promising results are obtained in most games when ST-DIM is chosen as pretraining method, while VAE often performs worse than a random initialization.
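The pretrain-then-transfer workflow evaluated in this thesis can be illustrated with a minimal sketch. PyTorch is assumed, and a plain autoencoder stands in for the VAE, CPC, and ST-DIM objectives actually compared; file names and dimensions are hypothetical.

```python
# Minimal sketch of "pretrain the frame encoder, then hand it to the RL agent"
# (assumed PyTorch; a simple autoencoder stands in for VAE / CPC / ST-DIM).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(84 * 84, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 84 * 84))

frames = torch.rand(512, 1, 84, 84)          # randomly collected frames
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for epoch in range(5):                       # unsupervised pretraining phase
    recon = decoder(encoder(frames))
    loss = nn.functional.mse_loss(recon, frames.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()

torch.save(encoder.state_dict(), "pretrained_encoder.pt")
# Later, the RL agent's encoder is initialized from these weights instead of randomly:
agent_encoder = nn.Sequential(nn.Flatten(), nn.Linear(84 * 84, 256), nn.ReLU(), nn.Linear(256, 64))
agent_encoder.load_state_dict(torch.load("pretrained_encoder.pt"))
```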
2

Sadeghi, Mohsen. "Representation and interaction of sensorimotor learning processes." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278611.

Abstract:
Human sensorimotor control is remarkably adept at utilising contextual information to learn and recall systematic sensorimotor transformations. Here, we investigate the motor representations that underlie such learning, and examine how motor memories acquired based on different contextual information interact. Using a novel three-dimensional robotic manipulandum, the 3BOT, we examined the spatial transfer of learning across various movement directions in a 3D environment, while human subjects performed reaching movements under velocity-dependent force field. The obtained pattern of generalisation suggested that the representation of dynamic learning was most likely defined in a target-based, rather than an extrinsic, coordinate system. We further examined how motor memories interact when subjects adapt to force fields applied in orthogonal dimensions. We found that, unlike opposing fields, learning two spatially orthogonal force fields led to the formation of separate motor memories, which neither interfered with nor facilitated each other. Moreover, we demonstrated a novel, more general aspect of the spontaneous recovery phenomenon using a two-dimensional force field task: when subjects learned two orthogonal force fields consecutively, in the following phase of clamped error feedback, the expression of adaptation spontaneously rotated from the direction of the second force field, towards the direction of the first force field. Finally, we examined the interaction of sensorimotor memories formed based on separate contextual information. Subjects performed reciprocating reaching and object manipulation tasks under two alternating contexts (movement directions), while we manipulated the dynamics of the task in each context separately. The results suggested that separate motor memories were formed for the dynamics of the task in different contexts, and that these motor memories interacted by sharing error signals to enhance learning. Importantly, the extent of interaction was not fixed between the context-dependent motor memories, but adaptively changed according to the task dynamics to potentially improve overall performance. Together, our experimental and theoretical results add to the understanding of mechanisms that underlie sensorimotor learning, and the way these mechanisms interact under various tasks and different dynamics.
3

Merckling, Astrid. "Unsupervised pretraining of state representations in a rewardless environment." Electronic thesis or dissertation, Sorbonne Université, 2021. http://www.theses.fr/2021SORUS141.

Abstract:
This thesis seeks to extend the capabilities of state representation learning (SRL) to help scale deep reinforcement learning (DRL) algorithms to continuous control tasks with high-dimensional sensory observations (such as images). SRL allows to improve the performance of DRL by providing it with better inputs than the input embeddings learned from scratch with end-to-end strategies. Specifically, this thesis addresses the problem of performing state estimation in the manner of deep unsupervised pretraining of state representations without reward. These representations must verify certain properties to allow for the correct application of bootstrapping and other decision making mechanisms common to supervised learning, such as being low-dimensional and guaranteeing the local consistency and topology (or connectivity) of the environment, which we will seek to achieve through the models pretrained with the two SRL algorithms proposed in this thesis
4

Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.

Abstract:
Network reconstruction is the process of recovering a unique structured representation of some dynamic system using input-output data and some additional knowledge about the structure of the system. Many network reconstruction algorithms have been proposed in recent years, most dealing with the reconstruction of strictly proper networks (i.e., networks that require delays in all dynamics between measured variables). However, no reconstruction technique presently exists capable of recovering both the structure and dynamics of networks where links are proper (delays in dynamics are not required) and not necessarily strictly proper.The ultimate objective of this dissertation is to develop algorithms capable of reconstructing proper networks, and this objective will be addressed in three parts. The first part lays the foundation for the theory of mathematical representations of proper networks, including an exposition on when such networks are well-posed (i.e., physically realizable). The second part studies the notions of abstractions of a network, which are other networks that preserve certain properties of the original network but contain less structural information. As such, abstractions require less a priori information to reconstruct from data than the original network, which allows previously-unsolvable problems to become solvable. The third part addresses our original objective and presents reconstruction algorithms to recover proper networks in both the time domain and in the frequency domain.
5

Boots, Byron. "Spectral Approaches to Learning Predictive Representations." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/131.

Abstract:
A central problem in artificial intelligence is to choose actions to maximize reward in a partially observable, uncertain environment. To do so, we must obtain an accurate environment model, and then plan to maximize reward. However, for complex domains, specifying a model by hand can be a time consuming process. This motivates an alternative approach: learning a model directly from observations. Unfortunately, learning algorithms often recover a model that is too inaccurate to support planning or too large and complex for planning to succeed; or, they require excessive prior domain knowledge or fail to provide guarantees such as statistical consistency. To address this gap, we propose spectral subspace identification algorithms which provably learn compact, accurate, predictive models of partially observable dynamical systems directly from sequences of action-observation pairs. Our research agenda includes several variations of this general approach: spectral methods for classical models like Kalman filters and hidden Markov models, batch algorithms and online algorithms, and kernel-based algorithms for learning models in high- and infinite-dimensional feature spaces. All of these approaches share a common framework: the model’s belief space is represented as predictions of observable quantities and spectral algorithms are applied to learn the model parameters. Unlike the popular EM algorithm, spectral learning algorithms are statistically consistent, computationally efficient, and easy to implement using established matrixalgebra techniques. We evaluate our learning algorithms on a series of prediction and planning tasks involving simulated data and real robotic systems.
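The core spectral idea, forming an empirical history-future correlation matrix and keeping its leading singular vectors as a compact predictive subspace, can be sketched as follows. This is illustrative NumPy only; real spectral learners for PSRs, Kalman filters, and HMMs involve considerably more machinery, and all names are hypothetical.

```python
# Very rough NumPy sketch of the spectral idea: estimate a history-future correlation
# matrix from sequences and keep its top singular vectors as a compact predictive
# subspace (illustrative only; real spectral PSR/Kalman/HMM learners are more involved).
import numpy as np

def spectral_subspace(observations, history_len=3, future_len=3, rank=5):
    T = len(observations) - history_len - future_len
    H = np.stack([observations[t:t + history_len].ravel() for t in range(T)])
    F = np.stack([observations[t + history_len:t + history_len + future_len].ravel()
                  for t in range(T)])
    cov = F.T @ H / T                      # empirical future-history covariance
    U, s, Vt = np.linalg.svd(cov, full_matrices=False)
    return U[:, :rank]                     # basis for a predictive state representation

obs = np.random.randn(1000, 4)             # a sequence of 4-dimensional observations
basis = spectral_subspace(obs)
print(basis.shape)                          # (future_len * obs_dim, rank) = (12, 5)
```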
6

Gabriel, Florence. "Mental representations of fractions: development, stable state, learning difficulties and intervention." Doctoral thesis, Université Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209933.

Abstract:
Fractions are very hard to learn. As the joke goes, “Three out of two people have trouble with fractions”. Yet the invention of a notation for fractions is very ancient, dating back to Babylonians and Egyptians. Moreover, it is thought that ratio representation is innate. And obviously, fractions are part of our everyday life. We read them in recipes, we need them to estimate distances on maps or rebates in shops. In addition, fractions play a key role in science and mathematics, in probabilities, proportions and algebraic reasoning. Then why is it so hard for pupils to understand and use them? What is so special about fractions? As in other areas of numerical cognition, a fast-developing field in cognitive science, we tackled this paradox through a multi-pronged approach, investigating both adults and children.

Based on some recent research questions and intense debates in the literature, a first behavioural study examined the mental representations of the magnitude of fractions in educated adults. Behavioural observations from adults can indeed provide a first clue to explain the paradox raised by fractions. Contrary perhaps to most educated adults’ intuition, finding the value of a given fraction is not an easy operation. Fractions are complex symbols, and there is an on-going debate in the literature about how their magnitude (i.e. value) is processed. In a first study, we asked adult volunteers to decide as quickly as possible whether two fractions represent the same magnitude or not. Equivalent fractions (e.g. 1/4 and 2/8) were identified as representing the same number only about half of the time. In another experiment, adults were also asked to decide which of two fractions was larger. This paradigm offered different results, suggesting that participants relied on both the global magnitude of the fraction and the magnitude of the components. Our results showed that fraction processing depends on experimental conditions. Adults appear to use the global magnitude only in restricted circumstances, mostly with easy and familiar fractions.

In another study, we investigated the development of the mental representations of the magnitude of fractions. Previous studies in adults showed that fraction processing can be either based on the magnitude of the numerators and denominators or based on the global magnitude of fractions and the magnitude of their components. The type of processing depends on experimental conditions. In this experiment, 5th, 6th, 7th-graders, and adults were tested with two paradigms. First, they performed a same/different task. Second, they carried out a numerical comparison task in which they had to decide which of two fractions was larger. Results showed that 5th-graders do not rely on the representations of the global magnitude of fractions in the Numerical Comparison task, but those representations develop from grade 6 until grade 7. In the Same/Different task, participants only relied on componential strategies. From grade 6 on, pupils apply the same heuristics as adults in fraction magnitude comparison tasks. Moreover, we have shown that correlations between global distance effect and children’s general fraction achievement were significant.

Fractions are well known to represent a stumbling block for primary school children. In a third study, we tried to identify the difficulties encountered by primary school pupils. We observed that most 4th and 5th-graders had only a very limited notion of the meaning of fractions, basically referring to pieces of cakes or pizzas. The fraction as a notation for numbers appeared particularly hard to grasp.

Building upon these results, we designed an intervention programme. The intervention “From Pies to Numbers” aimed at improving children’s understanding of fractions as numbers. The intervention was based on various games in which children had to estimate, compare, and combine fractions represented either symbolically or as figures. 20 game sessions distributed over 3 months led to 15-20% improvement in tests assessing children's capacity to estimate and compare fractions; conversely, children in the control group who received traditional lessons improved more in procedural skills such as simplification of fractions and arithmetic operations with fractions. Thus, a short classroom intervention inducing children to play with fractions improved their conceptual understanding.

The results are discussed in light of recent research on the mental representation of the magnitude of fractions and educational theories. The importance of multidisciplinary approaches in psychology and education was also discussed.

In sum, by combining behavioural experiments in adults and children, and intervention studies, we hoped to have improved the understanding how the brain processes mathematical symbols, while helping teachers get a better grasp of pupils’ difficulties and develop classroom activities that suit the needs of learners.


Doctorate in Psychological and Educational Sciences

7

Shi, Fangzhou. "Towards Molecule Generation with Heterogeneous States via Reinforcement Learning." Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/22335.

Abstract:
De novo molecular design and generation are frequently prescribed in the field of chemistry and biology, for it plays a critical role in maintaining the prosperity of the chemical industry and benefiting the drug discovery. Nowadays, many significant problems in this field are based on the philosophy of designing molecular structures towards specific desired properties. This research is very meaningful in both medical and AI fields, which can benefits novel drug discovery for some diseases. However, It remains a challenging task due to the large size of chemical space. In recent years, reinforcement learning-based methods leverage graphs to represent molecules and generate molecules as a decision making process. However, this vanilla graph representation may neglect the intrinsic context information with molecules and limits the generation performance accordingly. In this paper, we propose to augment the original graph states with the SMILES context vectors. As a result, SMILES representations are easily processed by a simple language model such that the general semantic features of a molecule can be extracted; and the graph representations perform better in handling the topology relationship of each atom. Moreover, we propose a framework that combines supervised learning and reinforcement learning algorithm to take a solid consideration of these two heterogeneous state representations of a molecule, which can fuse the information from both of them and extract more comprehensive features so that more sophisticated decisions can be made by the policy network. Our model also introduces two attention mechanisms, i.e., action-attention, and graph-attention, to further improve the performance. We conduct our experiments on a practical dataset, ZINC, and the experiment results demonstrate that our framework can outperform other baselines in the learning performance of molecule generation and chemical property optimization.
8

Ford, Shelton J. "The effect of graphing calculators and a three-core representation curriculum on college students' learning of exponential and logarithmic functions." 2008. http://www.lib.ncsu.edu/theses/available/etd-11072008-135009/unrestricted/etd.pdf.
9

Allen, Heather. "Experiencing literature – learning from experience: the application of neuroscience to literary analysis by example of representations of German colonialism in Uwe Timm's Morenga." 2011. http://hdl.handle.net/1993/4862.

Abstract:
Is it probable that a reader can have an empathetic and learning experience of an historical event facilitated through text? Research in neuroscience indicates that the form of a text can trigger mirror neurons, enhancing empathy with the events and characters portrayed and enabling introspective learning through stimulation of the default state network in a reading brain. Narrative elements in historical and fictional literature are analyzed for their potential in facilitating the stimulation of these states. The historical fiction novel Morenga by Uwe Timm is analyzed in order to deduce what a reader neurologically experiences in relation to the text and the historical event portrayed in the novel during the reading process. The probability of the reader experiencing empathy and learning through text so that their perspectives on inter-textual and extra-textual similar events are affected is then developed.
10

Stasko, Carly. "A Pedagogy of Holistic Media Literacy: Reflections on Culture Jamming as Transformative Learning and Healing." Thesis, 2009. http://hdl.handle.net/1807/18109.

Abstract:
This qualitative study uses narrative inquiry (Connelly & Clandinin, 1988, 1990, 2001) and self-study to investigate ways to further understand and facilitate the integration of holistic philosophies of education with media literacy pedagogies. As founder and director of the Youth Media Literacy Project and a self-titled Imagitator (one who agitates imagination), I have spent over 10 years teaching media literacy in various high schools, universities, and community centres across North America. This study will focus on my own personal practical knowledge (Connelly & Clandinin, 1982) as a culture jammer, educator and cancer survivor to illustrate my original vision of a ‘holistic media literacy pedagogy’. This research reflects on the emergence and impact of holistic media literacy in my personal and professional life and also draws from relevant interdisciplinary literature to challenge and synthesize current insights and theories of media literacy, holistic education and culture jamming.

Books on the topic "State representation learning"

1

Bositis, David A., and Joint Center for Political and Economic Studies (U.S.), eds. Redistricting and minority representation: Learning from the past, preparing for the future. Washington, D.C.: Joint Center for Political and Economic Studies, 1998.
2

McBride, Kecia Driver, ed. Visual media and the humanities: A pedagogy of representation. Knoxville: University of Tennessee Press, 2004.
3

Burge, Tyler. Perception: First Form of Mind. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198871002.001.0001.

Abstract:
Perception is the first form of representational mind to emerge in evolution. Three types of form are discussed: formal representational structure of perceptual states, formation characteristics in computations of perceptual states, and the form of the visual and visuomotor systems. The book distinguishes perception from non-perceptual sensing. The formal representational structure of perceptual states is developed via a systematic semantics for them—an account of what it is for them to be accurate or inaccurate. This semantics is elaborated by explaining how the representational form is embedded in an iconic format. These structures are then situated in what is known about the processing of perceptual representations, with emphasis on formation of perceptual categorizations. Features of processing that provide insight into the scope of the perceptual (paradigmatically visual) system are highlighted. Relations between these processes and associated perceptual-level capacities—conation, attention, memory, anticipation, affect, learning, imagining—are delineated. Roughly, a perceptual-level capacity is one that borrows its form and content from perception and involves processing that is no more complex or sophisticated than processing that occurs in the classical visual hierarchy. Relations between perception and these associated perceptual-level capacities are argued to occur within the perceptual and perceptual-motor systems. An account of what it is to occur within these systems is elaborated. An upshot is refinement of the distinction between perceptual-level capacities, on one hand, and thought and conception, on the other. Intermediate territory between perception-level representation and propositional thought is explored. The book is resolutely a work in philosophy of science. It attempts to understand perception by focusing on its form, function, and underlying capacities, as indicated in the sciences of perception, rather than by relying on introspection or ordinary talk about perception.
4

Boden, Margaret A. 2. General intelligence as the Holy Grail. Oxford University Press, 2018. http://dx.doi.org/10.1093/actrade/9780199602919.003.0002.

Abstract:
A host of state-of-the-art AI applications exist, designed for countless specific tasks and used in almost every area of life, by laymen and professionals alike. Many outperform even the most expert humans. In that sense, progress has been spectacular. But the AI pioneers were also hoping for systems with general intelligence. ‘General intelligence as the Holy Grail’ explains why artificial general intelligence is still highly elusive despite recent increases in computer power. It considers the general AI strategies in recent research—heuristics, planning, mathematical simplification, and different forms of knowledge representation—and discusses the concepts of the frame problem, agents and distributed cognition, machine learning, and generalist systems.
5

Alden, John, Alexander H. Cohen, and Jonathan J. Ring. Gaming the System: Nine Games to Teach American Government Through Active Learning. Taylor & Francis Group, 2019.
6

Alden, John, Alexander H. Cohen, and Jonathan J. Ring. Gaming the System: Nine Games to Teach American Government Through Active Learning. Taylor & Francis Group, 2019.
7

Ginsburg, Herbert P., Rachael Labrecque, Kara Carpenter, and Dana Pagar. New Possibilities for Early Mathematics Education. Edited by Roi Cohen Kadosh and Ann Dowker. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199642342.013.029.

Abstract:
Mathematics instruction for young children should begin early, elaborate on and mathematize children’s everyday mathematics, promote a meaningful integration and synthesis of mathematics knowledge, and advance the development of conceptual understanding, procedural fluency, and use of effective strategies. The affordances provided by computer programs can be used to further these goals by involving children in activities that are not possible with traditional methods. Drawing on research and theory concerning the development of mathematical cognition, learning, and teaching, high quality mathematics software can provide a productive learning environment with several components: (1) useful instructions and demonstrations, scaffolds, and feedback; (2) mathematical tools (like a device that groups objects into tens); and (3) virtual objects, manipulatives and mathematical representations. We propose a five-stage iterative research and development process consisting of (1) coherent design; (2) formative research; (3) revision; (4) learning studies; and (5) summative research. A case study ofMathemAntics, software for children ranging from age 3 to grade 3, illustrates the research and development process. The chapter concludes with implications for early childhood educators, software designers, and researchers.
8

Rueschemeyer, Shirley-Ann, and M. Gareth Gaskell, eds. The Oxford Handbook of Psycholinguistics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198786825.001.0001.

Abstract:
This handbook reviews the current state of the art in the field of psycholinguistics. Part I deals with language comprehension at the sublexical, lexical, and sentence and discourse levels. It explores concepts of speech representation and the search for universal speech segmentation mechanisms against a background of linguistic diversity and compares first language with second language segmentation. It also discusses visual word recognition, lexico-semantics, the different forms of lexical ambiguity, sentence comprehension, text comprehension, and language in deaf populations. Part II focuses on language production, with chapters covering topics such as word production and related processes based on evidence from aphasia, the major debates surrounding grammatical encoding. Part III considers various aspects of interaction and communication, including the role of gesture in language processing, approaches to the study of perspective-taking, and the interrelationships between language comprehension, emotion, and sociality. Part IV is concerned with language development and evolution, focusing on topics ranging from the development of prosodic phonology, the neurobiology of artificial grammar learning, and developmental dyslexia. The book concludes with Part V, which looks at methodological advances in psycholinguistic research, such as the use of intracranial electrophysiology in the area of language processing.
9

Papafragou, Anna, John C. Trueswell, and Lila R. Gleitman, eds. The Oxford Handbook of the Mental Lexicon. Oxford University Press, 2022. http://dx.doi.org/10.1093/oxfordhb/9780198845003.001.0001.

Abstract:
The present handbook is a state-of-the-art compilation of papers from leading scholars on the mental lexicon—the representation of language in the mind/brain at the level of individual words and meaningful sub-word units. In recent years, the study of words as mental objects has grown rapidly across several fields including linguistics, psychology, philosophy, neuroscience, education, and computational cognitive science. This comprehensive collection spans multiple disciplines, topics, theories, and methods, to highlight important advances in the study of the mental lexicon, identify areas of debate, and inspire innovation in the field from present and future generations of scholars. The book is divided into three parts. Part I presents modern linguistic and cognitive theories of how the mind/brain represents words at the phonological, morphological, syntactic, semantic, and pragmatic levels. This part also discusses broad architectural issues pertaining to the organization of the lexicon, the relation between words and concepts, and the role of compositionality. Part II discusses how children learn the form and meaning of words in their native language drawing from the key domains of phonology, morphology, syntax, semantics, and pragmatics. Multiple approaches to lexical learning are introduced to explain how learner- and environment-driven factors contribute to both the stability and the variability of lexical learning across both individual learners and communities. Part III examines how the mental lexicon contributes to language use during listening, speaking, and conversation, and includes perspectives from bilingualism, sign languages, and disorders of lexical access and production.
10

Caselli, Tommaso, Eduard Hovy, Martha Palmer, and Piek Vossen, eds. Computational Analysis of Storylines. Cambridge University Press, 2021. http://dx.doi.org/10.1017/9781108854221.

Abstract:
Event structures are central in Linguistics and Artificial Intelligence research: people can easily refer to changes in the world, identify their participants, distinguish relevant information, and have expectations of what can happen next. Part of this process is based on mechanisms similar to narratives, which are at the heart of information sharing. But it remains difficult to automatically detect events or automatically construct stories from such event representations. This book explores how to handle today's massive news streams and provides multidimensional, multimodal, and distributed approaches, like automated deep learning, to capture events and narrative structures involved in a 'story'. This overview of the current state-of-the-art on event extraction, temporal and casual relations, and storyline extraction aims to establish a new multidisciplinary research community with a common terminology and research agenda. Graduate students and researchers in natural language processing, computational linguistics, and media studies will benefit from this book.

Book chapters on the topic "State representation learning"

1

Merckling, Astrid, Alexandre Coninx, Loic Cressot, Stephane Doncieux, and Nicolas Perrin. "State Representation Learning from Demonstration." In Machine Learning, Optimization, and Data Science, 304–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_26.
2

Steccanella, Lorenzo, and Anders Jonsson. "State Representation Learning for Goal-Conditioned Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases, 84–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_6.
3

Schestakov, Stefan, Paul Heinemeyer, and Elena Demidova. "Road Network Representation Learning with Vehicle Trajectories." In Advances in Knowledge Discovery and Data Mining, 57–69. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33383-5_5.

Abstract:
Spatio-temporal traffic patterns reflecting the mobility behavior of road users are essential for learning effective general-purpose road representations. Such patterns are largely neglected in state-of-the-art road representation learning, mainly focusing on modeling road topology and static road features. Incorporating traffic patterns into road network representation learning is particularly challenging due to the complex relationship between road network structure and mobility behavior of road users. In this paper, we present TrajRNE – a novel trajectory-based road embedding model incorporating vehicle trajectory information into road network representation learning. Our experiments on two real-world datasets demonstrate that TrajRNE outperforms state-of-the-art road representation learning baselines on various downstream tasks.
4

Sychev, Oleg. "Visualizing Program State as a Clustered Graph for Learning Programming." In Diagrammatic Representation and Inference, 404–7. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_41.
5

Hu, Dapeng, Xuesong Jiang, Xiumei Wei, and Jian Wang. "State Representation Learning for Minimax Deep Deterministic Policy Gradient." In Knowledge Science, Engineering and Management, 481–87. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29551-6_43.
6

Meden, Blaž, Abraham Prieto, Peter Peer, and Francisco Bellas. "First Steps Towards State Representation Learning for Cognitive Robotics." In Lecture Notes in Computer Science, 499–510. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61705-9_41.
7

Meng, Li, Morten Goodwin, Anis Yazidi, and Paal Engelstad. "Unsupervised State Representation Learning in Partially Observable Atari Games." In Computer Analysis of Images and Patterns, 212–22. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44240-7_21.
8

Servan-Schreiber, David, Axel Cleeremans, and James L. McClelland. "Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks." In Connectionist Approaches to Language Learning, 57–89. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-4008-3_4.
9

Ding, Ning, Weize Chen, Zhengyan Zhang, Shengding Hu, Ganqu Cui, Yuan Yao, Yujia Qin et al. "Ten Key Problems of Pre-trained Models: An Outlook of Representation Learning." In Representation Learning for Natural Language Processing, 491–521. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_14.

Abstract:
The aforementioned representation learning methods have shown their effectiveness in various NLP scenarios and tasks. Large-scale pre-trained language models (i.e., big models) are the state of the art of representation learning for NLP and beyond. With the rapid growth of data scale and the development of computation devices, big models bring us to a new era of AI and NLP. Standing on the new giants of big models, there are many new challenges and opportunities for representation learning. In the last chapter, we will provide a 2023 outlook for the future directions of representation learning techniques for NLP by summarizing ten key open problems for pre-trained models.
10

Li, Zhipeng, and Xuesong Jiang. "State Representation Learning for Multi-agent Deep Deterministic Policy Gradient." In Proceedings of the Fifth Euro-China Conference on Intelligent Data Analysis and Applications, 667–75. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03766-6_75.

Conference papers on the topic "State representation learning"

1

Zhao, Jian, Wengang Zhou, Tianyu Zhao, Yun Zhou, and Houqiang Li. "State Representation Learning For Effective Deep Reinforcement Learning." In 2020 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2020. http://dx.doi.org/10.1109/icme46284.2020.9102924.
2

Nozawa, Kento, and Issei Sato. "Evaluation Methods for Representation Learning: A Survey." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/776.

Abstract:
Representation learning enables us to automatically extract generic feature representations from a dataset to solve another machine learning task. Recently, extracted feature representations by a representation learning algorithm and a simple predictor have exhibited state-of-the-art performance on several machine learning tasks. Despite its remarkable progress, there exist various ways to evaluate representation learning algorithms depending on the application because of the flexibility of representation learning. To understand the current applications of representation learning, we review evaluation methods of representation learning algorithms. On the basis of our evaluation survey, we also discuss the future direction of representation learning. The extended version, https://arxiv.org/abs/2204.08226, gives more detailed discussions and a survey on theoretical analyses.
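A common instance of the "learned representation plus simple predictor" evaluation mentioned here is the linear probe: freeze the representation and fit a simple classifier on top of it. Below is a small illustrative sketch using scikit-learn, where `pretrained_encoder` is a placeholder for any trained feature extractor.

```python
# Linear-probe evaluation sketch: freeze the learned representation and fit a simple
# predictor on top of it (scikit-learn; `pretrained_encoder` is a stand-in).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def pretrained_encoder(x):                     # placeholder for a learned feature extractor
    rng = np.random.default_rng(0)
    W = rng.normal(size=(x.shape[1], 32))
    return np.tanh(x @ W)

X_raw = np.random.randn(2000, 64)
y = (X_raw[:, 0] > 0).astype(int)              # toy downstream labels

Z = pretrained_encoder(X_raw)                  # frozen features
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("linear probe accuracy:", probe.score(Z_te, y_te))
```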
3

Zhu, Hanhua. "Generalized Representation Learning Methods for Deep Reinforcement Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/748.

Abstract:
Deep reinforcement learning (DRL) increases the successful applications of reinforcement learning (RL) techniques but also brings challenges such as low sample efficiency. In this work, I propose generalized representation learning methods to obtain compact state space suitable for RL from a raw observation state. I expect my new methods will increase sample efficiency of RL by understandable representations of state and therefore improve the performance of RL.
4

Bai, Yang, Min Cao, Daming Gao, Ziqiang Cao, Chen Chen, Zhenfeng Fan, Liqiang Nie, and Min Zhang. "RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/62.

Abstract:
Text-based person search aims to retrieve the specified person images given a textual description. The key to tackling such a challenging task is to learn powerful multi-modal representations. Towards this, we propose a Relation and Sensitivity aware representation learning method (RaSa), including two novel tasks: Relation-Aware learning (RA) and Sensitivity-Aware learning (SA). For one thing, existing methods cluster representations of all positive pairs without distinction and overlook the noise problem caused by the weak positive pairs where the text and the paired image have noise correspondences, thus leading to overfitting learning. RA offsets the overfitting risk by introducing a novel positive relation detection task (i.e., learning to distinguish strong and weak positive pairs). For another thing, learning invariant representation under data augmentation (i.e., being insensitive to some transformations) is a general practice for improving representation's robustness in existing methods. Beyond that, we encourage the representation to perceive the sensitive transformation by SA (i.e., learning to detect the replaced words), thus promoting the representation's robustness. Experiments demonstrate that RaSa outperforms existing state-of-the-art methods by 6.94%, 4.45% and 15.35% in terms of Rank@1 on CUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively. Code is available at: https://github.com/Flame-Chasers/RaSa.
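The sensitivity-aware component, learning to detect which words were replaced, resembles a token-level replaced-word detection objective. A toy sketch of that objective follows; all names are hypothetical and this is not the released RaSa code.

```python
# Toy sketch of a replaced-word detection objective (sensitivity-aware idea);
# hypothetical names, not the released RaSa implementation.
import torch
import torch.nn as nn

vocab_size, seq_len, batch = 1000, 16, 8
embed = nn.Embedding(vocab_size, 64)
detector = nn.Linear(64, 1)                       # predicts "was this token replaced?"

tokens = torch.randint(0, vocab_size, (batch, seq_len))
replace_mask = torch.rand(batch, seq_len) < 0.15  # corrupt roughly 15% of the words
random_tokens = torch.randint(0, vocab_size, (batch, seq_len))
corrupted = torch.where(replace_mask, random_tokens, tokens)

logits = detector(embed(corrupted)).squeeze(-1)   # (batch, seq_len)
loss = nn.functional.binary_cross_entropy_with_logits(logits, replace_mask.float())
loss.backward()                                   # pushes the representation to be sensitive
```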
5

Stork, Johannes A., Carl Henrik Ek, Yasemin Bekiroglu, and Danica Kragic. "Learning Predictive State Representation for in-hand manipulation." In 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015. http://dx.doi.org/10.1109/icra.2015.7139641.
6

Munk, Jelle, Jens Kober et Robert Babuska. « Learning state representation for deep actor-critic control ». Dans 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016. http://dx.doi.org/10.1109/cdc.2016.7798980.

Full text
7

Duarte, Valquiria Aparecida Rosa, and Rita Maria Silva Julia. "Improving the State Space Representation through Association Rules". In 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2016. http://dx.doi.org/10.1109/icmla.2016.0167.

Full text
8

Wang, Hai, Takeshi Onishi, Kevin Gimpel and David McAllester. "Emergent Predication Structure in Hidden State Vectors of Neural Readers". In Proceedings of the 2nd Workshop on Representation Learning for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-2604.

Full text
9

Bhatt, Shreyansh, Jinjin Zhao, Candace Thille, Dawn Zimmaro and Neelesh Gattani. "A Novel Approach for Knowledge State Representation and Prediction". In L@S '20: Seventh (2020) ACM Conference on Learning @ Scale. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3386527.3406745.

Full text
10

Zhao, Han, Xu Yang, Zhenru Wang, Erkun Yang and Cheng Deng. "Graph Debiased Contrastive Learning with Joint Representation Clustering". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/473.

Full text
Abstract:
By contrasting positive-negative counterparts, graph contrastive learning has become a prominent technique for unsupervised graph representation learning. However, existing methods fail to consider class information and introduce false-negative samples through random negative sampling, which degrades performance. To this end, we propose a graph debiased contrastive learning framework that jointly performs representation learning and clustering. Specifically, representations are optimized by aligning with clustered class information, and, simultaneously, the optimized representations promote clustering, leading to more powerful representations and better clustering results. More importantly, we randomly select negative samples from clusters that differ from the positive sample's cluster. In this way, the clustering results serve as supervisory signals that effectively reduce the number of false-negative samples. Extensive experiments on five datasets demonstrate that our method achieves new state-of-the-art results on graph clustering and classification tasks.
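The debiasing step described above, drawing negatives only from clusters other than the positive sample's cluster, can be sketched as masking same-cluster entries in an InfoNCE-style loss. The snippet below is a schematic illustration with toy shapes and assumed PyTorch tooling, not the paper's implementation.

import torch
import torch.nn.functional as F

def debiased_info_nce(z1, z2, cluster_ids, temperature=0.5):
    """z1, z2: (N, d) embeddings of two augmented views of the same N nodes;
    cluster_ids: (N,) cluster assignment per node from a joint clustering step."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature                    # cross-view similarity matrix
    same_cluster = cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)
    eye = torch.eye(len(z1), dtype=torch.bool)
    # Mask out would-be negatives that share the anchor's cluster (likely false negatives),
    # while keeping the true positive on the diagonal.
    sim = sim.masked_fill(same_cluster & ~eye, float("-inf"))
    labels = torch.arange(len(z1))                     # the positive of row i is column i
    return F.cross_entropy(sim, labels)

loss = debiased_info_nce(torch.randn(8, 16), torch.randn(8, 16), torch.randint(0, 3, (8,)))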

Reports by organizations on the topic "State representation learning"

1

Babu M.G., Sarath, Debjani Ghosh, Jaideep Gupte, Md Asif Raza, Eric Kasper and Priyanka Mehra. Kerala's Grass-roots-led Pandemic Response: Deciphering the Strength of Decentralisation. Institute of Development Studies (IDS), June 2021. http://dx.doi.org/10.19088/ids.2021.049.

Full text
Abstract:
This paper presents an analysis of the role of decentralised institutions to understand the learning and challenges of the grass-roots-led pandemic response of Kerala. The study is based on interviews with experts and frontline workers to ensure the representation of all stakeholders dealing with the outbreak, from the state level to the household level, and a review of published government orders, health guidelines, and news articles. The outcome of the study shows that along with the decentralised system of governance, the strong grass-roots-level network of Accredited Social Health Activists (ASHA) workers, volunteer groups, and Kudumbashree members played a pivotal role in pandemic management in the state. The efficient functioning of local bodies in the state, experience gained from successive disasters, and the Nipah outbreak naturally aided grass-roots-level actions. The lessons others can draw from Kerala are the importance of public expenditure on health, investment for building social capital, and developing the local self-delivery system.
2

Iatsyshyn, Anna V., Valeriia O. Kovach, Yevhen O. Romanenko, Iryna I. Deinega, Andrii V. Iatsyshyn, Oleksandr O. Popov, Yulii G. Kutsan, Volodymyr O. Artemchuk, Oleksandr Yu Burov and Svitlana H. Lytvynova. Application of augmented reality technologies for preparation of specialists of new technological era. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3749.

Full text
Abstract:
Augmented reality is one of the most modern information visualization technologies. The research analyses a number of scientific studies on different aspects of the development and application of augmented reality technology, and describes practical examples of augmented reality technologies in various industries. Augmented reality technologies are most often used for: social interaction (communication, entertainment and games); education; tourism; and purchase/sale and presentation. There are various scientific and mass events in Ukraine, as well as specialized training, to promote augmented reality technologies. The research yields the following results: the main benefits that educational institutions would receive from introducing augmented reality technology are highlighted; it is determined that applying augmented reality technologies in education would contribute to the development of these technologies and therefore increase the need for specialists in augmented reality; the growth of students' professional level due to the application of augmented reality technologies is demonstrated; features of adapting augmented reality technologies to the learning disciplines of students of different educational institutions are outlined; it is advisable to apply an integrated approach when preparing future professionals of the new technological era; and the application of augmented reality technologies increases motivation to learn and the level of information assimilation, owing to the variety and interactivity of its visual representation. The main difficulties in applying augmented reality technologies are financial, professional and methodical. The following factors are necessary for the introduction of augmented reality technologies: state support for such projects and state procurement for the development of augmented reality technologies; scientific research and experimental confirmation of the effectiveness and pedagogical expediency of applying augmented reality technologies in the training of specialists of different specialties; and the systematic conduct of national and international events on the dissemination and application of augmented reality technology. It is confirmed that the application of augmented reality technologies is appropriate for training future specialists of the new technological era.
3

Singh, Abhijeet, Mauricio Romero and Karthik Muralidharan. COVID-19 Learning Loss and Recovery: Panel Data Evidence from India. Research on Improving Systems of Education (RISE), September 2022. http://dx.doi.org/10.35489/bsg-risewp_2022/112.

Full text
Abstract:
We use a near-representative household panel survey of ∼19,000 primary-school-aged children in rural Tamil Nadu to study the extent of ‘learning loss’ after COVID-19 school closures, the pace of recovery in the months after schools reopened, and the role of a flagship compensatory intervention introduced by the state government. Students tested in December 2021, after 18 months of school closures, displayed severe deficits in learning of about 0.7 standard deviations (σ) in math and 0.34σ in language compared to identically-aged students in the same villages in 2019. Using multiple rounds of in-person testing, we find that two-thirds of this deficit was made up in the 6 months after school reopening. Using value-added models, we attribute ∼24% of the cohort-level recovery to a government-run after-school remediation program which improved test scores for attendees by 0.17σ in math and 0.09σ in Tamil after 3-4 months. Further, while learning loss was regressive, the recovery was progressive, likely reflecting (in part) the greater take up of the remediation program by more socioeconomically disadvantaged students. These positive results from a state-wide program delivered at scale by the government may provide a useful template for both recovery from COVID-19 learning losses, and bridging learning gaps more generally in low-and-middle-income countries.
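For readers unfamiliar with the method, a stylized value-added specification of the kind referred to above (an illustration, not the authors' exact model) regresses a child's current score on a lagged score and an indicator for attending the remediation program:

y_{i,t} = \alpha + \lambda\, y_{i,t-1} + \beta\, \mathrm{Remediation}_{i} + X_{i}'\gamma + \varepsilon_{i,t}

where y_{i,t} is child i's test score in round t, Remediation_i indicates take-up of the after-school program, X_i collects household controls, and \beta is the program's estimated contribution to the recovery.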
4

Lalisse, Matthias. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach. Institute for New Economic Thinking Working Paper Series, February 2022. http://dx.doi.org/10.36687/inetwp178.

Full text
Abstract:
How much does money drive legislative outcomes in the United States? In this article, we use aggregated campaign finance data as well as a Transformer-based text embedding model to predict roll call votes for legislation in the US Congress with more than 90% accuracy. In a series of model comparisons in which the input feature sets are varied, we investigate the extent to which campaign finance is predictive of voting behavior in comparison with variables like partisan affiliation. We find that the financial interests backing a legislator’s campaigns are independently predictive in both chambers of Congress, but also uncover a sizable asymmetry between the Senate and the House of Representatives. These findings are cross-referenced with a Representational Similarity Analysis (RSA) linking legislators’ financial and voting records, in which we show that “legislators who vote together get paid together”, again discovering an asymmetry between the House and the Senate in the additional predictive power of campaign finance once party is accounted for. We suggest an explanation of these facts in terms of Thomas Ferguson’s Investment Theory of Party Competition: due to a number of structural differences between the House and Senate, but chiefly the lower amortized cost of obtaining individuated influence with Senators, political investors prefer operating on the House, using the party as a proxy.
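Representational Similarity Analysis of the kind mentioned above compares the pairwise similarity structure of two "views" of the same units. The sketch below is a minimal illustration on random placeholder data, not the paper's code; the feature dimensions and the choice of correlation distance with a Spearman rank correlation are assumptions.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
finance = rng.normal(size=(50, 20))    # 50 legislators x campaign-finance features (assumed)
votes = rng.normal(size=(50, 300))     # 50 legislators x roll-call vote features (assumed)

# Representational dissimilarity structure of each view, as condensed pairwise distances.
rdm_finance = pdist(finance, metric="correlation")
rdm_votes = pdist(votes, metric="correlation")

# RSA score: rank correlation between the two dissimilarity structures.
rho, pval = spearmanr(rdm_finance, rdm_votes)
print(f"RSA (Spearman rho) = {rho:.3f}, p = {pval:.3g}")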
5

Tarasenko, Rostyslav O., Svitlana M. Amelina, Yuliya M. Kazhan and Olga V. Bondarenko. The use of AR elements in the study of foreign languages at the university. CEUR Workshop Proceedings, November 2020. http://dx.doi.org/10.31812/123456789/4421.

Full text
Abstract:
The article analyses the impact of using AR technology in the study of a foreign language by university students. It is pointed out that AR technology can be a good tool for learning a foreign language. The use of AR elements in the course of studying a foreign language, in particular in the form of virtual excursions, is proposed. The advantages of using AR technology in the study of German are identified, namely: the possibility of engaging different channels of information perception, an integral representation of the studied object, faster and better memorization of new vocabulary, and the development of communicative foreign language skills. The ease and accessibility of using QR codes to obtain information about the object of study from open Internet sources is shown. The results of a survey of students after the virtual tours are presented. A reorientation of the methodological support for the study of a foreign language at universities is proposed. Attention is drawn to the use of AR elements to support students with different learning styles (audio, visual, kinesthetic).
6

State Legislator Representation: A Data-Driven Learning Guide. Ann Arbor, MI: Inter-university Consortium for Political and Social Research, April 2009. http://dx.doi.org/10.3886/stateleg.

Full text