Academic literature on the topic 'States representation learning'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'States representation learning.'

Journal articles on the topic "States representation learning"

1. Konidaris, George, Leslie Pack Kaelbling, and Tomas Lozano-Perez. "From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning." Journal of Artificial Intelligence Research 61 (January 31, 2018): 215–89. http://dx.doi.org/10.1613/jair.5575.

Abstract:
We consider the problem of constructing abstract representations for planning in high-dimensional, continuous environments. We assume an agent equipped with a collection of high-level actions, and construct representations provably capable of evaluating plans composed of sequences of those actions. We first consider the deterministic planning case, and show that the relevant computation involves set operations performed over sets of states. We define the specific collection of sets that is necessary and sufficient for planning, and use them to construct a grounded abstract symbolic representation that is provably suitable for deterministic planning. The resulting representation can be expressed in PDDL, a canonical high-level planning domain language; we construct such a representation for the Playroom domain and solve it in milliseconds using an off-the-shelf planner. We then consider probabilistic planning, which we show requires generalizing from sets of states to distributions over states. We identify the specific distributions required for planning, and use them to construct a grounded abstract symbolic representation that correctly estimates the expected reward and probability of success of any plan. In addition, we show that learning the relevant probability distributions corresponds to specific instances of probabilistic density estimation and probabilistic classification. We construct an agent that autonomously learns the correct abstract representation of a computer game domain, and rapidly solves it. Finally, we apply these techniques to create a physical robot system that autonomously learns its own symbolic representation of a mobile manipulation task directly from sensorimotor data---point clouds, map locations, and joint angles---and then plans using that representation. 
Together, these results establish a principled link between high-level actions and abstract representations, a concrete theoretical foundation for constructing abstract representations with provable properties, and a practical mechanism for autonomously learning abstract high-level representations.
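The deterministic half of this construction can be illustrated with plain set operations. The sketch below is a toy under stated assumptions (the function `plan_feasible`, the precondition/image pairs, and the abstract states are all illustrative, not the paper's formalism): a plan of high-level actions is executable when, at each step, every reachable state lies in the next action's precondition set.

```python
# Hypothetical sketch of the set-based feasibility test the abstract describes:
# an action is modeled by its precondition set (states it can be executed from)
# and its image set (states it can terminate in). A plan is executable iff each
# successive set of reachable states is contained in the next precondition.

def plan_feasible(start_states, plan):
    """plan: list of (precondition_set, image_set) pairs."""
    reachable = set(start_states)
    for precondition, image in plan:
        if not reachable <= precondition:   # some reachable state not covered
            return False
        reachable = set(image)              # after executing, we are in the image
    return True

# Toy domain with abstract states {0..5}
opt_a = ({0, 1}, {2, 3})      # executable from {0,1}, ends in {2,3}
opt_b = ({2, 3, 4}, {5})      # executable from {2,3,4}, ends in {5}

ok = plan_feasible({0}, [opt_a, opt_b])    # feasible chain
bad = plan_feasible({0}, [opt_b, opt_a])   # infeasible: {0} outside opt_b's precondition
```

The paper's contribution is showing that exactly such sets are necessary and sufficient for deterministic planning, and that their probabilistic analogues are distributions over states.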
2. Scarpetta, Silvia, Zhaoping Li, and John Hertz. "Learning in an Oscillatory Cortical Model." Fractals 11, suppl. 1 (February 2003): 291–300. http://dx.doi.org/10.1142/s0218348x03001951.

Abstract:
We study a model of generalized-Hebbian learning in asymmetric oscillatory neural networks modeling cortical areas such as hippocampus and olfactory cortex. The learning rule is based on the synaptic plasticity observed experimentally, in particular long-term potentiation and long-term depression of the synaptic efficacies depending on the relative timing of the pre- and postsynaptic activities during learning. The learned memory or representational states can be encoded by both the amplitude and the phase patterns of the oscillating neural populations, enabling more efficient and robust information coding than in conventional models of associative memory or input representation. Depending on the class of nonlinearity of the activation function, the model can function as an associative memory for oscillatory patterns (nonlinearity of class II) or can generalize from or interpolate between the learned states, appropriate for the function of input representation (nonlinearity of class I). In the former case, simulations of the model exhibit a first-order transition between the "disordered state" and the "ordered" memory state.
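The timing-dependent plasticity the abstract builds on can be sketched with a standard asymmetric STDP window (the exponential kernel and its parameters are generic textbook assumptions, not the paper's exact rule): potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise.

```python
import math

# Minimal STDP-style kernel sketch (illustrative, not the paper's rule).
def stdp_dw(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """dt = t_post - t_pre (ms). Returns the synaptic weight change."""
    if dt > 0:                      # pre fired before post -> LTP
        return a_plus * math.exp(-dt / tau)
    else:                           # post fired before (or with) pre -> LTD
        return -a_minus * math.exp(dt / tau)

ltp = stdp_dw(10.0)    # positive weight change (potentiation)
ltd = stdp_dw(-10.0)   # negative weight change (depression)
```

In the paper's oscillatory setting, such a temporally asymmetric rule is what lets both the amplitude and the phase of population activity carry learned information.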
3. Zhu, Zheng-Mao, Shengyi Jiang, Yu-Ren Liu, Yang Yu, and Kun Zhang. "Invariant Action Effect Model for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9260–68. http://dx.doi.org/10.1609/aaai.v36i8.20913.

Abstract:
Good representations can help RL agents perform concise modeling of their surroundings, and thus support effective decision-making in complex environments. Previous methods learn good representations by imposing extra constraints on dynamics. However, from a causal perspective, the causation between the action and its effect is not fully considered in those methods, which leads them to ignore the underlying relations among the action effects on the transitions. Based on the intuition that the same action always causes similar effects among different states, we induce such causation by taking the invariance of action effects among states as the relation. By explicitly utilizing such invariance, in this paper, we show that a better representation can be learned and potentially improves the sample efficiency and the generalization ability of the learned policy. We propose the Invariant Action Effect Model (IAEM) to capture the invariance in action effects, where the effect of an action is represented as the residual of representations from neighboring states. IAEM is composed of two parts: (1) a new contrastive-based loss to capture the underlying invariance of action effects; (2) an individual action effect module with a self-adapted weighting strategy to tackle the corner cases where the invariance does not hold. Extensive experiments on two benchmarks, Grid-World and Atari, show that the representations learned by IAEM preserve the invariance of action effects. Moreover, with the invariant action effect, IAEM can accelerate the learning process by 1.6x, rapidly generalize to new environments by fine-tuning on a few components, and outperform other dynamics-based representation methods by 1.4x in limited steps.
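The core quantity, the "effect of an action as the residual of representations from neighboring states", is easy to make concrete. The sketch below is an illustration under assumptions (hand-coded embeddings and a plain squared distance, not IAEM's learned encoder or its contrastive loss): effects of the same action should be close, effects of different actions far apart.

```python
# Illustrative sketch of the invariant-action-effect idea (not IAEM's code).
def effect(z_s, z_next):
    """Action effect = residual between neighboring state representations."""
    return [b - a for a, b in zip(z_s, z_next)]

def sqdist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

# Two transitions under the SAME action yield the same residual here...
e1 = effect([0.0, 0.0], [1.0, 0.5])
e2 = effect([2.0, 1.0], [3.0, 1.5])
# ...while a different action yields a different residual.
e3 = effect([0.0, 0.0], [0.0, -1.0])

same_action_gap = sqdist(e1, e2)     # zero for identical residuals
other_action_gap = sqdist(e1, e3)    # strictly larger
```

A contrastive loss then pulls same-action effect pairs together and pushes different-action pairs apart, which is what shapes the representation.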
4. Yue, Yang, Bingyi Kang, Zhongwen Xu, Gao Huang, and Shuicheng Yan. "Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11069–77. http://dx.doi.org/10.1609/aaai.v37i9.26311.

Abstract:
Deep reinforcement learning (RL) algorithms suffer severe performance degradation when the interaction data is scarce, which limits their real-world application. Recently, visual representation learning has been shown to be effective and promising for boosting sample efficiency in RL. These methods usually rely on contrastive learning and data augmentation to train a transition model, which is different from how the model is used in RL: performing value-based planning. Accordingly, the representation learned by these visual methods may be good for recognition but not optimal for estimating state value and solving the decision problem. To address this issue, we propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making. More specifically, VCR trains a model to predict the future state (also referred to as the "imagined state") based on the current one and a sequence of actions. Instead of aligning this imagined state with a real state returned by the environment, VCR applies a Q value head on both of the states and obtains two distributions of action values. Then a distance is computed and minimized to force the imagined state to produce a similar action value prediction as that of the real state. We develop two implementations of the above idea for the discrete and continuous action spaces respectively. We conduct experiments on Atari 100k and DeepMind Control Suite benchmarks to validate their effectiveness for improving sample efficiency. It has been demonstrated that our methods achieve new state-of-the-art performance for search-free RL algorithms.
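The value-consistency objective can be sketched in a few lines. Everything below is an illustrative assumption (a linear Q head, softmax action-value distributions, and a squared distance; VCR's actual networks and distance differ): the same Q head is applied to both the real and the imagined state, and the loss penalizes disagreement between the two action-value distributions rather than between the states themselves.

```python
import math

# Sketch of the value-consistency idea (names and shapes are illustrative).
def q_head(state, weights):
    """Linear Q head: one score per action."""
    return [sum(w * s for w, s in zip(row, state)) for row in weights]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def value_consistency_loss(real_state, imagined_state, weights):
    p = softmax(q_head(real_state, weights))
    q = softmax(q_head(imagined_state, weights))
    # squared distance between the two action-value distributions
    return sum((a - b) ** 2 for a, b in zip(p, q))

W = [[1.0, 0.0], [0.0, 1.0]]   # 2 actions, 2-dimensional states
zero = value_consistency_loss([0.5, -0.2], [0.5, -0.2], W)   # identical states
gap = value_consistency_loss([0.5, -0.2], [-0.5, 0.2], W)    # mismatched prediction
```

The design point is that two different states which induce the same action values incur no loss, so the representation is only constrained where it matters for decision-making.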
5. Chornozhuk, S. "The New Geometric “State-Action” Space Representation for Q-Learning Algorithm for Protein Structure Folding Problem." Cybernetics and Computer Technologies, no. 3 (October 27, 2020): 59–73. http://dx.doi.org/10.34229/2707-451x.20.3.6.

Abstract:
Introduction. The spatial protein structure folding is an important and current problem in computational biology. Considering the mathematical model of the task, it can easily be concluded that finding an optimal protein conformation in a three-dimensional grid is an NP-hard problem. Therefore, reinforcement learning techniques such as the Q-learning approach can be used to solve the problem. The article proposes a new geometric “state-action” space representation which differs significantly from all alternative representations used for this problem. The purpose of the article is to analyze existing representations of the state and action spaces for the Q-learning algorithm applied to the protein structure folding problem, reveal their advantages and disadvantages, and propose the new geometric “state-action” representation. The goal is then to compare the existing and proposed approaches and draw conclusions, also describing possible future steps of further research. Result. The work of the proposed algorithm is compared with others on the basis of 10 known chains of length 48 first proposed in [16]. For each of the chains, the Q-learning algorithm with the proposed “state-action” representation outperformed the same Q-learning algorithm with alternative existing representations in terms of both the average and minimal energy values of the resulting conformations. Moreover, many existing representations are used for 2D protein structure prediction; however, during the experiments both the existing and proposed representations were adapted to solve the problem in 3D, which is a more computationally demanding task. Conclusion. The quality of the Q-learning algorithm with the proposed geometric “state-action” space representation has been experimentally confirmed; consequently, further research is promising.
Moreover, several steps of possible future research, such as combining the proposed approach with deep learning techniques, have already been suggested. Keywords: spatial protein structure, combinatorial optimization, relative coding, machine learning, Q-learning, Bellman equation, state space, action space, basis in 3D space.
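Whatever state-action representation is chosen, the Bellman-equation update at the core of tabular Q-learning is the same. The toy below uses hypothetical state and action names (the geometric representation itself is not reproduced here):

```python
# Tabular Q-learning update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
actions = ["fold_left", "fold_right"]   # hypothetical relative-coding actions
q_update(Q, "s0", "fold_left", 1.0, "s1", actions)   # first visit: Q = 0.5
q_update(Q, "s0", "fold_left", 1.0, "s1", actions)   # moves toward 1.0: Q = 0.75
```

The representation question the article studies is precisely what the keys `s` and `a` of this table should encode so that learning converges to low-energy conformations.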
6. Lamanna, Leonardo, Alfonso Emilio Gerevini, Alessandro Saetti, Luciano Serafini, and Paolo Traverso. "On-line Learning of Planning Domains from Sensor Data in PAL: Scaling up to Large State Spaces." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11862–69. http://dx.doi.org/10.1609/aaai.v35i13.17409.

Abstract:
We propose an approach to learn an extensional representation of a discrete deterministic planning domain from observations in a continuous space navigated by the agent's actions. This is achieved through the use of a perception function providing the likelihood of a real-valued observation being in a given state of the planning domain after executing an action. The agent learns an extensional representation of the domain (the set of states and the transitions between states caused by actions) and the perception function on-line, while it acts to accomplish its task. In order to provide a practical approach that can scale up to large state spaces, a “draft” intensional (PDDL-based) model of the planning domain is used to guide the exploration of the environment and learn the states and state transitions. The proposed approach uses a novel algorithm to (i) construct the extensional representation of the domain by interleaving symbolic planning in the PDDL intensional representation and search in the state transition graph of the extensional representation; (ii) incrementally refine the intensional representation taking into account information about the actions that the agent cannot execute. An experimental analysis shows that the novel approach can scale up to large state spaces, thus overcoming the limits in scalability of the previous work.
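The "extensional representation" is, concretely, a set of states plus a transition table accumulated on-line. The data structures below are illustrative assumptions (PAL's actual implementation differs), including the record of failed actions that feeds the refinement of the intensional PDDL model:

```python
# Sketch of an extensional planning-domain representation built on-line.
class ExtensionalDomain:
    def __init__(self):
        self.states = set()
        self.transitions = {}          # (state, action) -> next state
        self.failed = set()            # actions observed to be inexecutable

    def observe(self, state, action, next_state):
        """Record one executed action and the transition it caused."""
        self.states.update({state, next_state})
        self.transitions[(state, action)] = next_state

    def observe_failure(self, state, action):
        """Record an inexecutable action; used to refine the intensional model."""
        self.failed.add((state, action))

dom = ExtensionalDomain()
dom.observe("room1", "move_east", "room2")
dom.observe("room2", "move_east", "room3")
dom.observe_failure("room3", "move_east")    # wall: action not executable here
```

Symbolic planning in the draft PDDL model then decides where to explore next, so the table is filled in a goal-directed rather than exhaustive way.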
7. Sapena, Oscar, Eva Onaindia, and Eliseo Marzal. "Automated Feature Extraction for Planning State Representation." Inteligencia Artificial 27, no. 74 (October 10, 2024): 227–42. http://dx.doi.org/10.4114/intartif.vol27iss74pp227-242.

Abstract:
Deep learning methods have recently emerged as a mechanism for generating embeddings of planning states without the need to predefine feature spaces. In this work, we advocate for an automated, cost-effective and interpretable approach to extract representative features of planning states from high-level language. We present a technique that builds on the objects' types and yields a generalization over an entire planning domain, enabling the encoding of numerical state and goal information of individual planning tasks. The proposed representation is then evaluated in a task for learning heuristic functions for particular domains. A comparative analysis with one of the best current sequential planners and a recent ML-based approach demonstrates the efficacy of our method in improving planner performance.
8. O’Donnell, Ryan, and John Wright. "Learning and Testing Quantum States via Probabilistic Combinatorics and Representation Theory." Current Developments in Mathematics 2021, no. 1 (2021): 43–94. http://dx.doi.org/10.4310/cdm.2021.v2021.n1.a2.
9. Zhang, Hengyuan, Suyao Zhao, Ruiheng Liu, Wenlong Wang, Yixin Hong, and Runjiu Hu. "Automatic Traffic Anomaly Detection on the Road Network with Spatial-Temporal Graph Neural Network Representation Learning." Wireless Communications and Mobile Computing 2022 (June 20, 2022): 1–12. http://dx.doi.org/10.1155/2022/4222827.

Abstract:
Traffic anomaly detection is an essential part of an intelligent transportation system. Automatic traffic anomaly detection can provide sufficient decision-support information for road network operators, travelers, and other stakeholders. This research proposes a novel automatic traffic anomaly detection method based on spatial-temporal graph neural network representation learning. We divide traffic anomaly detection into two steps: the first learns the implicit graph feature representation of multivariate time series of traffic flows based on a graph attention model to predict the traffic states; the second detects traffic anomalies using a graph deviation score that compares the deviation of the predicted traffic states with the observed traffic states. Experiments on real network datasets show that with an end-to-end workflow and spatial-temporal representation of traffic states, this method can detect traffic anomalies accurately and automatically and achieves better performance over baselines.
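The second step, deviation scoring, can be sketched independently of the graph attention model. The scoring formula below is a common generic choice, not necessarily the paper's exact one: normalize the per-sensor gap between predicted and observed traffic states and flag time steps whose maximum deviation exceeds a threshold.

```python
# Sketch of deviation-score anomaly flagging (illustrative formula).
def deviation_scores(predicted, observed, scale=1.0):
    """One normalized absolute deviation per sensor."""
    return [abs(o - p) / scale for p, o in zip(predicted, observed)]

def is_anomalous(predicted, observed, threshold=3.0, scale=1.0):
    return max(deviation_scores(predicted, observed, scale)) > threshold

pred = [50.0, 48.0, 52.0]          # predicted speeds at three sensors
normal = [49.0, 47.5, 53.0]        # ordinary fluctuation
incident = [49.0, 12.0, 53.0]      # sensor 2 collapses: an anomaly

flag_normal = is_anomalous(pred, normal, threshold=3.0)
flag_incident = is_anomalous(pred, incident, threshold=3.0)
```

In the paper the predictions come from the learned spatial-temporal representation, which is what makes the deviation informative.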
10. Dayan, Peter. "Improving Generalization for Temporal Difference Learning: The Successor Representation." Neural Computation 5, no. 4 (July 1993): 613–24. http://dx.doi.org/10.1162/neco.1993.5.4.613.

Abstract:
Estimation of returns over time, the focus of temporal difference (TD) algorithms, imposes particular constraints on good function approximators or representations. Appropriate generalization between states is determined by how similar their successors are, and representations should follow suit. This paper shows how TD machinery can be used to learn such representations, and illustrates, using a navigation task, the appropriately distributed nature of the result.
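The TD machinery Dayan describes reduces to a compact update: the successor matrix entry `M[s][j]` estimates the discounted expected future occupancy of state `j` starting from `s`, and is learned with the same rule as a TD(0) value function (the tabular setup below is a minimal sketch):

```python
# TD(0) learning of the successor representation.
def sr_update(M, s, s_next, n_states, alpha=0.1, gamma=0.9):
    for j in range(n_states):
        indicator = 1.0 if j == s else 0.0       # state s occupies itself now
        target = indicator + gamma * M[s_next][j]
        M[s][j] += alpha * (target - M[s][j])

n = 3
M = [[0.0] * n for _ in range(n)]
for _ in range(100):                 # repeatedly observe the transition 0 -> 1
    sr_update(M, 0, 1, n)

# M[0][0] converges to 1.0: state 0 always "occupies" itself once,
# and state 1's successors are never visited in this toy stream.
```

Generalization then follows for free: two states with similar rows of `M` have similar successors, so a value function linear in `M` generalizes between them, which is the point of the paper.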

Dissertations / Theses on the topic "States representation learning"

1. Shi, Fangzhou. "Towards Molecule Generation with Heterogeneous States via Reinforcement Learning." Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/22335.

Abstract:
De novo molecular design and generation are frequently prescribed in chemistry and biology, as they play a critical role in maintaining the prosperity of the chemical industry and benefiting drug discovery. Nowadays, many significant problems in this field are based on the philosophy of designing molecular structures towards specific desired properties. This research is meaningful in both the medical and AI fields, as it can benefit novel drug discovery for some diseases. However, it remains a challenging task due to the large size of chemical space. In recent years, reinforcement learning-based methods have leveraged graphs to represent molecules and generated molecules as a decision-making process. However, this vanilla graph representation may neglect the intrinsic context information within molecules and accordingly limit the generation performance. In this paper, we propose to augment the original graph states with SMILES context vectors. As a result, SMILES representations are easily processed by a simple language model such that the general semantic features of a molecule can be extracted, while the graph representations perform better in handling the topological relationship of each atom. Moreover, we propose a framework that combines supervised learning and reinforcement learning to jointly consider these two heterogeneous state representations of a molecule, fusing the information from both of them and extracting more comprehensive features so that more sophisticated decisions can be made by the policy network. Our model also introduces two attention mechanisms, action-attention and graph-attention, to further improve performance. We conduct our experiments on a practical dataset, ZINC, and the results demonstrate that our framework can outperform other baselines in the learning performance of molecule generation and chemical property optimization.
2. Castanet, Nicolas. "Automatic State Representation and Goal Selection in Unsupervised Reinforcement Learning." Electronic thesis, Sorbonne Université, 2025. http://www.theses.fr/2025SORUS005.

Abstract:
In the past few years, reinforcement learning (RL) achieved tremendous success by training specialized agents able to drastically exceed human performance in complex games like chess or Go, and in robotics applications. These agents often lack versatility, requiring human engineering to design their behavior for specific tasks with a predefined reward signal, which limits their ability to handle new circumstances. This specialization results in poor generalization capabilities, making the agents vulnerable to small variations of external factors and to adversarial attacks. A long-term objective of artificial intelligence research is to move beyond today's specialized RL agents toward more generalist systems endowed with the capability to adapt in real time to unpredictable external factors and to new downstream tasks. This work aims in that direction, tackling unsupervised reinforcement learning problems, a framework in which agents are not provided with external rewards and thus must autonomously learn new tasks throughout their lifespan, guided by intrinsic motivations. The concept of intrinsic motivation arises from our understanding of humans' ability to exhibit certain self-sufficient behaviors during their development, such as playing or being curious. This ability allows individuals to design and solve their own tasks, and to build inner physical and social representations of their environments, acquiring an open-ended set of skills throughout their lifespan as a result. This thesis is part of the research effort to incorporate these essential features into artificial agents, leveraging goal-conditioned reinforcement learning to design agents able to discover and master every feasible goal in complex environments. In our first contribution, we investigate autonomous intrinsic goal setting, as a versatile agent should be able to determine its own goals and the order in which to learn them so as to enhance its performance.
By leveraging a learned model of the agent's current goal-reaching abilities, we show that we can shape an optimal-difficulty goal distribution, enabling goals to be sampled in the Zone of Proximal Development (ZPD) of the agent, a psychological concept referring to the frontier between what a learner knows and what it does not: the space of knowledge that is not yet mastered but has the potential to be acquired. We demonstrate that targeting the agent's ZPD results in a significant increase in performance for a great variety of goal-reaching tasks. Another core competence is to extract a relevant representation of what matters in the environment from observations coming from any available sensors. We address this question in our second contribution, highlighting the difficulty of learning a correct representation of the environment in an online setting, where the agent acquires knowledge incrementally as it makes progress. In this context, recently achieved goals are outliers, as there are very few occurrences of these new skills in the agent's experience, making their representations brittle. We leverage the adversarial setting of Distributionally Robust Optimization so that the agent's representations of such outliers are reliable. We show that our method leads to a virtuous circle, as learning accurate representations for new goals fosters the exploration of the environment.
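Intermediate-difficulty goal sampling of the kind the thesis describes can be sketched simply. Everything here is a hand-coded assumption (the thesis learns the success model and shapes the distribution; the triangular weighting, goal names, and probabilities below are illustrative): prefer goals whose predicted reaching probability is neither near 0 (too hard) nor near 1 (already mastered).

```python
import random

# Illustrative ZPD-style goal sampler (not the thesis's actual method).
def zpd_weight(p_success, band=0.25):
    """Sampling weight that peaks at p = 0.5 and vanishes outside the band."""
    return max(0.0, band - abs(p_success - 0.5)) / band

def sample_goal(goal_probs, rng):
    goals = list(goal_probs)
    weights = [zpd_weight(goal_probs[g]) for g in goals]
    return rng.choices(goals, weights=weights, k=1)[0]

# Hypothetical goals with predicted success probabilities
goal_probs = {"easy": 0.95, "frontier": 0.5, "hard": 0.05}
rng = random.Random(0)
picks = [sample_goal(goal_probs, rng) for _ in range(20)]
# With this weighting, mastered and unreachable goals get zero weight,
# so only the frontier goal is ever sampled.
```

The design choice mirrors the ZPD idea: training signal is concentrated on goals the agent has a real but uncertain chance of reaching.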
3. Boots, Byron. "Spectral Approaches to Learning Predictive Representations." PhD diss., Carnegie Mellon University, Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/131.

Abstract:
A central problem in artificial intelligence is to choose actions to maximize reward in a partially observable, uncertain environment. To do so, we must obtain an accurate environment model, and then plan to maximize reward. However, for complex domains, specifying a model by hand can be a time-consuming process. This motivates an alternative approach: learning a model directly from observations. Unfortunately, learning algorithms often recover a model that is too inaccurate to support planning or too large and complex for planning to succeed; or, they require excessive prior domain knowledge or fail to provide guarantees such as statistical consistency. To address this gap, we propose spectral subspace identification algorithms which provably learn compact, accurate, predictive models of partially observable dynamical systems directly from sequences of action-observation pairs. Our research agenda includes several variations of this general approach: spectral methods for classical models like Kalman filters and hidden Markov models, batch algorithms and online algorithms, and kernel-based algorithms for learning models in high- and infinite-dimensional feature spaces. All of these approaches share a common framework: the model's belief space is represented as predictions of observable quantities and spectral algorithms are applied to learn the model parameters. Unlike the popular EM algorithm, spectral learning algorithms are statistically consistent, computationally efficient, and easy to implement using established matrix algebra techniques. We evaluate our learning algorithms on a series of prediction and planning tasks involving simulated data and real robotic systems.
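The common spectral recipe can be illustrated on a toy linear system (a stripped-down sketch, not full subspace identification, which involves several more steps): estimate the covariance between "past" and "future" observation windows and take its SVD; the leading singular subspace is the learned predictive state space.

```python
import numpy as np

# Toy spectral-identification sketch: a 1-dimensional latent AR(1) process
# observed through two noisy sensors. The past-future covariance is then
# (approximately) rank 1, and its top singular vector spans the predictive
# subspace.
rng = np.random.default_rng(0)
T = 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=0.1)

obs = np.stack([x + rng.normal(scale=0.01, size=T),
                2.0 * x + rng.normal(scale=0.01, size=T)])   # shape (2, T)

past, future = obs[:, :-1], obs[:, 1:]
cov = future @ past.T / (T - 1)          # past-future covariance, shape (2, 2)
U, s, Vt = np.linalg.svd(cov)

# One singular value dominates: the system is effectively 1-dimensional,
# and U[:, 0] spans the learned predictive state space.
rank_1_energy = s[0] / s.sum()
```

Because the estimate is a plain SVD of empirical moments, it is statistically consistent and needs no EM-style iteration, which is the contrast the abstract draws.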
4. Nuzzo, Francesco. "Unsupervised State Representation Pretraining in Reinforcement Learning Applied to Atari Games." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288189.

Abstract:
State representation learning aims to extract useful features from the observations received by a reinforcement learning agent interacting with an environment. These features allow the agent to take advantage of a low-dimensional and informative representation to improve efficiency in solving tasks. In this work, we study unsupervised state representation learning in Atari games. We use an RNN architecture for learning features that depend on sequences of observations, and pretrain a single-frame encoder architecture with different methods on randomly collected frames. Finally, we empirically evaluate how pretrained state representations perform compared with a randomly initialized architecture. For this purpose, we let an RL agent train on 22 different Atari 2600 games, initializing the encoder either randomly or with one of the following unsupervised methods: VAE, CPC and ST-DIM. Promising results are obtained in most games when ST-DIM is chosen as the pretraining method, while VAE often performs worse than a random initialization.
5. Sadeghi, Mohsen. "Representation and Interaction of Sensorimotor Learning Processes." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278611.

Abstract:
Human sensorimotor control is remarkably adept at utilising contextual information to learn and recall systematic sensorimotor transformations. Here, we investigate the motor representations that underlie such learning, and examine how motor memories acquired based on different contextual information interact. Using a novel three-dimensional robotic manipulandum, the 3BOT, we examined the spatial transfer of learning across various movement directions in a 3D environment, while human subjects performed reaching movements under a velocity-dependent force field. The obtained pattern of generalisation suggested that the representation of dynamic learning was most likely defined in a target-based, rather than an extrinsic, coordinate system. We further examined how motor memories interact when subjects adapt to force fields applied in orthogonal dimensions. We found that, unlike opposing fields, learning two spatially orthogonal force fields led to the formation of separate motor memories, which neither interfered with nor facilitated each other. Moreover, we demonstrated a novel, more general aspect of the spontaneous recovery phenomenon using a two-dimensional force field task: when subjects learned two orthogonal force fields consecutively, in the following phase of clamped error feedback, the expression of adaptation spontaneously rotated from the direction of the second force field towards the direction of the first force field. Finally, we examined the interaction of sensorimotor memories formed based on separate contextual information. Subjects performed reciprocating reaching and object manipulation tasks under two alternating contexts (movement directions), while we manipulated the dynamics of the task in each context separately. The results suggested that separate motor memories were formed for the dynamics of the task in different contexts, and that these motor memories interacted by sharing error signals to enhance learning.
Importantly, the extent of interaction between the context-dependent motor memories was not fixed, but adaptively changed according to the task dynamics to potentially improve overall performance. Together, our experimental and theoretical results add to the understanding of the mechanisms that underlie sensorimotor learning, and the way these mechanisms interact under various tasks and different dynamics.
APA, Harvard, Vancouver, ISO, and other styles
6

Gabriel, Florence. "Mental representations of fractions: development, stable state, learning difficulties and intervention." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209933.

Full text
Abstract:
Fractions are very hard to learn. As the joke goes, “Three out of two people have trouble with fractions”. Yet the invention of a notation for fractions is very ancient, dating back to Babylonians and Egyptians. Moreover, it is thought that ratio representation is innate. And obviously, fractions are part of our everyday life. We read them in recipes, we need them to estimate distances on maps or rebates in shops. In addition, fractions play a key role in science and mathematics, in probabilities, proportions and algebraic reasoning. Then why is it so hard for pupils to understand and use them? What is so special about fractions? As in other areas of numerical cognition, a fast-developing field in cognitive science, we tackled this paradox through a multi-pronged approach, investigating both adults and children.

Based on some recent research questions and intense debates in the literature, a first behavioural study examined the mental representations of the magnitude of fractions in educated adults. Behavioural observations from adults can indeed provide a first clue to explain the paradox raised by fractions. Contrary perhaps to most educated adults’ intuition, finding the value of a given fraction is not an easy operation. Fractions are complex symbols, and there is an on-going debate in the literature about how their magnitude (i.e. value) is processed. In a first study, we asked adult volunteers to decide as quickly as possible whether two fractions represent the same magnitude or not. Equivalent fractions (e.g. 1/4 and 2/8) were identified as representing the same number only about half of the time. In another experiment, adults were also asked to decide which of two fractions was larger. This paradigm offered different results, suggesting that participants relied on both the global magnitude of the fraction and the magnitude of the components. Our results showed that fraction processing depends on experimental conditions. Adults appear to use the global magnitude only in restricted circumstances, mostly with easy and familiar fractions.

In another study, we investigated the development of the mental representations of the magnitude of fractions. Previous studies in adults showed that fraction processing can be either based on the magnitude of the numerators and denominators or based on the global magnitude of fractions and the magnitude of their components. The type of processing depends on experimental conditions. In this experiment, 5th, 6th, 7th-graders, and adults were tested with two paradigms. First, they performed a same/different task. Second, they carried out a numerical comparison task in which they had to decide which of two fractions was larger. Results showed that 5th-graders do not rely on the representations of the global magnitude of fractions in the Numerical Comparison task, but those representations develop from grade 6 until grade 7. In the Same/Different task, participants only relied on componential strategies. From grade 6 on, pupils apply the same heuristics as adults in fraction magnitude comparison tasks. Moreover, we have shown that correlations between global distance effect and children’s general fraction achievement were significant.

Fractions are well known to represent a stumbling block for primary school children. In a third study, we tried to identify the difficulties encountered by primary school pupils. We observed that most 4th and 5th-graders had only a very limited notion of the meaning of fractions, basically referring to pieces of cakes or pizzas. The fraction as a notation for numbers appeared particularly hard to grasp.

Building upon these results, we designed an intervention programme. The intervention “From Pies to Numbers” aimed at improving children’s understanding of fractions as numbers. The intervention was based on various games in which children had to estimate, compare, and combine fractions represented either symbolically or as figures. 20 game sessions distributed over 3 months led to 15-20% improvement in tests assessing children's capacity to estimate and compare fractions; conversely, children in the control group who received traditional lessons improved more in procedural skills such as simplification of fractions and arithmetic operations with fractions. Thus, a short classroom intervention inducing children to play with fractions improved their conceptual understanding.

The results are discussed in light of recent research on the mental representation of the magnitude of fractions and educational theories. The importance of multidisciplinary approaches in psychology and education is also discussed.

In sum, by combining behavioural experiments in adults and children with intervention studies, we hope to have improved the understanding of how the brain processes mathematical symbols, while helping teachers get a better grasp of pupils’ difficulties and develop classroom activities that suit the needs of learners.


Doctorat en Sciences Psychologiques et de l'éducation

APA, Harvard, Vancouver, ISO, and other styles
7

Merckling, Astrid. "Unsupervised pretraining of state representations in a rewardless environment." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS141.

Full text
Abstract:
This thesis seeks to extend the capabilities of state representation learning (SRL) to help scale deep reinforcement learning (DRL) algorithms to continuous control tasks with high-dimensional sensory observations (such as images). SRL improves the performance of DRL algorithms by providing them with better inputs than the embeddings learned from scratch with end-to-end strategies. Specifically, this thesis addresses the problem of performing state estimation in the manner of deep unsupervised pretraining of state representations without reward. These representations must satisfy certain properties to allow the correct application of bootstrapping and other decision-making mechanisms common to supervised learning, such as being low-dimensional and guaranteeing the local consistency and topology (or connectivity) of the environment, which we seek to achieve through the models pretrained with the two SRL algorithms proposed in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
8

Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.

Full text
Abstract:
Network reconstruction is the process of recovering a unique structured representation of some dynamic system using input-output data and some additional knowledge about the structure of the system. Many network reconstruction algorithms have been proposed in recent years, most dealing with the reconstruction of strictly proper networks (i.e., networks that require delays in all dynamics between measured variables). However, no reconstruction technique presently exists capable of recovering both the structure and dynamics of networks where links are proper (delays in dynamics are not required) and not necessarily strictly proper. The ultimate objective of this dissertation is to develop algorithms capable of reconstructing proper networks, and this objective will be addressed in three parts. The first part lays the foundation for the theory of mathematical representations of proper networks, including an exposition on when such networks are well-posed (i.e., physically realizable). The second part studies the notions of abstractions of a network, which are other networks that preserve certain properties of the original network but contain less structural information. As such, abstractions require less a priori information to reconstruct from data than the original network, which allows previously unsolvable problems to become solvable. The third part addresses our original objective and presents reconstruction algorithms to recover proper networks in both the time domain and in the frequency domain.
APA, Harvard, Vancouver, ISO, and other styles
9

Hautot, Julien. "Représentation à base radiale pour l'apprentissage par renforcement visuel." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0093.

Full text
Abstract:
This thesis work falls within the context of Reinforcement Learning (RL) from image data. Unlike supervised learning, which enables performing various tasks such as classification, regression, or segmentation from an annotated database, RL allows learning without a database through interactions with an environment. In these methods, an agent, such as a robot, performs different actions to explore its environment and gather training data. Training such an agent involves trial and error; the agent is penalized when it fails at its task and rewarded when it succeeds. The goal for the agent is to improve its behavior to obtain the most long-term rewards. We focus on visual extractions in RL scenarios using first-person view images. The use of visual data often involves deep convolutional networks that work directly on images. However, these networks have significant computational complexity, lack interpretability, and sometimes suffer from instability. To overcome these difficulties, we investigated the development of a network based on radial basis functions, which enable sparse and localized activations in the input space. Radial basis function networks (RBFNs) peaked in the 1990s but were later supplanted by convolutional networks due to their high computational cost on images. In this thesis, we developed a visual feature extractor inspired by RBFNs, simplifying the computational cost on images. We used our network for solving first-person visual tasks and compared its results with various state-of-the-art methods, including end-to-end learning methods, state representation learning methods, and extreme machine learning methods. Different scenarios were tested from the VizDoom simulator and the Pybullet robotics physics simulator. In addition to comparing the rewards obtained after learning, we conducted various tests on noise robustness, parameter generation of our network, and task transfer to reality. The proposed network achieves the best performance in reinforcement learning on the tested scenarios while being easier to use and interpret. Additionally, our network is robust to various noise types, paving the way for the effective transfer of knowledge acquired in simulation to reality.
APA, Harvard, Vancouver, ISO, and other styles
10

Ford, Shelton J. "The effect of graphing calculators and a three-core representation curriculum on college students' learning of exponential and logarithmic functions." 2008. http://www.lib.ncsu.edu/theses/available/etd-11072008-135009/unrestricted/etd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "States representation learning"

1

McBride, Kecia Driver, ed. Visual media and the humanities: A pedagogy of representation. Knoxville: University of Tennessee Press, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Alden, John, Alexander H. Cohen, and Jonathan J. Ring. Gaming the System: Nine Games to Teach American Government Through Active Learning. Taylor & Francis Group, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Burge, Tyler. Perception: First Form of Mind. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198871002.001.0001.

Full text
Abstract:
Perception is the first form of representational mind to emerge in evolution. Three types of form are discussed: formal representational structure of perceptual states, formation characteristics in computations of perceptual states, and the form of the visual and visuomotor systems. The book distinguishes perception from non-perceptual sensing. The formal representational structure of perceptual states is developed via a systematic semantics for them—an account of what it is for them to be accurate or inaccurate. This semantics is elaborated by explaining how the representational form is embedded in an iconic format. These structures are then situated in what is known about the processing of perceptual representations, with emphasis on formation of perceptual categorizations. Features of processing that provide insight into the scope of the perceptual (paradigmatically visual) system are highlighted. Relations between these processes and associated perceptual-level capacities—conation, attention, memory, anticipation, affect, learning, imagining—are delineated. Roughly, a perceptual-level capacity is one that borrows its form and content from perception and involves processing that is no more complex or sophisticated than processing that occurs in the classical visual hierarchy. Relations between perception and these associated perceptual-level capacities are argued to occur within the perceptual and perceptual-motor systems. An account of what it is to occur within these systems is elaborated. An upshot is refinement of the distinction between perceptual-level capacities, on one hand, and thought and conception, on the other. Intermediate territory between perception-level representation and propositional thought is explored. The book is resolutely a work in philosophy of science. 
It attempts to understand perception by focusing on its form, function, and underlying capacities, as indicated in the sciences of perception, rather than by relying on introspection or ordinary talk about perception.
APA, Harvard, Vancouver, ISO, and other styles
5

Goldman, Alvin I. Theory of Mind. Edited by Eric Margolis, Richard Samuels, and Stephen P. Stich. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780195309799.013.0017.

Full text
Abstract:
The article provides an overview of ‘Theory of Mind’ (ToM) research, guided by two classifications. The first covers four competing approaches to mentalizing such as the theory-theory, modularity theory, rationality theory, and simulation theory. The second classification is the first-person/third-person contrast. Jerry Fodor claimed that commonsense psychology is so good at helping predict behavior that it is practically invisible. It works well because the intentional states it posits genuinely exist and possess the properties generally associated with them. The modularity model has two principal components. First, whereas the child-scientist approach claims that mentalizing utilizes domain-general cognitive equipment, the modularity approach posits one or more domain-specific modules, which use proprietary representations and computations for the mental domain. Second, the modularity approach holds that these modules are innate cognitive structures, which mature or come on line at preprogrammed stages and are not acquired through learning. The investigators concluded that autism impairs a domain-specific capacity dedicated to mentalizing. Gordon, Jane Heal, and Alvin Goldman explained simulation theory in such a way that mind readers simulate a target by trying to create similar mental states of their own as proxies or surrogates of those of the target. These initial pretend states are fed into the mind reader's own cognitive mechanisms to generate additional states, some of which are then imputed to the target.
APA, Harvard, Vancouver, ISO, and other styles
6

Boden, Margaret A. 2. General intelligence as the Holy Grail. Oxford University Press, 2018. http://dx.doi.org/10.1093/actrade/9780199602919.003.0002.

Full text
Abstract:
A host of state-of-the-art AI applications exist, designed for countless specific tasks and used in almost every area of life, by laymen and professionals alike. Many outperform even the most expert humans. In that sense, progress has been spectacular. But the AI pioneers were also hoping for systems with general intelligence. ‘General intelligence as the Holy Grail’ explains why artificial general intelligence is still highly elusive despite recent increases in computer power. It considers the general AI strategies in recent research—heuristics, planning, mathematical simplification, and different forms of knowledge representation—and discusses the concepts of the frame problem, agents and distributed cognition, machine learning, and generalist systems.
APA, Harvard, Vancouver, ISO, and other styles
7

Kenny, Neil, ed. Literature, Learning, and Social Hierarchy in Early Modern Europe. British Academy, 2022. http://dx.doi.org/10.5871/bacad/9780197267332.001.0001.

Full text
Abstract:
Before the ascendancy of the language of social class, European societies were conceived as hierarchies of orders, degrees, estates, dignities, and ranks. What was the relationship, from the fifteenth century to the seventeenth, between that social hierarchy and another major facet of early modern life—literature and learning (understood in a broad sense as literate cultural activity and production)? Literature and learning were not just contiguous with social hierarchy, but also overlapped with it. The volume fosters Europe-wide consideration of this relationship, rather than providing a systematic survey organized by territory, genre, discourse, or period. The territories featured are largely Western European—England, France, Germany and the Low Countries, Italy, and Portugal. The genres, discourses, and practices featured include poetry, theatre, masque, architecture, philosophy, law, printing, publishing, translating, and scribe-hiring. First, the volume examines the role of languages—especially elite written ones such as Latin, cosmopolitan vernaculars, or technical vocabulary—in enabling some groups to acquire social literacies and practices. The focus is not just on these processes of acquisition but also on the accompanying exclusions, resistances, doubts, and contradictions. Next, the role of cultural production in generating social status is examined in relation to printers and publishers, theatre actors, and a woman poet from an artisanal milieu. Some chapters then focus more on the literary and other representations themselves, examining how they represent social hierarchy and the place within it of certain groups. The closing chapters emphasize that the relationship of literature and learning to social hierarchy is profoundly two-way.
APA, Harvard, Vancouver, ISO, and other styles
8

Fox, Roy F. MediaSpeak. Praeger Publishers, 2000. http://dx.doi.org/10.5040/9798400684258.

Full text
Abstract:
This book defines and analyzes the content, structure, and values of three predominant types of public discourse, which are labeled Doublespeak, Salespeak, and Sensationspeak. These media messages are examined to determine how they are constructed and how they influence individuals, ideology, and culture. Discussions are illustrated with a diverse range of examples from popular culture, magazines, Internet sites, politics, television, and film. Fox argues that the Information Age has replaced actual reality with representations of reality. He states that electronic media dominates our lives. Together, these three voices saturate media and technology, profoundly influencing American culture. Fox suggests specific strategies for recognizing and understanding these coded messages. This lively and informative discussion will appeal to anyone who is interested in learning how print and electronic media manipulate both individuals and society as a whole. The extensive research will appeal to media, communications, journalism, and cultural studies scholars alike.
APA, Harvard, Vancouver, ISO, and other styles
9

Caselli, Tommaso, Eduard Hovy, Martha Palmer, and Piek Vossen, eds. Computational Analysis of Storylines. Cambridge University Press, 2021. http://dx.doi.org/10.1017/9781108854221.

Full text
Abstract:
Event structures are central in Linguistics and Artificial Intelligence research: people can easily refer to changes in the world, identify their participants, distinguish relevant information, and have expectations of what can happen next. Part of this process is based on mechanisms similar to narratives, which are at the heart of information sharing. But it remains difficult to automatically detect events or automatically construct stories from such event representations. This book explores how to handle today's massive news streams and provides multidimensional, multimodal, and distributed approaches, like automated deep learning, to capture events and narrative structures involved in a 'story'. This overview of the current state-of-the-art on event extraction, temporal and causal relations, and storyline extraction aims to establish a new multidisciplinary research community with a common terminology and research agenda. Graduate students and researchers in natural language processing, computational linguistics, and media studies will benefit from this book.
APA, Harvard, Vancouver, ISO, and other styles
10

Haney, Craig, and Shirin Bakhshay. Contexts of Ill-Treatment. Edited by Metin Başoğlu. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199374625.003.0006.

Full text
Abstract:
In contrast to most international definitions of cruel, inhuman, or degrading treatment (CIDT), and of torture per se, which focus primarily on individual acts or discrete forms of ill-treatment that are suffered at the hands of another (typically, a representative of the state), this chapter applies Başoğlu’s “learning theory model of torture” to discuss the potential relationships of certain “contexts of ill-treatment”—especially, harsh conditions of prison confinement and other forms of involuntary detention—to CIDT and torture per se. It reviews the nature and adverse psychological effects of confinement and detention, including very severe conditions of the sort that exist in a number of international sites and are pervaded by unpredictable and uncontrollable traumas and stressors. This chapter also examines whether and how certain of these contexts of captivity may facilitate abuse, interact with and exacerbate other forms of ill-treatment and, at the extremes, themselves constitute CIDT and torture.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "States representation learning"

1

Balagopalan, Sarada. "Children’s Participation in Their Right to Education: Learning from the Delhi High Court Cases, 1998–2001." In The Politics of Children’s Rights and Representation, 81–103. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-04480-9_4.

Full text
Abstract:
With several states in the majority world having passed legislation around free and compulsory education and millions of marginal children now enrolled in schools, the question of how we frame children’s participation in their right to education assumes considerable significance. By drawing together discussions around children’s representations, participation and educational equity, this chapter critically opens up the particular dynamic that has helped produce educational equity as a continually deferrable goal. It argues that the dominant representation of first-generation learners as economically marginal children is variously, and continually, leveraged to justify their presence within unequal and deeply segregated school spaces. To help problematize this narrative of assumed victimhood, the chapter discusses a set of court cases adjudicated in the Delhi High Court between 1997 and 2001. These cases not only highlight the state’s role in perpetuating existing inequalities but also draw attention to how these dominant representations had a deleterious effect on marginal children’s school experiences. By countering a simplistic narrative around school attendance as an adequate measure of children’s learning and participation in education, these Delhi High Court cases foreground marginal children’s primary identity as learners. They thus help expose how the current fuzziness around children’s participation in schooling has helped produce schooling as a critical compensatory technology that is no longer about guaranteeing educational equity.
APA, Harvard, Vancouver, ISO, and other styles
2

Bouajjani, Ahmed, Wael-Amine Boutglay, and Peter Habermehl. "Data-driven Numerical Invariant Synthesis with Automatic Generation of Attributes." In Computer Aided Verification, 282–303. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13185-1_14.

Full text
Abstract:
We propose a data-driven algorithm for numerical invariant synthesis and verification. The algorithm is based on the ICE-DT schema for learning decision trees from samples of positive and negative states and implications corresponding to program transitions. The main issue we address is the discovery of relevant attributes to be used in the learning process of numerical invariants. We define a method for solving this problem guided by the data sample. It is based on the construction of a separator that covers positive states and excludes negative ones, consistent with the implications. The separator is constructed using an abstract domain representation of convex sets. The generalization mechanism of the decision tree learning from the constraints of the separator allows the inference of general invariants, accurate enough for proving the targeted property. We implemented our algorithm and showed its efficiency.
APA, Harvard, Vancouver, ISO, and other styles
3

Schestakov, Stefan, Paul Heinemeyer, and Elena Demidova. "Road Network Representation Learning with Vehicle Trajectories." In Advances in Knowledge Discovery and Data Mining, 57–69. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33383-5_5.

Full text
Abstract:
Spatio-temporal traffic patterns reflecting the mobility behavior of road users are essential for learning effective general-purpose road representations. Such patterns are largely neglected in state-of-the-art road representation learning, which mainly focuses on modeling road topology and static road features. Incorporating traffic patterns into road network representation learning is particularly challenging due to the complex relationship between road network structure and the mobility behavior of road users. In this paper, we present TrajRNE – a novel trajectory-based road embedding model incorporating vehicle trajectory information into road network representation learning. Our experiments on two real-world datasets demonstrate that TrajRNE outperforms state-of-the-art road representation learning baselines on various downstream tasks.
APA, Harvard, Vancouver, ISO, and other styles
4

Merckling, Astrid, Alexandre Coninx, Loic Cressot, Stephane Doncieux, and Nicolas Perrin. "State Representation Learning from Demonstration." In Machine Learning, Optimization, and Data Science, 304–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wingate, David. "Predictively Defined Representations of State." In Adaptation, Learning, and Optimization, 415–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27645-3_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Stoffl, Lucas, Andy Bonnetto, Stéphane d’Ascoli, and Alexander Mathis. "Elucidating the Hierarchical Nature of Behavior with Masked Autoencoders." In Lecture Notes in Computer Science, 106–25. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73039-9_7.

Full text
Abstract:
Natural behavior is hierarchical. Yet, there is a paucity of benchmarks addressing this aspect. Recognizing the scarcity of large-scale hierarchical behavioral benchmarks, we create a novel synthetic basketball playing benchmark (Shot7M2). Beyond synthetic data, we extend BABEL into a hierarchical action segmentation benchmark (hBABEL). Then, we develop a masked autoencoder framework (hBehaveMAE) to elucidate the hierarchical nature of motion capture data in an unsupervised fashion. We find that hBehaveMAE learns interpretable latents on Shot7M2 and hBABEL, where lower encoder levels show a superior ability to represent fine-grained movements, while higher encoder levels capture complex actions and activities. Additionally, we evaluate hBehaveMAE on MABe22, a representation learning benchmark with short and long-term behavioral states. hBehaveMAE achieves state-of-the-art performance without domain-specific feature extraction. Together, these components synergistically contribute towards unveiling the hierarchical organization of natural behavior. Models and benchmarks are available at https://github.com/amathislab/BehaveMAE.
APA, Harvard, Vancouver, ISO, and other styles
7

Steccanella, Lorenzo, and Anders Jonsson. "State Representation Learning for Goal-Conditioned Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases, 84–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sychev, Oleg. "Visualizing Program State as a Clustered Graph for Learning Programming." In Diagrammatic Representation and Inference, 404–7. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Howard, Eric, Iftekher S. Chowdhury, and Ian Nagle. "Matrix Product State Representations for Machine Learning." In Artificial Intelligence in Intelligent Systems, 455–68. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77445-5_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ding, Ning, Weize Chen, Zhengyan Zhang, Shengding Hu, Ganqu Cui, Yuan Yao, Yujia Qin, et al. "Ten Key Problems of Pre-trained Models: An Outlook of Representation Learning." In Representation Learning for Natural Language Processing, 491–521. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_14.

Full text
Abstract:
The aforementioned representation learning methods have shown their effectiveness in various NLP scenarios and tasks. Large-scale pre-trained language models (i.e., big models) are the state of the art of representation learning for NLP and beyond. With the rapid growth of data scale and the development of computation devices, big models bring us to a new era of AI and NLP. Standing on the new giants of big models, there are many new challenges and opportunities for representation learning. In the last chapter, we will provide a 2023 outlook for the future directions of representation learning techniques for NLP by summarizing ten key open problems for pre-trained models.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "States representation learning"

1

Drexler, Dominik, Simon Ståhlberg, Blai Bonet, and Hector Geffner. "Symmetries and Expressive Requirements for Learning General Policies." In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2024}, 845–55. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/79.

Full text
Abstract:
State symmetries play an important role in planning and generalized planning. In the first case, state symmetries can be used to reduce the size of the search; in the second, to reduce the size of the training set. In the case of general planning, however, it is also critical to distinguish non-symmetric states, i.e., states that represent non-isomorphic relational structures. However, while the language of first-order logic distinguishes non-symmetric states, the languages and architectures used to represent and learn general policies do not. In particular, recent approaches for learning general policies use state features derived from description logics or learned via graph neural networks (GNNs) that are known to be limited by the expressive power of C2, first-order logic with two variables and counting. In this work, we address the problem of detecting symmetries in planning and generalized planning and use the results to assess the expressive requirements for learning general policies over various planning domains. For this, we map planning states to plain graphs, run off-the-shelf algorithms to determine whether two states are isomorphic with respect to the goal, and run coloring algorithms to determine if C2 features computed logically or via GNNs distinguish non-isomorphic states. Symmetry detection results in more effective learning, while the failure to detect non-symmetries prevents general policies from being learned at all in certain domains.
APA, Harvard, Vancouver, ISO, and other styles
2

Nikolich, Aleksandr, Konstantin Korolev, Sergei Bratchikov, Igor Kiselev, and Artem Shelmanov. "Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for Russian." In Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024), 189–99. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.mrl-1.15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Ziyi, Xiangtao Hu, Yongle Zhang, and Fujie Zhou. "Task-Oriented Reinforcement Learning with Interest State Representation." In 2024 International Conference on Advanced Robotics and Mechatronics (ICARM), 721–28. IEEE, 2024. http://dx.doi.org/10.1109/icarm62033.2024.10715850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Balyo, Tomáš, Martin Suda, Lukáš Chrpa, Dominik Šafránek, Stephan Gocht, Filip Dvořák, Roman Barták, and G. Michael Youngblood. "Planning Domain Model Acquisition from State Traces without Action Parameters." In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2024}, 812–22. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/76.

Full text
Abstract:
Existing planning action domain model acquisition approaches consider different types of state traces from which they learn. The differences in state traces refer to the level of observability of state changes (from full to none) and whether the observations have some noise (the state changes might be inaccurately logged). However, to the best of our knowledge, all the existing approaches consider state traces in which each state change corresponds to an action specified by its name and all its parameters (all objects that are relevant to the action). Furthermore, the names and types of all the parameters of the actions to be learned are given. These assumptions are too strong. In this paper, we propose a method that learns action schemata from state traces with fully observable state changes but without the parameters of actions responsible for the state changes (only action names are part of the state traces). Although we can easily deduce the number (and names) of the actions that will be in the learned domain model, we still need to deduce the number and types of the parameters of each action alongside its precondition and effects. We show that this task is at least as hard as graph isomorphism. However, our experimental evaluation on a large collection of IPC benchmarks shows that our approach is still practical, as the number of required parameters is usually small. Compared to the state-of-the-art learning tools SAM and Extended SAM, our new algorithm can provide better results in terms of learning action models more similar to reference models, even though it uses less information and has fewer restrictions on the input traces.
APA, Harvard, Vancouver, ISO, and other styles
5

Rodriguez, Ivan D., Blai Bonet, Javier Romero, and Hector Geffner. "Learning First-Order Representations for Planning from Black Box States: New Results." In 18th International Conference on Principles of Knowledge Representation and Reasoning {KR-2021}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/kr.2021/51.

Full text
Abstract:
Recently Bonet and Geffner have shown that first-order representations for planning domains can be learned from the structure of the state space without any prior knowledge about the action schemas or domain predicates. For this, the learning problem is formulated as the search for a simplest first-order domain description D that along with information about instances I_i (number of objects and initial state) determine state space graphs G(P_i) that match the observed state graphs G_i where P_i = (D, I_i). The search is cast and solved approximately by means of a SAT solver that is called over a large family of propositional theories that differ just in the parameters encoding the possible number of action schemas and domain predicates, their arities, and the number of objects. In this work, we push the limits of these learners by moving to an answer set programming (ASP) encoding using the CLINGO system. The new encodings are more transparent and concise, extending the range of possible models while facilitating their exploration. We show that the domains introduced by Bonet and Geffner can be solved more efficiently in the new approach, often optimally, and furthermore, that the approach can be easily extended to handle partial information about the state graphs as well as noise that prevents some states from being distinguished.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Chao, Yujing Hu, Shangdong Yang, Tangjie Lv, Changjie Fan, Wenbin Li, Chongjie Zhang, and Yang Gao. "STAR: Spatio-Temporal State Compression for Multi-Agent Tasks with Rich Observations." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/14.

Full text
Abstract:
This paper focuses on the problem of learning compressed state representations for multi-agent tasks. Under the assumption of rich observation, we pinpoint that the state representations should be compressed both spatially and temporally to enable efficient prioritization of task-relevant features, while existing works typically fail. To overcome this limitation, we propose a novel method named Spatio-Temporal stAte compRession (STAR) that explicitly defines both spatial and temporal compression operations on the learned state representations to encode per-agent task-relevant features. Specifically, we first formalize this problem by introducing Task Informed Partially Observable Stochastic Game (TI-POSG). Then, we identify the spatial representation compression in it as encoding the latent states from the joint observations of all agents, and achieve this by learning representations that approximate the latent states based on the information theoretical principle. After that, we further extract the task-relevant features of each agent from these representations by aligning them based on their reward similarities, which is regarded as the temporal representation compression. Structurally, we implement these two compressions by learning a set of agent-specific decoding functions and incorporate them into a critic shared by agents for scalable learning. We evaluate our method by developing decentralized policies on 12 maps of the StarCraft Multi-Agent Challenge benchmark, and the superior performance demonstrates its effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Zhengwei, Zhenyang Lin, Yurou Chen, and Zhiyong Liu. "Efficient Offline Meta-Reinforcement Learning via Robust Task Representations and Adaptive Policy Generation." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/500.

Full text
Abstract:
Zero-shot adaptation is crucial for agents facing new tasks. Offline Meta-Reinforcement Learning (OMRL), utilizing offline multi-task datasets to train policies, offers a way to attain this ability. Although most OMRL methods construct task representations via contrastive learning and merge them with states for policy input, these methods may have inherent problems. Specifically, integrating task representations with states for policy input limits learning efficiency, due to failing to leverage the similarities among tasks. Moreover, uniformly sampling an equal number of negative samples from different tasks in contrastive learning can hinder differentiation of more similar tasks, potentially diminishing task representation robustness. In this paper, we introduce an OMRL algorithm to tackle the aforementioned issues. We design a network structure for efficient learning by leveraging task similarity. It features shared lower layers for common feature extraction with a hypernetworks-driven upper layer, customized to process features per task's attributes. Furthermore, to achieve robust task representations for generating task-specific control policies, we utilize contrastive learning and introduce a novel method to construct negative sample pairs based on task similarity. Experimental results show that our method notably boosts learning efficiency and zero-shot adaptation in new tasks, surpassing previous methods across multiple challenging domains.
APA, Harvard, Vancouver, ISO, and other styles
8

Ståhlberg, Simon, Blai Bonet, and Hector Geffner. "Learning General Policies with Policy Gradient Methods." In 20th International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/kr.2023/63.

Full text
Abstract:
While reinforcement learning methods have delivered remarkable results in a number of settings, generalization, i.e., the ability to produce policies that generalize in a reliable and systematic way, has remained a challenge. The problem of generalization has been addressed formally in classical planning where provably correct policies that generalize over all instances of a given domain have been learned using combinatorial methods. The aim of this work is to bring these two research threads together to illuminate the conditions under which (deep) reinforcement learning approaches, and in particular, policy optimization methods, can be used to learn policies that generalize like combinatorial methods do. We draw on lessons learned from previous combinatorial and deep learning approaches, and extend them in a convenient way. From the former, we model policies as state transition classifiers, as (ground) actions are not general and change from instance to instance. From the latter, we use graph neural networks (GNNs) adapted to deal with relational structures for representing value functions over planning states, and in our case, policies. With these ingredients in place, we find that actor-critic methods can be used to learn policies that generalize almost as well as those obtained using combinatorial approaches while avoiding the scalability bottleneck and the use of feature pools. Moreover, the limitations of the DRL methods on the benchmarks considered have little to do with deep learning or reinforcement learning algorithms, and result from the well-understood expressive limitations of GNNs, and the tradeoff between optimality and generalization (general policies cannot be optimal in some domains). Both of these limitations are addressed without changing the basic DRL methods by adding derived predicates and an alternative cost structure to optimize.
APA, Harvard, Vancouver, ISO, and other styles
9

De Giacomo, Giuseppe, Marco Favorito, Luca Iocchi, Fabio Patrizi, and Alessandro Ronca. "Temporal Logic Monitoring Rewards via Transducers." In 17th International Conference on Principles of Knowledge Representation and Reasoning {KR-2020}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/kr.2020/89.

Full text
Abstract:
In Markov Decision Processes (MDPs), rewards are assigned according to a function of the last state and action. This is often limiting, when the considered domain is not naturally Markovian, but becomes so after careful engineering of an extended state space. The extended states record information from the past that is sufficient to assign rewards by looking just at the last state and action. Non-Markovian Reward Decision Processes (NMRDPs) extend MDPs by allowing for non-Markovian rewards, which depend on the history of states and actions. Non-Markovian rewards can be specified in temporal logics on finite traces such as LTLf/LDLf, with the great advantage of a higher abstraction and succinctness; they can then be automatically compiled into an MDP with an extended state space. We contribute to the techniques to handle temporal rewards and to the solutions to engineer them. We first present an approach to compiling temporal rewards which merges the formula automata into a single transducer, sometimes saving up to an exponential number of states. We then define monitoring rewards, which add a further level of abstraction to temporal rewards by adopting the four-valued conditions of runtime monitoring; we argue that our compilation technique allows for an efficient handling of monitoring rewards. Finally, we discuss application to reinforcement learning.
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Huihui, and Lei Wei. "General purpose representation and association machine: Part 4: Improve learning for three-states and multi-tasks." In IEEE SOUTHEASTCON 2013. IEEE, 2013. http://dx.doi.org/10.1109/secon.2013.6567485.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "States representation learning"

1

Singh, Abhijeet, Mauricio Romero, and Karthik Muralidharan. COVID-19 Learning Loss and Recovery: Panel Data Evidence from India. Research on Improving Systems of Education (RISE), September 2022. http://dx.doi.org/10.35489/bsg-risewp_2022/112.

Full text
Abstract:
We use a near-representative household panel survey of ∼19,000 primary-school-aged children in rural Tamil Nadu to study the extent of ‘learning loss’ after COVID-19 school closures, the pace of recovery in the months after schools reopened, and the role of a flagship compensatory intervention introduced by the state government. Students tested in December 2021, after 18 months of school closures, displayed severe deficits in learning of about 0.7 standard deviations (σ) in math and 0.34σ in language compared to identically-aged students in the same villages in 2019. Using multiple rounds of in-person testing, we find that two-thirds of this deficit was made up in the 6 months after school reopening. Using value-added models, we attribute ∼24% of the cohort-level recovery to a government-run after-school remediation program which improved test scores for attendees by 0.17σ in math and 0.09σ in Tamil after 3-4 months. Further, while learning loss was regressive, the recovery was progressive, likely reflecting (in part) the greater take up of the remediation program by more socioeconomically disadvantaged students. These positive results from a state-wide program delivered at scale by the government may provide a useful template for both recovery from COVID-19 learning losses, and bridging learning gaps more generally in low-and-middle-income countries.
APA, Harvard, Vancouver, ISO, and other styles
2

Lalisse, Matthias. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach. Institute for New Economic Thinking Working Paper Series, February 2022. http://dx.doi.org/10.36687/inetwp178.

Full text
Abstract:
How much does money drive legislative outcomes in the United States? In this article, we use aggregated campaign finance data as well as a Transformer-based text embedding model to predict roll call votes for legislation in the US Congress with more than 90% accuracy. In a series of model comparisons in which the input feature sets are varied, we investigate the extent to which campaign finance is predictive of voting behavior in comparison with variables like partisan affiliation. We find that the financial interests backing a legislator’s campaigns are independently predictive in both chambers of Congress, but also uncover a sizable asymmetry between the Senate and the House of Representatives. These findings are cross-referenced with a Representational Similarity Analysis (RSA) linking legislators’ financial and voting records, in which we show that “legislators who vote together get paid together”, again discovering an asymmetry between the House and the Senate in the additional predictive power of campaign finance once party is accounted for. We suggest an explanation of these facts in terms of Thomas Ferguson’s Investment Theory of Party Competition: due to a number of structural differences between the House and Senate, but chiefly the lower amortized cost of obtaining individuated influence with Senators, political investors prefer operating on the House using the party as a proxy.
APA, Harvard, Vancouver, ISO, and other styles
3

Babu M.G., Sarath, Debjani Ghosh, Jaideep Gupte, Md Asif Raza, Eric Kasper, and Priyanka Mehra. Kerala’s Grass-roots-led Pandemic Response: Deciphering the Strength of Decentralisation. Institute of Development Studies (IDS), June 2021. http://dx.doi.org/10.19088/ids.2021.049.

Full text
Abstract:
This paper presents an analysis of the role of decentralised institutions to understand the learning and challenges of the grass-roots-led pandemic response of Kerala. The study is based on interviews with experts and frontline workers to ensure the representation of all stakeholders dealing with the outbreak, from the state level to the household level, and a review of published government orders, health guidelines, and news articles. The outcome of the study shows that along with the decentralised system of governance, the strong grass-roots-level network of Accredited Social Health Activists (ASHA) workers, volunteer groups, and Kudumbashree members played a pivotal role in pandemic management in the state. The efficient functioning of local bodies in the state, experience gained from successive disasters, and the Nipah outbreak naturally aided grass-roots-level actions. The lessons others can draw from Kerala are the importance of public expenditure on health, investment for building social capital, and developing the local self-delivery system.
APA, Harvard, Vancouver, ISO, and other styles
4

Tarasenko, Rostyslav O., Svitlana M. Amelina, Yuliya M. Kazhan, and Olga V. Bondarenko. The use of AR elements in the study of foreign languages at the university. CEUR Workshop Proceedings, November 2020. http://dx.doi.org/10.31812/123456789/4421.

Full text
Abstract:
The article analyzes the impact of using AR technology in the study of a foreign language by university students. It is pointed out that AR technology can be a good tool for learning a foreign language. The use of elements of AR in the course of studying a foreign language, in particular in the form of virtual excursions, is proposed. Advantages of using AR technology in the study of the German language are identified, namely: the possibility of involving different channels of information perception, the integrity of the representation of the studied object, faster and better memorization of new vocabulary, and the development of communicative foreign language skills. The ease and accessibility of using QR codes to obtain information about the object of study from open Internet sources is shown. The results of a survey of students after virtual tours are presented. A reorientation of methodological support for the study of a foreign language at universities is proposed. Attention is drawn to the use of AR elements to support students with different learning styles (audio, visual, kinesthetic).
APA, Harvard, Vancouver, ISO, and other styles
6

Goodwin, Sarah, Yigal Attali, Geoffrey LaFlair, Yena Park, Andrew Runge, Alina von Davier, and Kevin Yancey. Duolingo English Test - Writing Construct. Duolingo, March 2023. http://dx.doi.org/10.46999/arxn5612.

Full text
Abstract:
Assessments, especially those used for high-stakes decision making, draw on evidence-based frameworks. Such frameworks inform every aspect of the testing process, from development to results reporting. The frameworks that language assessment professionals use draw on theory in language learning, assessment design, and measurement and psychometrics in order to provide underpinnings for the evaluation of language skills including speaking, writing, reading, and listening. This paper focuses on the construct, or underlying trait, of writing ability. The paper conceptualizes the writing construct for the Duolingo English Test, a digital-first assessment. “Digital-first” includes technology such as artificial intelligence (AI) and machine learning, with human expert involvement, throughout all item development, test scoring, and security processes. This work is situated in the Burstein et al. (2022) theoretical ecosystem for digital-first assessment, the first representation of its kind that incorporates design, validation/measurement, and security all situated directly in assessment practices that are digital first. The paper first provides background information about the Duolingo English Test and then defines the writing construct, including the purposes for writing. It also introduces principles underpinning the design of writing items and illustrates sample items that assess the writing construct.
APA, Harvard, Vancouver, ISO, and other styles
7

Ferdaus, Md Meftahul, Mahdi Abdelguerfi, Kendall Niles, Ken Pathak, and Joe Tom. Widened attention-enhanced atrous convolutional network for efficient embedded vision applications under resource constraints. Engineer Research and Development Center (U.S.), November 2024. http://dx.doi.org/10.21079/11681/49459.

Full text
Abstract:
Onboard image analysis enables real-time autonomous capabilities for unmanned platforms including aerial, ground, and aquatic drones. Performing classification on embedded systems, rather than transmitting data, allows rapid perception and decision-making critical for time-sensitive applications such as search and rescue, hazardous environment exploration, and military operations. To fully capitalize on these systems’ potential, specialized deep learning solutions are needed that balance accuracy and computational efficiency for time-sensitive inference. This article introduces the widened attention-enhanced atrous convolution-based efficient network (WACEfNet), a new convolutional neural network designed specifically for real-time visual classification challenges using resource-constrained embedded devices. WACEfNet builds on EfficientNet and integrates innovative width-wise feature processing, atrous convolutions, and attention modules to improve representational power without excessive overhead. Extensive benchmarking confirms state-of-the-art performance from WACEfNet for aerial imaging applications while remaining suitable for embedded deployment. The improvements in accuracy and speed demonstrate the potential of customized deep learning advancements to unlock new capabilities for unmanned aerial vehicles and related embedded systems with tight size, weight, and power constraints. This research offers an optimized framework, combining widened residual learning and attention mechanisms, to meet the unique demands of high-fidelity real-time analytics across a variety of embedded perception paradigms.
APA, Harvard, Vancouver, ISO, and other styles
8

Iatsyshyn, Anna V., Valeriia O. Kovach, Yevhen O. Romanenko, Iryna I. Deinega, Andrii V. Iatsyshyn, Oleksandr O. Popov, Yulii G. Kutsan, Volodymyr O. Artemchuk, Oleksandr Yu Burov, and Svitlana H. Lytvynova. Application of augmented reality technologies for preparation of specialists of new technological era. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3749.

Full text
Abstract:
Augmented reality is one of the most modern information visualization technologies. Number of scientific studies on different aspects of augmented reality technology development and application is analyzed in the research. Practical examples of augmented reality technologies for various industries are described. Very often augmented reality technologies are used for: social interaction (communication, entertainment and games); education; tourism; areas of purchase/sale and presentation. There are various scientific and mass events in Ukraine, as well as specialized training to promote augmented reality technologies. There are following results of the research: main benefits that educational institutions would receive from introduction of augmented reality technology are highlighted; it is determined that application of augmented reality technologies in education would contribute to these technologies development and therefore need increase for specialists in the augmented reality; growth of students' professional level due to application of augmented reality technologies is proved; adaptation features of augmented reality technologies in learning disciplines for students of different educational institutions are outlined; it is advisable to apply integrated approach in the process of preparing future professionals of new technological era; application of augmented reality technologies increases motivation to learn, increases level of information assimilation due to the variety and interactivity of its visual representation. Main difficulties of application of augmented reality technologies are financial, professional and methodical. Following factors are necessary for introduction of augmented reality technologies: state support for such projects and state procurement for development of augmented reality technologies; conduction of scientific research and experimental confirmation of effectiveness and pedagogical expediency of augmented reality technologies application for training of specialists of different specialties; systematic conduction of number of national and international events on dissemination and application of augmented reality technology. It is confirmed that application of augmented reality technologies is appropriate for training of future specialists of new technological era.
APA, Harvard, Vancouver, ISO, and other styles
9

State Legislator Representation: A Data-Driven Learning Guide. Ann Arbor, MI: Inter-university Consortium for Political and Social Research, April 2009. http://dx.doi.org/10.3886/stateleg.

Full text
APA, Harvard, Vancouver, ISO, and other styles