Journal articles on the topic "Representation learning (artificial intelligence)"

To view other types of publications on this topic, follow the link: Representation learning (artificial intelligence).

Cite sources in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Browse the top 50 journal articles for your research on the topic "Representation learning (artificial intelligence)".

Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its online abstract, if these are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Hamilton, William L. "Graph Representation Learning." Synthesis Lectures on Artificial Intelligence and Machine Learning 14, no. 3 (September 15, 2020): 1–159. http://dx.doi.org/10.2200/s01045ed1v01y202009aim046.

2

Konidaris, George, Leslie Pack Kaelbling, and Tomas Lozano-Perez. "From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning." Journal of Artificial Intelligence Research 61 (January 31, 2018): 215–89. http://dx.doi.org/10.1613/jair.5575.

Abstract:
We consider the problem of constructing abstract representations for planning in high-dimensional, continuous environments. We assume an agent equipped with a collection of high-level actions, and construct representations provably capable of evaluating plans composed of sequences of those actions. We first consider the deterministic planning case, and show that the relevant computation involves set operations performed over sets of states. We define the specific collection of sets that is necessary and sufficient for planning, and use them to construct a grounded abstract symbolic representation that is provably suitable for deterministic planning. The resulting representation can be expressed in PDDL, a canonical high-level planning domain language; we construct such a representation for the Playroom domain and solve it in milliseconds using an off-the-shelf planner. We then consider probabilistic planning, which we show requires generalizing from sets of states to distributions over states. We identify the specific distributions required for planning, and use them to construct a grounded abstract symbolic representation that correctly estimates the expected reward and probability of success of any plan. In addition, we show that learning the relevant probability distributions corresponds to specific instances of probabilistic density estimation and probabilistic classification. We construct an agent that autonomously learns the correct abstract representation of a computer game domain, and rapidly solves it. Finally, we apply these techniques to create a physical robot system that autonomously learns its own symbolic representation of a mobile manipulation task directly from sensorimotor data---point clouds, map locations, and joint angles---and then plans using that representation. 
Together, these results establish a principled link between high-level actions and abstract representations, a concrete theoretical foundation for constructing abstract representations with provable properties, and a practical mechanism for autonomously learning abstract high-level representations.
3

Rezayi, Saed. "Learning Better Representations Using Auxiliary Knowledge." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16133–34. http://dx.doi.org/10.1609/aaai.v37i13.26927.

Abstract:
Representation Learning is the core of Machine Learning and Artificial Intelligence, as it summarizes input data points into low-dimensional vectors. These low-dimensional vectors should be accurate portrayals of the input data, so it is crucial to find the most effective and robust representation possible for a given input, since the performance of the ML task depends on the resulting representations. In this summary, we discuss an approach to augmenting representation learning that relies on external knowledge. We briefly describe the shortcomings of existing techniques and describe how an auxiliary knowledge source can result in improved representations.
4

Frommberger, Lutz. "Learning to Behave in Space: A Qualitative Spatial Representation for Robot Navigation with Reinforcement Learning." International Journal on Artificial Intelligence Tools 17, no. 03 (June 2008): 465–82. http://dx.doi.org/10.1142/s021821300800400x.

Abstract:
The representation of the surrounding world plays an important role in robot navigation, especially when reinforcement learning is applied. This work uses a qualitative abstraction mechanism to create a representation of space consisting of the circular order of detected landmarks and the relative position of walls towards the agent's moving direction. This representation not only empowers the agent to learn a goal-directed navigation strategy faster than with metrical representations, but also facilitates reusing structural knowledge of the world at different locations within the same environment. Acquired policies are also applicable in scenarios with different metrics and corridor angles. Furthermore, the gained structural knowledge can be separated out, leading to a generally sensible navigation behavior that can be transferred to environments lacking landmark information and/or totally unknown environments.
5

Haghir Chehreghani, Morteza, and Mostafa Haghir Chehreghani. "Learning representations from dendrograms." Machine Learning 109, no. 9-10 (August 16, 2020): 1779–802. http://dx.doi.org/10.1007/s10994-020-05895-3.

Abstract:
We propose unsupervised representation learning and feature extraction from dendrograms. The commonly used Minimax distance measures correspond to building a dendrogram with single linkage criterion, with defining specific forms of a level function and a distance function over that. Therefore, we extend this method to arbitrary dendrograms. We develop a generalized framework wherein different distance measures and representations can be inferred from different types of dendrograms, level functions and distance functions. Via an appropriate embedding, we compute a vector-based representation of the inferred distances, in order to enable many numerical machine learning algorithms to employ such distances. Then, to address the model selection problem, we study the aggregation of different dendrogram-based distances respectively in solution space and in representation space in the spirit of deep representations. In the first approach, for example for the clustering problem, we build a graph with positive and negative edge weights according to the consistency of the clustering labels of different objects among different solutions, in the context of ensemble methods. Then, we use an efficient variant of correlation clustering to produce the final clusters. In the second approach, we investigate the combination of different distances and features sequentially in the spirit of multi-layered architectures to obtain the final features. Finally, we demonstrate the effectiveness of our approach via several numerical studies.
6

Saitta, Lorenza. "Representation change in machine learning." AI Communications 9, no. 1 (1996): 14–20. http://dx.doi.org/10.3233/aic-1996-9102.

7

Rives, Alexander, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, et al. "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences." Proceedings of the National Academy of Sciences 118, no. 15 (April 5, 2021): e2016239118. http://dx.doi.org/10.1073/pnas.2016239118.

Abstract:
In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised learning has led to major advances in representation learning and statistical generation. In the life sciences, the anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity. The resulting model contains information about biological properties in its representations. The representations are learned from sequence data alone. The learned representation space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and can be identified by linear projections. Representation learning produces features that generalize across a range of applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and improving state-of-the-art features for long-range contact prediction.
8

Kang, Zhao, Xiao Lu, Jian Liang, Kun Bai, and Zenglin Xu. "Relation-Guided Representation Learning." Neural Networks 131 (November 2020): 93–102. http://dx.doi.org/10.1016/j.neunet.2020.07.014.

9

Prorok, Máté. "Applications of artificial intelligence systems." Deliberationes 15, Különszám (2022): 76–88. http://dx.doi.org/10.54230/delib.2022.k.sz.76.

Abstract:
Nowadays, artificial intelligence is a rapidly developing technology that encompasses the development of intelligent algorithms and machines capable of learning. It is therefore relevant and timely to examine the topic. These artificial intelligence algorithms and machines have the ability to perform tasks that traditionally relied on human intelligence. This study provides an in-depth exploration of artificial intelligence systems and their key components. It examines various aspects of artificial intelligence systems, including natural language processing, machine learning, detection and pattern recognition, knowledge representation, and other forms of artificial intelligence systems. Natural language processing enables machines to understand and generate human language, while machine learning empowers systems to learn from data and improve their performance over time. Detection and pattern recognition allow artificial intelligence systems to interpret and understand complex sensory inputs, while knowledge representation enables the storage and utilization of information. Furthermore, other forms of artificial intelligence systems will also be discussed. This study sheds light on the fundamental elements of artificial intelligence systems, paving the way for their practical applications and advancements.
10

Mazoure, Bogdan, Thang Doan, Tianyu Li, Vladimir Makarenkov, Joelle Pineau, Doina Precup, and Guillaume Rabusseau. "Low-Rank Representation of Reinforcement Learning Policies." Journal of Artificial Intelligence Research 75 (October 27, 2022): 597–636. http://dx.doi.org/10.1613/jair.1.13854.

Abstract:
We propose a general framework for policy representation for reinforcement learning tasks. This framework involves finding a low-dimensional embedding of the policy on a reproducing kernel Hilbert space (RKHS). The usage of RKHS based methods allows us to derive strong theoretical guarantees on the expected return of the reconstructed policy. Such guarantees are typically lacking in black-box models, but are very desirable in tasks requiring stability and convergence guarantees. We conduct several experiments on classic RL domains. The results confirm that the policies can be robustly represented in a low-dimensional space while the embedded policy incurs almost no decrease in returns.
11

Lawler, Robert W. "Getting Intelligence Into the Minds of People." LEARNing Landscapes 6, no. 2 (June 2, 2013): 223–47. http://dx.doi.org/10.36510/learnland.v6i2.614.

Abstract:
In conversation, Seymour Papert once asked me, "What’s the point of studying Artificial Intelligence if not to get intelligence into the minds of people?" His question inspires my juxtaposition of explorations of Natural Learning and Constructed Personal Knowledge. Since "you can’t learn about learning without learning about learning something,"1 the analyses will proceed with two examples. The first, focused on strategy learning at tic-tac-toe, concludes that learning depends on specific relationships among the elements of the context in interaction with processes of incremental cognitive change. The second analysis, focused on mastering a solution for Rubik’s Cube, argues the importance of reformulation of representations as a strategy for learning in more complex situations, and that the integration of multiple modalities of representation can be a key to "getting the intelligence into the minds of people."
12

Koohzadi, Maryam, Nasrollah Moghadam Charkari, and Foad Ghaderi. "Unsupervised representation learning based on the deep multi-view ensemble learning." Applied Intelligence 50, no. 2 (July 31, 2019): 562–81. http://dx.doi.org/10.1007/s10489-019-01526-0.

13

Hanson, Stephen José, and David J. Burr. "What connectionist models learn: Learning and representation in connectionist networks." Behavioral and Brain Sciences 13, no. 3 (September 1990): 471–89. http://dx.doi.org/10.1017/s0140525x00079760.

Abstract:
Connectionist models provide a promising alternative to the traditional computational approach that has for several decades dominated cognitive science and artificial intelligence, although the nature of connectionist models and their relation to symbol processing remains controversial. Connectionist models can be characterized by three general computational features: distinct layers of interconnected units, recursive rules for updating the strengths of the connections during learning, and “simple” homogeneous computing elements. Using just these three features one can construct surprisingly elegant and powerful models of memory, perception, motor control, categorization, and reasoning. What makes the connectionist approach unique is not its variety of representational possibilities (including “distributed representations”) or its departure from explicit rule-based models, or even its preoccupation with the brain metaphor. Rather, it is that connectionist models can be used to explore systematically the complex interaction between learning and representation, as we try to demonstrate through the analysis of several large networks.
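The three computational features named in the abstract can be made concrete in a few lines. The sketch below is illustrative only (the task, layer sizes, and step size are assumptions, not taken from the article): two layers of interconnected sigmoid units whose connection strengths are updated by a recursive gradient-descent rule on XOR, the classic mapping a single layer cannot represent.

```python
# Minimal connectionist sketch: layered simple units plus a recursive
# weight-update rule (plain gradient descent with backpropagation).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])        # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input  -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output layer

def loss():
    h = sigmoid(X @ W1 + b1)
    return float(((sigmoid(h @ W2 + b2) - y) ** 2).mean())

initial = loss()
for _ in range(5000):                             # recursive update rule
    h = sigmoid(X @ W1 + b1)                      # learned hidden representation
    out = sigmoid(h @ W2 + b2)
    d2 = (out - y) * out * (1 - out)              # output-layer error signal
    d1 = (d2 @ W2.T) * h * (1 - h)                # backpropagated to hidden layer
    W2 -= h.T @ d2; b2 -= d2.sum(axis=0)
    W1 -= X.T @ d1; b1 -= d1.sum(axis=0)
final = loss()
```

After training, the mean squared error has decreased from `initial`, and the hidden activations `h` are the network's learned internal representation, the kind of object the authors' analysis of "what connectionist models learn" examines.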
14

Zheng, Tingyi, Huibin Ge, Jiayi Li, and Li Wang. "Unsupervised multi-view representation learning with proximity guided representation and generalized canonical correlation analysis." Applied Intelligence 51, no. 1 (August 10, 2020): 248–64. http://dx.doi.org/10.1007/s10489-020-01821-1.

15

Li, Bentian, and Dechang Pi. "Network representation learning: a systematic literature review." Neural Computing and Applications 32, no. 21 (April 20, 2020): 16647–79. http://dx.doi.org/10.1007/s00521-020-04908-5.

16

Huang, Ming, Fuzhen Zhuang, Xiao Zhang, Xiang Ao, Zhengyu Niu, Min-Ling Zhang, and Qing He. "Supervised representation learning for multi-label classification." Machine Learning 108, no. 5 (February 13, 2019): 747–63. http://dx.doi.org/10.1007/s10994-019-05783-5.

17

Haghir Chehreghani, Morteza. "Unsupervised representation learning with Minimax distance measures." Machine Learning 109, no. 11 (July 28, 2020): 2063–97. http://dx.doi.org/10.1007/s10994-020-05886-4.

Abstract:
We investigate the use of Minimax distances to extract, in a nonparametric way, the features that capture the unknown underlying patterns and structures in the data. We develop a general-purpose and computationally efficient framework to employ Minimax distances with many machine learning methods that perform on numerical data. We study both computing the pairwise Minimax distances for all pairs of objects and computing the Minimax distances of all the objects to/from a fixed (test) object. We first efficiently compute the pairwise Minimax distances between the objects, using the equivalence of Minimax distances over a graph and over a minimum spanning tree constructed on it. Then, we perform an embedding of the pairwise Minimax distances into a new vector space, such that their squared Euclidean distances in the new space equal the pairwise Minimax distances in the original space. We also study the case of having multiple pairwise Minimax matrices, instead of a single one. Thereby, we propose an embedding via first summing up the centered matrices and then performing an eigenvalue decomposition to obtain the relevant features. In the following, we study computing Minimax distances from a fixed (test) object, which can be used for instance in K-nearest neighbor search. Similar to the case of all-pairs Minimax distances, we develop an efficient and general-purpose algorithm that is applicable with any arbitrary base distance measure. Moreover, we investigate in detail the edges selected by the Minimax distances and thereby explore the ability of Minimax distances in detecting outlier objects. Finally, for each setting, we perform several experiments to demonstrate the effectiveness of our framework.
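The MST equivalence this abstract relies on can be sketched directly: the Minimax distance between two objects (the smallest, over all connecting paths, of the largest edge weight on the path) equals the largest edge on the unique path joining them in a minimum spanning tree. The snippet below is a minimal illustration of that property, not the paper's framework; the function name and toy data are assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def minimax_distances(points):
    """All-pairs Minimax distances, computed on an MST of the base distances."""
    n = len(points)
    base = squareform(pdist(points))              # pairwise Euclidean distances
    mst = minimum_spanning_tree(base).toarray()
    adj = np.maximum(mst, mst.T)                  # symmetrise the tree edges
    mm = np.zeros((n, n))
    for start in range(n):                        # walk the tree from each node,
        seen, stack = {start}, [(start, 0.0)]     # tracking the largest edge
        while stack:                              # met along the (unique) path
            node, largest = stack.pop()
            for nxt in np.nonzero(adj[node])[0]:
                if nxt not in seen:
                    seen.add(nxt)
                    mm[start, nxt] = max(largest, adj[node, nxt])
                    stack.append((nxt, mm[start, nxt]))
    return mm

# Four points on a line: 0, 1, 2 form a chain; 10 sits across a large gap.
mm = minimax_distances(np.array([[0.0], [1.0], [2.0], [10.0]]))
```

Here `mm[0, 2]` is 1.0 (the largest step along the chain 0–1–2, versus a plain Euclidean distance of 2.0), while `mm[0, 3]` is 8.0, the single large gap that any path must cross.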
18

Miyamoto, Hiroyuki, Jun Morimoto, Kenji Doya, and Mitsuo Kawato. "Reinforcement learning with via-point representation." Neural Networks 17, no. 3 (April 2004): 299–305. http://dx.doi.org/10.1016/j.neunet.2003.11.004.

19

Tavanaei, Amirhossein, Timothée Masquelier, and Anthony Maida. "Representation learning using event-based STDP." Neural Networks 105 (September 2018): 294–303. http://dx.doi.org/10.1016/j.neunet.2018.05.018.

20

Jiao, Pengfei, Hongjiang Chen, Huijun Tang, Qing Bao, Long Zhang, Zhidong Zhao, and Huaming Wu. "Contrastive representation learning on dynamic networks." Neural Networks 174 (June 2024): 106240. http://dx.doi.org/10.1016/j.neunet.2024.106240.

21

Chikwendu, Ijeoma Amuche, Xiaoling Zhang, Isaac Osei Agyemang, Isaac Adjei-Mensah, Ukwuoma Chiagoziem Chima, and Chukwuebuka Joseph Ejiyi. "A Comprehensive Survey on Deep Graph Representation Learning Methods." Journal of Artificial Intelligence Research 78 (October 25, 2023): 287–356. http://dx.doi.org/10.1613/jair.1.14768.

Abstract:
There has been a lot of activity in graph representation learning in recent years. Graph representation learning aims to produce graph representation vectors to represent the structure and characteristics of huge graphs precisely. This is crucial since the effectiveness of the graph representation vectors will influence how well they perform in subsequent tasks like anomaly detection, connection prediction, and node classification. Recently, there has been an increase in the use of other deep-learning breakthroughs for data-based graph problems. Graph-based learning environments have a taxonomy of approaches, and this study reviews all their learning settings. The learning problem is theoretically and empirically explored. This study briefly introduces and summarizes the Graph Neural Architecture Search (G-NAS), outlines several Graph Neural Networks’ drawbacks, and suggests some strategies to mitigate these challenges. Lastly, the study discusses several potential future study avenues yet to be explored.
22

Jurewicz, Mateusz, and Leon Derczynski. "Set-to-Sequence Methods in Machine Learning: A Review." Journal of Artificial Intelligence Research 71 (August 12, 2021): 885–924. http://dx.doi.org/10.1613/jair.1.12839.

Abstract:
Machine learning on sets towards sequential output is an important and ubiquitous task, with applications ranging from language modelling and meta-learning to multi-agent strategy games and power grid optimization. Combining elements of representation learning and structured prediction, its two primary challenges include obtaining a meaningful, permutation invariant set representation and subsequently utilizing this representation to output a complex target permutation. This paper provides a comprehensive introduction to the field as well as an overview of important machine learning methods tackling both of these key challenges, with a detailed qualitative comparison of selected model architectures.
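The first challenge this abstract names, a permutation-invariant set representation, is commonly met by pooling per-element embeddings with a symmetric function (sum, mean, or max). The sketch below is a generic illustration of that idea under assumed shapes, not a model from the review:

```python
import numpy as np

def encode_set(elements, W):
    """Embed each element with phi(x) = tanh(x W), then sum-pool: the sum is
    symmetric in its arguments, so the result ignores element order."""
    return np.tanh(elements @ W).sum(axis=0)

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 8))                 # element dim 3 -> embedding dim 8
elements = rng.normal(size=(5, 3))          # a "set" of five 3-d elements

z = encode_set(elements, W)
z_reordered = encode_set(elements[::-1], W) # same elements, reversed order
```

`z` and `z_reordered` coincide, whereas an order-sensitive encoder (e.g. an RNN over the rows) gives no such guarantee; the second challenge, producing the target permutation, is then handled by a decoder conditioned on this representation.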
23

Tadepalli, P., and B. K. Natarajan. "A Formal Framework for Speedup Learning from Problems and Solutions." Journal of Artificial Intelligence Research 4 (June 1, 1996): 445–75. http://dx.doi.org/10.1613/jair.154.

Abstract:
Speedup learning seeks to improve the computational efficiency of problem solving with experience. In this paper, we develop a formal framework for learning efficient problem solving from random problems and their solutions. We apply this framework to two different representations of learned knowledge, namely control rules and macro-operators, and prove theorems that identify sufficient conditions for learning in each representation. Our proofs are constructive in that they are accompanied with learning algorithms. Our framework captures both empirical and explanation-based speedup learning in a unified fashion. We illustrate our framework with implementations in two domains: symbolic integration and Eight Puzzle. This work integrates many strands of experimental and theoretical work in machine learning, including empirical learning of control rules, macro-operator learning, Explanation-Based Learning (EBL), and Probably Approximately Correct (PAC) Learning.
24

Qin, Jisheng, Xiaoqin Zeng, Shengli Wu, and Yang Zou. "Context-sensitive graph representation learning." Connection Science 34, no. 1 (September 14, 2022): 2313–31. http://dx.doi.org/10.1080/09540091.2022.2115010.

25

Ashley, Kevin D., and Edwina L. Rissland. "Law, learning and representation." Artificial Intelligence 150, no. 1-2 (November 2003): 17–58. http://dx.doi.org/10.1016/s0004-3702(03)00109-7.

26

AL-Fayyadh, Hayder Rahm Dakheel, Salam Abdulabbas Ganim Ali, and Dr Basim Abood. "Modelling an Adaptive Learning System Using Artificial Intelligence." Webology 19, no. 1 (December 24, 2021): 01–18. http://dx.doi.org/10.14704/web/v19i1/web19001.

Abstract:
The goal of this paper is to use artificial intelligence to build and evaluate an adaptive learning system, where we adopt the basic approaches of spiking neural networks as well as artificial neural networks. Spiking neural networks receive increasing attention due to their advantages over traditional artificial neural networks. They have proven to be energy efficient, biologically plausible, and up to 10^5 times faster if they are simulated on analogue traditional learning systems. Artificial neural network libraries use computational graphs as a pervasive representation; spiking models, however, remain heterogeneous and difficult to train. Using the artificial intelligence deductive method, the paper posits two hypotheses that examine whether 1) there exists a common representation for both neural network paradigms for tutorial mentoring, and whether 2) spiking and non-spiking models can learn a simple recognition task for learning activities for adaptive learning. The first hypothesis is confirmed by specifying and implementing a domain-specific language that generates semantically similar spiking and non-spiking neural networks for tutorial mentoring. Through three classification experiments, the second hypothesis is shown to hold for non-spiking models, but cannot be proven for the spiking models. The paper contributes three findings: 1) a domain-specific language for modelling neural network topologies in adaptive tutorial mentoring for students, 2) a preliminary model for generalizable learning through back-propagation in spiking neural networks for learning activities for students, also represented in the results section, and 3) a method for transferring optimised non-spiking parameters to spiking neural networks, developed for the adaptive learning system. The latter contribution is promising because the vast machine learning literature can spill over to the emerging field of spiking neural networks and adaptive learning computing.
Future work includes improving the back-propagation model, exploring time-dependent models for learning, and adding support for adaptive learning systems.
27

Maher, Mary Lou, and Heng Li. "Learning design concepts using machine learning techniques." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 8, no. 2 (1994): 95–111. http://dx.doi.org/10.1017/s0890060400000706.

Abstract:
The use of machine learning techniques requires the formulation of a learning problem in a particular domain. The application of machine learning techniques in a design domain requires the consideration of the representation of the learned design knowledge, that is, a target representation, as well as the content and form of the training data, or design examples. This paper examines the use of a target representation of design concepts and the application, adaptation, or generation of machine learning techniques to generate design concepts from design examples. The examples are taken from the domain of bridge design. The primary machine learning paradigm considered is concept formation.
28

Hambadjawa, Johan Agung Pramono, and Khaerunnisa. "Development Concept of Artificial Intelligence as an Architect’s Representation: Literature Review." Arsir 8, no. 1 (March 22, 2024): 14–25. http://dx.doi.org/10.32502/arsir.v8i1.53.

Abstract:
Human Architects cannot be replaced, even with Artificial Intelligence, but Artificial Intelligence could represent an Architect, especially in problem-solving. An architect is not eternal; brilliant ideas vanish alongside the physical form of an Architect. By respecting the Architect who has left this world while simultaneously keeping the brilliant ideas alive, Artificial Intelligence could represent the Architect as a legacy that always lives on through unique problem-solving solutions according to each individual. Using methods applied in the past, such as GANs, the writers design the shape of an Artificial Intelligence using data collection methods such as manual input, then realize the Artificial Intelligence through machine learning; it will later be available through a digital application like a book. This book could interact with its reader. This research uses descriptive qualitative methods based on a literature review on Artificial Intelligence (AI) and the world of architecture. The results obtained are that humans will live side by side with technology, now and in the future.
29

Wang, Meng-Xiang, Wang-Chien Lee, Tao-Yang Fu, and Ge Yu. "On Representation Learning for Road Networks." ACM Transactions on Intelligent Systems and Technology 12, no. 1 (December 22, 2020): 1–27. http://dx.doi.org/10.1145/3424346.

30

Lu, Run-kun, Jian-wei Liu, Si-ming Lian, and Xin Zuo. "Multi-view representation learning in multi-task scene." Neural Computing and Applications 32, no. 14 (October 29, 2019): 10403–22. http://dx.doi.org/10.1007/s00521-019-04577-z.

31

Xie, Ruobing, Stefan Heinrich, Zhiyuan Liu, Cornelius Weber, Yuan Yao, Stefan Wermter, and Maosong Sun. "Integrating Image-Based and Knowledge-Based Representation Learning." IEEE Transactions on Cognitive and Developmental Systems 12, no. 2 (June 2020): 169–78. http://dx.doi.org/10.1109/tcds.2019.2906685.

32

Sun, Yanan, Hua Mao, Yongsheng Sang, and Zhang Yi. "Explicit guiding auto-encoders for learning meaningful representation." Neural Computing and Applications 28, no. 3 (October 20, 2015): 429–36. http://dx.doi.org/10.1007/s00521-015-2082-x.

33

Dietterich, T. G. "Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition." Journal of Artificial Intelligence Research 13 (November 1, 2000): 227–303. http://dx.doi.org/10.1613/jair.639.

Abstract:
This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a procedural semantics---as a subroutine hierarchy---and a declarative semantics---as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. It is based on the assumption that the programmer can identify useful subgoals and define subtasks that achieve these subgoals. By defining such subgoals, the programmer constrains the set of policies that need to be considered during reinforcement learning. The MAXQ value function decomposition can represent the value function of any policy that is consistent with the given hierarchy. The decomposition also creates opportunities to exploit state abstractions, so that individual MDPs within the hierarchy can ignore large parts of the state space. This is important for the practical application of the method. This paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. 
The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this non-hierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.
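The additive decomposition at the heart of MAXQ can be sketched in a few lines. The subtasks, states, and numbers below are invented for illustration; only the identity Q(parent, s, a) = V(a, s) + C(parent, s, a) and V(parent, s) = max_a Q(parent, s, a) comes from the paper.

```python
# Toy two-level hierarchy: Root chooses between subtasks "GoNorth" and "GoEast".
# V_sub holds the (hypothetical) value of completing each subtask from a state;
# C_root holds the learned completion value for Root after the subtask finishes.
V_sub = {("GoNorth", "s0"): 1.0, ("GoEast", "s0"): 2.0}
C_root = {("s0", "GoNorth"): 0.25, ("s0", "GoEast"): -0.5}

def q_root(state, action):
    # MAXQ decomposition: Q(Root, s, a) = V(a, s) + C(Root, s, a)
    return V_sub[(action, state)] + C_root[(state, action)]

def v_root(state):
    # V(Root, s) = max_a Q(Root, s, a)
    return max(q_root(state, a) for a in ("GoNorth", "GoEast"))

print(v_root("s0"))  # GoNorth: 1.0 + 0.25 = 1.25; GoEast: 2.0 - 0.5 = 1.5 → prints 1.5
```

In the full algorithm both tables are themselves recursively decomposed and learned online by MAXQ-Q; this sketch only shows how the additive value structure composes.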
34

Kocabas, S. "A review of learning." Knowledge Engineering Review 6, no. 3 (September 1991): 195–222. http://dx.doi.org/10.1017/s0269888900005804.

Abstract:
Learning is one of the important research fields in artificial intelligence. This paper begins with an outline of the definitions of learning and intelligence, followed by a discussion of the aims of machine learning as an emerging science, and an historical outline of machine learning. The paper then examines the elements and various classifications of learning, and then introduces a new classification of learning based on the levels of representation and learning as knowledge-, symbol- and device-level learning. Similarity- and explanation-based generalization and conceptual clustering are described as knowledge level learning methods. Learning in classifiers, genetic algorithms and classifier systems are described as symbol level learning, and neural networks are described as device level systems. In accordance with this classification, methods of learning are described in terms of inputs, learning algorithms or devices, and outputs. Then there follows a discussion on the relationships between knowledge representation and learning, and a discussion on the limits of learning in knowledge systems. The paper concludes with a summary of the results drawn from this review.
35

O’Mahony, Niall, Sean Campbell, Lenka Krpalkova, Anderson Carvalho, Joseph Walsh, and Daniel Riordan. "Representation Learning for Fine-Grained Change Detection." Sensors 21, no. 13 (June 30, 2021): 4486. http://dx.doi.org/10.3390/s21134486.

Abstract:
Fine-grained change detection in sensor data is very challenging for artificial intelligence though it is critically important in practice. It is the process of identifying differences in the state of an object or phenomenon where the differences are class-specific and are difficult to generalise. As a result, many recent technologies that leverage big data and deep learning struggle with this task. This review focuses on the state-of-the-art methods, applications, and challenges of representation learning for fine-grained change detection. Our research focuses on methods of harnessing the latent metric space of representation learning techniques as an interim output for hybrid human-machine intelligence. We review methods for transforming and projecting embedding space such that significant changes can be communicated more effectively and a more comprehensive interpretation of underlying relationships in sensor data is facilitated. We conduct this research in our work towards developing a method for aligning the axes of latent embedding space with meaningful real-world metrics so that the reasoning behind the detection of change in relation to past observations may be revealed and adjusted. This is an important topic in many fields concerned with producing more meaningful and explainable outputs from deep learning and also for providing means for knowledge injection and model calibration in order to maintain user confidence.
36

Lesort, Timothée, Natalia Díaz-Rodríguez, Jean-François Goudou, and David Filliat. "State representation learning for control: An overview." Neural Networks 108 (December 2018): 379–92. http://dx.doi.org/10.1016/j.neunet.2018.07.006.

37

FRANKLIN, JUDY A., and KRYSTAL K. LOCKE. "RECURRENT NEURAL NETWORKS FOR MUSICAL PITCH MEMORY AND CLASSIFICATION." International Journal on Artificial Intelligence Tools 14, no. 01n02 (February 2005): 329–42. http://dx.doi.org/10.1142/s0218213005002120.

Abstract:
We present results from experiments in using several pitch representations for jazz-oriented musical tasks performed by a recurrent neural network. We have run experiments with several kinds of recurrent networks for this purpose, and have found that Long Short-term Memory networks provide the best results. We show that a new pitch representation called Circles of Thirds works as well as two other published representations for these tasks, yet it is more succinct and enables faster learning. We then discuss limited results using other types of networks on the same tasks.
38

Shui, Changjian, Boyu Wang, and Christian Gagné. "On the benefits of representation regularization in invariance based domain generalization." Machine Learning 111, no. 3 (January 1, 2022): 895–915. http://dx.doi.org/10.1007/s10994-021-06080-w.

Abstract:
A crucial aspect of reliable machine learning is to design a deployable system for generalizing to new related but unobserved environments. Domain generalization aims to alleviate such a prediction gap between the observed and unseen environments. Previous approaches commonly incorporated learning the invariant representation for achieving good empirical performance. In this paper, we reveal that merely learning the invariant representation is vulnerable to the related unseen environment. To this end, we derive a novel theoretical analysis to control the unseen test environment error in the representation learning, which highlights the importance of controlling the smoothness of representation. In practice, our analysis further inspires an efficient regularization method to improve the robustness in domain generalization. The proposed regularization is orthogonal to and can be straightforwardly adopted in existing domain generalization algorithms that ensure invariant representation learning. Empirical results show that our algorithm outperforms the base versions in various datasets and invariance criteria.
39

Li, Fuzhen, Zhenfeng Zhu, Xingxing Zhang, Jian Cheng, and Yao Zhao. "Diffusion induced graph representation learning." Neurocomputing 360 (September 2019): 220–29. http://dx.doi.org/10.1016/j.neucom.2019.06.012.

40

Littman, David, and Maarten van Someren. "International Workshop on Knowledge Representation and Organization in Machine Learning." AI Communications 1, no. 1 (1988): 44–45. http://dx.doi.org/10.3233/aic-1988-1108.

41

Zeng, Deyu, Jing Sun, Zongze Wu, Chris Ding, and Zhigang Ren. "Data representation learning via dictionary learning and self-representation." Applied Intelligence, August 31, 2023. http://dx.doi.org/10.1007/s10489-023-04902-z.

42

Merckling, Astrid, Nicolas Perrin-Gilbert, Alex Coninx, and Stéphane Doncieux. "Exploratory State Representation Learning." Frontiers in Robotics and AI 9 (February 14, 2022). http://dx.doi.org/10.3389/frobt.2022.762051.

Abstract:
Not having access to compact and meaningful representations is known to significantly increase the complexity of reinforcement learning (RL). For this reason, it can be useful to perform state representation learning (SRL) before tackling RL tasks. However, obtaining a good state representation can only be done if a large diversity of transitions is observed, which can require a difficult exploration, especially if the environment is initially reward-free. To solve the problems of exploration and SRL in parallel, we propose a new approach called XSRL (eXploratory State Representation Learning). On one hand, it jointly learns compact state representations and a state transition estimator which is used to remove unexploitable information from the representations. On the other hand, it continuously trains an inverse model, and adds to the prediction error of this model a k-step learning progress bonus to form the maximization objective of a discovery policy. This results in a policy that seeks complex transitions from which the trained models can effectively learn. Our experimental results show that the approach leads to efficient exploration in challenging environments with image observations, and to state representations that significantly accelerate learning in RL tasks.
43

Deshmukh, Aniket Anand, Jayanth Reddy Regatti, Eren Manavoglu, and Urun Dogan. "Representation learning for clustering via building consensus." Machine Learning, September 9, 2022. http://dx.doi.org/10.1007/s10994-022-06194-9.

Abstract:
In this paper, we focus on unsupervised representation learning for clustering of images. Recent advances in deep clustering and unsupervised representation learning are based on the idea that different views of an input image (generated through data augmentation techniques) must be close in the representation space (exemplar consistency), and/or similar images must have similar cluster assignments (population consistency). We define an additional notion of consistency, consensus consistency, which ensures that representations are learned to induce similar partitions for variations in the representation space, different clustering algorithms or different initializations of a single clustering algorithm. We define a clustering loss by executing variations in the representation space and seamlessly integrate all three consistencies (consensus, exemplar and population) into an end-to-end learning framework. The proposed algorithm, consensus clustering using unsupervised representation learning (ConCURL), improves upon the clustering performance of state-of-the-art methods on four out of five image datasets. Furthermore, we extend the evaluation procedure for clustering to reflect the challenges encountered in real-world clustering tasks, such as maintaining clustering performance in cases with distribution shifts. We also perform a detailed ablation study for a deeper understanding of the proposed algorithm. The code and the trained models are available at https://github.com/JayanthRR/ConCURL_NCE.
44

Xu, Lingling, Haoran Xie, Zongxi Li, Fu Lee Wang, Weiming Wang, and Qing Li. "Contrastive Learning Models for Sentence Representations." ACM Transactions on Intelligent Systems and Technology, May 2, 2023. http://dx.doi.org/10.1145/3593590.

Abstract:
Sentence representation learning is a crucial task in natural language processing (NLP), as the quality of learned representations directly influences downstream tasks, such as sentence classification and sentiment analysis. Transformer-based pretrained language models (PLMs) such as bidirectional encoder representations from transformers (BERT) have been extensively applied to various NLP tasks, and have exhibited moderately good performance. However, the anisotropy of the learned embedding space prevents BERT sentence embeddings from achieving good results in the semantic textual similarity tasks. It has been shown that contrastive learning can alleviate the anisotropy problem and significantly improve sentence representation performance. Therefore, there has been a surge in the development of models that utilize contrastive learning to finetune BERT-like PLMs to learn sentence representations. But no systematic review of contrastive learning models for sentence representations has been conducted. To fill this gap, this paper summarizes and categorizes the contrastive learning-based sentence representation models, common evaluation tasks for assessing the quality of learned representations, and future research directions. Furthermore, we select several representative models for exhaustive experiments to illustrate the quantitative improvement of various strategies on sentence representations.
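The contrastive objective such models fine-tune with is typically an InfoNCE-style loss over paired views of a batch of sentences. The numpy sketch below (variable names and toy inputs are ours; real models would feed encoder outputs) shows the basic computation:

```python
import numpy as np

def info_nce(z1, z2, tau=0.05):
    # z1[i] and z2[i] are embeddings of two views of sentence i
    # (e.g. two dropout-augmented passes in SimCSE-style training).
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # temperature-scaled cosine similarities
    # Row-wise log-softmax; the matching pair sits on the diagonal (the positive).
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

z = np.eye(4)  # four well-separated toy "sentence embeddings"
# Correctly paired views incur a much lower loss than mismatched pairings:
print(info_nce(z, z) < info_nce(z, z[::-1].copy()))  # → True
```

The temperature `tau` controls how sharply the loss concentrates on hard negatives; 0.05 is a commonly used value, not one mandated by the surveyed models.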
45

Wang, Yuwei, and Yi Zeng. "Statistical Analysis of Multisensory and Text-Derived Representations on Concept Learning." Frontiers in Computational Neuroscience 16 (April 27, 2022). http://dx.doi.org/10.3389/fncom.2022.861265.

Abstract:
When learning concepts, cognitive psychology research has revealed that there are two types of concept representations in the human brain: language-derived codes and sensory-derived codes. For the objective of human-like artificial intelligence, we expect to provide multisensory and text-derived representations for concepts in AI systems. Psychologists and computer scientists have published lots of datasets for the two kinds of representations, but as far as we know, no systematic work exists to analyze them together. We do a statistical study on them in this work. We want to know if multisensory vectors and text-derived vectors reflect conceptual understanding and if they are complementary in terms of cognition. Four experiments are presented in this work, all focused on multisensory representations labeled by psychologists and text-derived representations generated by computer scientists for concept learning, and the results demonstrate that (1) for the same concept, both forms of representations can properly reflect the concept, but (2) the representational similarity analysis findings reveal that the two types of representations are significantly different, (3) as the concreteness of the concept grows larger, the multisensory representation of the concept becomes closer to human beings than the text-derived representation, and (4) we verified that combining the two improves the concept representation.
46

Wickstrøm, Kristoffer K., Daniel J. Trosten, Sigurd Løkse, Ahcène Boubekki, Karl Øyvind Mikalsen, Michael C. Kampffmeyer, and Robert Jenssen. "RELAX: Representation Learning Explainability." International Journal of Computer Vision, March 11, 2023. http://dx.doi.org/10.1007/s11263-023-01773-2.

Abstract:
Despite the significant improvements that self-supervised representation learning has led to when learning from unlabeled data, no methods have been developed that explain what influences the learned representation. We address this need through our proposed approach, RELAX, which is the first approach for attribution-based explanations of representations. Our approach can also model the uncertainty in its explanations, which is essential to produce trustworthy explanations. RELAX explains representations by measuring similarities in the representation space between an input and masked out versions of itself, providing intuitive explanations that significantly outperform the gradient-based baselines. We provide theoretical interpretations of RELAX and conduct a novel analysis of feature extractors trained using supervised and unsupervised learning, providing insights into different learning strategies. Moreover, we conduct a user study to assess how well the proposed approach aligns with human intuition and show that the proposed method outperforms the baselines in both the quantitative and human evaluation studies. Finally, we illustrate the usability of RELAX in several use cases and highlight that incorporating uncertainty can be essential for providing faithful explanations, taking a crucial step towards explaining representations.
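The masking idea behind RELAX can be illustrated with a toy sketch (our own simplification, not the authors' implementation): the importance of an input element is the average similarity between the representation of the input and the representations of randomly masked copies that keep that element.

```python
import numpy as np

rng = np.random.default_rng(0)

def relax_importance(x, encoder, n_masks=500, keep_prob=0.5):
    # Attribute representation similarity back to the input elements each mask keeps.
    h_x = encoder(x)
    h_x = h_x / np.linalg.norm(h_x)
    scores = np.zeros_like(x, dtype=float)
    kept = np.zeros_like(x, dtype=float)
    for _ in range(n_masks):
        m = (rng.random(x.shape) < keep_prob).astype(float)
        h_m = encoder(x * m)
        s = float(h_m @ h_x) / (np.linalg.norm(h_m) + 1e-12)
        scores += s * m  # credit this mask's similarity to the elements it kept
        kept += m
    return scores / np.maximum(kept, 1.0)

# Hypothetical feature extractor that only uses the first input element:
toy_encoder = lambda v: np.array([v[0], 1.0])
imp = relax_importance(np.array([5.0, 5.0]), toy_encoder)
print(imp[0] > imp[1])  # the element the encoder relies on scores higher → True
```

The paper additionally estimates the uncertainty of these attributions across masks; this sketch shows only the mean-importance part of the idea.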
47

Higgins, Irina, Sébastien Racanière, and Danilo Rezende. "Symmetry-Based Representations for Artificial and Biological General Intelligence." Frontiers in Computational Neuroscience 16 (April 14, 2022). http://dx.doi.org/10.3389/fncom.2022.836498.

Abstract:
Biological intelligence is remarkable in its ability to produce complex behavior in many diverse situations through data efficient, generalizable, and transferable skill acquisition. It is believed that learning “good” sensory representations is important for enabling this, however there is little agreement as to what a good representation should look like. In this review article we are going to argue that symmetry transformations are a fundamental principle that can guide our search for what makes a good representation. The idea that there exist transformations (symmetries) that affect some aspects of the system but not others, and their relationship to conserved quantities has become central in modern physics, resulting in a more unified theoretical framework and even the ability to predict the existence of new particles. Recently, symmetries have started to gain prominence in machine learning too, resulting in more data efficient and generalizable algorithms that can mimic some of the complex behaviors produced by biological intelligence. Finally, first demonstrations of the importance of symmetry transformations for representation learning in the brain are starting to arise in neuroscience. Taken together, the overwhelming positive effect that symmetries bring to these disciplines suggests that they may be an important general framework that determines the structure of the universe, constrains the nature of natural tasks and consequently shapes both biological and artificial intelligence.
48

Jeub, Lucas G. S., Giovanni Colavizza, Xiaowen Dong, Marya Bazzi, and Mihai Cucuringu. "Local2Global: a distributed approach for scaling representation learning on graphs." Machine Learning, February 24, 2023. http://dx.doi.org/10.1007/s10994-022-06285-7.

Abstract:
We propose a decentralised “local2global” approach to graph representation learning, that one can use a priori to scale any embedding technique. Our local2global approach proceeds by first dividing the input graph into overlapping subgraphs (or “patches”) and training local representations for each patch independently. In a second step, we combine the local representations into a globally consistent representation by estimating the set of rigid motions that best align the local representations using information from the patch overlaps, via group synchronization. A key distinguishing feature of local2global relative to existing work is that patches are trained independently without the need for the often costly parameter synchronization during distributed training. This allows local2global to scale to large-scale industrial applications, where the input graph may not even fit into memory and may be stored in a distributed manner. We apply local2global on data sets of different sizes and show that our approach achieves a good trade-off between scale and accuracy on edge reconstruction and semi-supervised classification. We also consider the downstream task of anomaly detection and show how one can use local2global to highlight anomalies in cybersecurity networks.
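The per-pair alignment step can be illustrated with orthogonal Procrustes on the patch overlap. This is a simplified sketch of our own: it recovers only a rotation, ignoring the translation and scale estimation and the group-synchronization step of the full method.

```python
import numpy as np

def align_patch(overlap_a, overlap_b):
    # overlap_a, overlap_b: (n_overlap, d) embeddings of the SAME nodes as seen
    # from patch A and patch B. Returns the orthogonal R minimizing
    # ||overlap_b @ R - overlap_a|| (orthogonal Procrustes via SVD).
    u, _, vt = np.linalg.svd(overlap_b.T @ overlap_a)
    return u @ vt

rng = np.random.default_rng(1)
x = rng.normal(size=(10, 2))  # patch-A coordinates of the overlap nodes
theta = 0.7                   # patch B saw the same nodes rotated by theta
q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
r = align_patch(x, x @ q)
print(np.allclose((x @ q) @ r, x))  # recovered motion re-aligns patch B → True
```

In the full method, such pairwise estimates from all overlapping patches are reconciled jointly via group synchronization, so a single inconsistent overlap does not corrupt the global embedding.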
49

Ouyang, Tinghui, and Xun Shen. "Representation learning based on hybrid polynomial approximated extreme learning machine." Applied Intelligence, October 26, 2021. http://dx.doi.org/10.1007/s10489-021-02915-0.

50

Borrego-Díaz, Joaquín, and Juan Galán Páez. "Knowledge representation for explainable artificial intelligence." Complex & Intelligent Systems, January 4, 2022. http://dx.doi.org/10.1007/s40747-021-00613-5.

Abstract:
Alongside the particular need to explain the behavior of black box artificial intelligence (AI) systems, there is a general need to explain the behavior of any type of AI-based system (the explainable AI, XAI) or complex system that integrates this type of technology, due to the importance of its economic, political or industrial rights impact. The unstoppable development of AI-based applications in sensitive areas has led to what could be seen, from a formal and philosophical point of view, as some sort of crisis in the foundations, for which it is necessary both to provide models of the fundamentals of explainability as well as to discuss the advantages and disadvantages of different proposals. The need for foundations is also linked to the permanent challenge that the notion of explainability represents in Philosophy of Science. The paper aims to elaborate a general theoretical framework to discuss foundational characteristics of explaining, as well as how solutions (events) would be justified (explained). The approach, epistemological in nature, is based on the phenomenological-based approach to complex systems reconstruction (which encompasses complex AI-based systems). The formalized perspective is close to ideas from argumentation and induction (as learning). The soundness and limitations of the approach are addressed from Knowledge representation and reasoning paradigm and, in particular, from Computational Logic point of view. With regard to the latter, the proposal is intertwined with several related notions of explanation coming from the Philosophy of Science.
