Journal articles on the topic "Dynamic Representation Learning"

To see the other types of publications on this topic, follow the link: Dynamic Representation Learning.

Cite a source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Dynamic Representation Learning".

Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lee, Jungmin, and Wongyoung Lee. "Aspects of A Study on the Multi Presentational Metaphor Education Using Online Telestration." Korean Society of Culture and Convergence 44, no. 9 (September 30, 2022): 163–73. http://dx.doi.org/10.33645/cnc.2022.9.44.9.163.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study proposes a multiple-representational metaphor education model that combines linguistic representation and visual representation using online telestration. The advent of the media and online era has made not only the understanding of linguistic representation but also the understanding of visual representation an important phase of cognitive behavior, and it requires the implementation of online learning. To meet these needs, teaching and learning should use metaphors as a tool for thinking and cognition in an online environment, leading learners to a new horizon of perception by combining linguistic and visual representation. The multiple-representational metaphor education model using online telestration supports two-way dynamic interaction in an online environment and can improve learning capabilities by expressing various representations. Multiple-representational metaphor education using online telestration will allow learners to consider new perspectives and diverse possibilities of expression for interpreting the world by converging and rephrasing verbal and visual representations using media in an online environment.
2

Biswal, Siddharth, Cao Xiao, Lucas M. Glass, Elizabeth Milkovits, and Jimeng Sun. "Doctor2Vec: Dynamic Doctor Representation Learning for Clinical Trial Recruitment." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 557–64. http://dx.doi.org/10.1609/aaai.v34i01.5394.

Abstract:
Massive electronic health records (EHRs) enable the success of learning accurate patient representations to support various predictive health applications. In contrast, doctor representation has not been well studied, despite the pivotal roles doctors play in healthcare. How do we construct the right doctor representations? How do we use doctor representations to solve important health analytic problems? In this work, we study the problem of clinical trial recruitment, which is about identifying the right doctors to help conduct the trials based on the trial description and the patient EHR data of those doctors. We propose Doctor2Vec, which simultaneously learns 1) doctor representations from EHR data and 2) trial representations from the description and categorical information about the trials. In particular, Doctor2Vec utilizes a dynamic memory network where the doctor's experience with patients is stored in the memory bank and the network dynamically assigns weights based on the trial representation via an attention mechanism. Validated on large real-world trials and EHR data including 2,609 trials, 25K doctors and 430K patients, Doctor2Vec demonstrated improved performance over the best baseline by up to 8.7% in PR-AUC. We also demonstrated that the Doctor2Vec embedding can be transferred to benefit data-insufficient settings, including trial recruitment in less populated or newly explored countries with 13.7% improvement, or for rare diseases with 8.1% improvement in PR-AUC.
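The attention-weighted memory readout described above can be illustrated with generic scaled dot-product attention over a memory bank. This is a minimal NumPy sketch, not the paper's actual architecture; the function name and toy sizes are assumptions:

```python
import numpy as np

def attention_readout(memory, query):
    """Attend over a memory bank of patient summaries with a trial query.

    memory: (num_patients, d) array, query: (d,) array.
    Returns a (d,) doctor vector as an attention-weighted sum of memory rows.
    """
    scores = memory @ query / np.sqrt(memory.shape[1])  # scaled dot products
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()  # softmax attention weights
    return weights @ memory   # convex combination of memory slots

rng = np.random.default_rng(0)
memory = rng.normal(size=(5, 8))  # toy memory bank: 5 patient summaries
query = rng.normal(size=8)        # toy trial embedding
doctor_vec = attention_readout(memory, query)
print(doctor_vec.shape)  # (8,)
```

The softmax weights let the trial query decide which past patient experience dominates the doctor representation.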
3

Wang, Xingqi, Mengrui Zhang, Bin Chen, Dan Wei, and Yanli Shao. "Dynamic Weighted Multitask Learning and Contrastive Learning for Multimodal Sentiment Analysis." Electronics 12, no. 13 (July 7, 2023): 2986. http://dx.doi.org/10.3390/electronics12132986.

Abstract:
Multimodal sentiment analysis (MSA) has attracted increasing attention in recent years. This paper focuses on the representation learning of multimodal data to achieve better prediction results. We propose a model to assist in learning modality representations with multitask learning and contrastive learning. In addition, our approach obtains dynamic weights by considering the homoscedastic uncertainty of each task in multitask learning. Specifically, we design two groups of subtasks, which predict the sentiment polarity of unimodal and bimodal representations, to assist in learning representations through a hard parameter-sharing mechanism in the upstream neural network. A loss weight is learned according to the homoscedastic uncertainty of each task. Moreover, a training strategy based on contrastive learning is designed to balance the inconsistency between training and inference caused by the randomness of the dropout layer. This method minimizes the MSE between two submodels. Experimental results on the MOSI and MOSEI datasets show our method achieves better performance than the current state-of-the-art methods by comprehensively considering the intramodality and intermodality interaction information.
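Loss weighting by homoscedastic uncertainty typically follows the well-known Kendall-et-al.-style formulation: each task loss is scaled by exp(-s_i), with a learnable s_i = log(sigma_i^2), plus s_i as a regularizer. Whether this paper uses exactly this form is an assumption; a minimal sketch with illustrative names and values:

```python
import numpy as np

def weighted_multitask_loss(task_losses, log_vars):
    """Homoscedastic-uncertainty weighting: scale each task loss L_i by
    exp(-s_i) and add s_i, where s_i = log(sigma_i^2) is learnable per task."""
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# Two subtasks (e.g., unimodal and bimodal polarity prediction) with equal
# uncertainty reduce to a plain sum of the losses:
print(weighted_multitask_loss([0.5, 1.5], [0.0, 0.0]))  # 2.0
```

Raising a task's log-variance automatically down-weights that task, which is what makes the weights "dynamic" during training.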
4

Goyal, Palash, Sujit Rokka Chhetri, and Arquimedes Canedo. "dyngraph2vec: Capturing network dynamics using dynamic graph representation learning." Knowledge-Based Systems 187 (January 2020): 104816. http://dx.doi.org/10.1016/j.knosys.2019.06.024.

5

Han, Liangzhe, Ruixing Zhang, Leilei Sun, Bowen Du, Yanjie Fu, and Tongyu Zhu. "Generic and Dynamic Graph Representation Learning for Crowd Flow Modeling." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4293–301. http://dx.doi.org/10.1609/aaai.v37i4.25548.

Abstract:
Many deep spatio-temporal learning methods have been proposed for crowd flow modeling in recent years. However, most of them focus on designing a spatial and temporal convolution mechanism to aggregate information from nearby nodes and historical observations for a pre-defined prediction task. Different from the existing research, this paper aims to provide a generic and dynamic representation learning method for crowd flow modeling. The main idea of our method is to maintain a continuous-time representation for each node, and to update the representations of all nodes continuously according to the streaming observed data. Along this line, a particular encoder-decoder architecture is proposed, where the encoder converts newly arrived transactions into a timestamped message, and the representations of the related nodes are then updated according to the generated message. The role of the decoder is to guide the representation learning process by reconstructing the observed transactions based on the most recent node representations. Moreover, a number of virtual nodes are added to discover macro-level spatial patterns and to share representations among spatially interacting stations. Experiments have been conducted on two real-world datasets for four popular prediction tasks in crowd flow modeling. The results demonstrate that our method achieves better prediction performance on all the tasks than baseline methods.
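The streaming idea above, where only the nodes touched by a new transaction have their continuous-time representations refreshed, can be illustrated with a toy sketch. The update rule, names, and data layout here are illustrative assumptions, not the paper's encoder-decoder:

```python
import numpy as np

def apply_event(reps, src, dst, feat, lr=0.5):
    """Streaming update sketch: a new transaction (src -> dst with feature
    vector `feat`) is treated as a message, and only the representations of
    the two involved nodes are moved toward it; all other nodes keep their
    current state unchanged."""
    message = np.asarray(feat, dtype=float)
    for node in (src, dst):
        reps[node] = (1 - lr) * reps[node] + lr * message
    return reps

reps = {0: np.zeros(2), 1: np.zeros(2), 2: np.zeros(2)}
reps = apply_event(reps, src=0, dst=1, feat=[1.0, 0.0])
print(reps[0], reps[2])  # node 2 is untouched by the event
```

Processing events one at a time like this is what makes the representations continuous-time rather than snapshot-based.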
6

Jiao, Pengfei, Hongjiang Chen, Huijun Tang, Qing Bao, Long Zhang, Zhidong Zhao, and Huaming Wu. "Contrastive representation learning on dynamic networks." Neural Networks 174 (June 2024): 106240. http://dx.doi.org/10.1016/j.neunet.2024.106240.

7

Radulescu, Angela, Yeon Soon Shin, and Yael Niv. "Human Representation Learning." Annual Review of Neuroscience 44, no. 1 (July 8, 2021): 253–73. http://dx.doi.org/10.1146/annurev-neuro-092920-120559.

Abstract:
The central theme of this review is the dynamic interaction between information selection and learning. We pose a fundamental question about this interaction: How do we learn what features of our experiences are worth learning about? In humans, this process depends on attention and memory, two cognitive functions that together constrain representations of the world to features that are relevant for goal attainment. Recent evidence suggests that the representations shaped by attention and memory are themselves inferred from experience with each task. We review this evidence and place it in the context of work that has explicitly characterized representation learning as statistical inference. We discuss how inference can be scaled to real-world decisions by approximating beliefs based on a small number of experiences. Finally, we highlight some implications of this inference process for human decision-making in social environments.
8

Liu, Dianbo, Alex Lamb, Xu Ji, Pascal Junior Tikeng Notsawo, Michael Mozer, Yoshua Bengio, and Kenji Kawaguchi. "Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization for Heterogeneous Representational Coarseness." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8825–33. http://dx.doi.org/10.1609/aaai.v37i7.26061.

Abstract:
Vector Quantization (VQ) is a method for discretizing latent representations and has become a major part of the deep learning toolkit. It has been theoretically and empirically shown that discretization of representations leads to improved generalization, including in reinforcement learning, where discretization can be used to bottleneck multi-agent communication to promote agent specialization and robustness. The discretization tightness of most VQ-based methods is defined by the number of discrete codes in the representation vector and the codebook size, which are fixed as hyperparameters. In this work, we propose learning to dynamically select discretization tightness conditioned on inputs, based on the hypothesis that data naturally contains variations in complexity that call for different levels of representational coarseness, as is observed in many heterogeneous data sets. We show that dynamically varying tightness in communication bottlenecks can improve model performance on visual reasoning and reinforcement learning tasks with heterogeneity in representations.
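The core VQ operation this builds on, mapping each latent vector to its nearest codebook entry, can be sketched as follows. This is a generic sketch of vector quantization, not the paper's adaptive-tightness mechanism:

```python
import numpy as np

def quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry (L2 distance).
    latents: (n, d), codebook: (k, d) -> code indices (n,) and quantized (n, d)."""
    # pairwise squared distances between latents and codes
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])        # k = 2 discrete codes
latents = np.array([[0.1, -0.2], [0.9, 1.2]])        # two continuous latents
idx, quantized = quantize(latents, codebook)
print(idx)  # [0 1]
```

The "tightness" the abstract refers to is governed by the codebook size k and the number of codes per representation, which this paper makes input-dependent instead of fixed.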
9

Deng, Yongjian, Hao Chen, and Youfu Li. "A Dynamic GCN with Cross-Representation Distillation for Event-Based Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1492–500. http://dx.doi.org/10.1609/aaai.v38i2.27914.

Abstract:
Recent advances in event-based research prioritize sparsity and temporal precision. Approaches that learn sparse point-based representations through graph CNNs (GCNs) have become more popular. Yet these graph techniques perform worse than their frame-based counterparts due to two issues: (i) biased graph structures that don't properly incorporate varied attributes (such as semantics, and spatial and temporal signals) for each vertex, resulting in inaccurate graph representations, and (ii) a shortage of robust pretrained models. Here we solve the first problem by proposing a new event-based GCN (EDGCN), with a dynamic aggregation module to integrate all attributes of vertices adaptively. To address the second problem, we introduce a novel learning framework called cross-representation distillation (CRD), which leverages the dense representation of events as a cross-representation auxiliary to provide additional supervision and prior knowledge for the event graph. This frame-to-graph distillation allows us to benefit from the large-scale priors provided by CNNs while still retaining the advantages of graph-based models. Extensive experiments show our model and learning framework are effective and generalize well across multiple vision tasks.
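The frame-to-graph distillation idea, a student supervised by a teacher's softened outputs, can be illustrated with the standard knowledge-distillation cross-entropy. The temperature, names, and exact objective here are assumptions; the paper's CRD loss may differ:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the softened teacher distribution (dense,
    frame-based CNN) and the student distribution (event graph)."""
    p = softmax(teacher_logits, T)  # teacher targets
    q = softmax(student_logits, T)  # student predictions
    return float(-(p * np.log(q + 1e-12)).sum())

# Matching the teacher gives a lower loss than contradicting it:
print(distill_loss([2.0, 0.0, 0.0], [2.0, 0.0, 0.0])
      < distill_loss([0.0, 2.0, 0.0], [2.0, 0.0, 0.0]))  # True
```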
10

Li, Jintang, Zhouxin Yu, Zulun Zhu, Liang Chen, Qi Yu, Zibin Zheng, Sheng Tian, Ruofan Wu, and Changhua Meng. "Scaling Up Dynamic Graph Representation Learning via Spiking Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8588–96. http://dx.doi.org/10.1609/aaai.v37i7.26034.

Abstract:
Recent years have seen a surge in research on dynamic graph representation learning, which aims to model temporal graphs that are dynamic and evolving constantly over time. However, current work typically models graph dynamics with recurrent neural networks (RNNs), making them suffer seriously from computation and memory overheads on large temporal graphs. So far, scalability of dynamic graph representation learning on large temporal graphs remains one of the major challenges. In this paper, we present a scalable framework, namely SpikeNet, to efficiently capture the temporal and structural patterns of temporal graphs. We explore a new direction in that we can capture the evolving dynamics of temporal graphs with spiking neural networks (SNNs) instead of RNNs. As a low-power alternative to RNNs, SNNs explicitly model graph dynamics as spike trains of neuron populations and enable spike-based propagation in an efficient way. Experiments on three large real-world temporal graph datasets demonstrate that SpikeNet outperforms strong baselines on the temporal node classification task with lower computational costs. Particularly, SpikeNet generalizes to a large temporal graph (2.7M nodes and 13.9M edges) with significantly fewer parameters and computation overheads.
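The spiking neurons SpikeNet builds on can be illustrated with a minimal leaky integrate-and-fire sketch. This is a generic SNN building block with assumed parameters, not the paper's exact neuron model:

```python
import numpy as np

def lif_spike_train(inputs, decay=0.8, threshold=1.0):
    """Leaky integrate-and-fire dynamics: the membrane potential leaks by
    `decay` each step, integrates the input current, and emits a binary
    spike (with hard reset) whenever it crosses `threshold`."""
    v, spikes = 0.0, []
    for x in inputs:
        v = decay * v + x
        fired = v >= threshold
        spikes.append(int(fired))
        if fired:
            v = 0.0  # reset after a spike
    return spikes

print(lif_spike_train([0.6, 0.6, 0.6, 0.2]))  # [0, 1, 0, 0]
```

Because the state is a scalar potential and the output a binary spike train, propagation is much cheaper than the dense hidden-state updates of an RNN, which is the efficiency argument the abstract makes.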
11

Wei, Hao, Guyu Hu, Wei Bai, Shiming Xia, and Zhisong Pan. "Lifelong representation learning in dynamic attributed networks." Neurocomputing 358 (September 2019): 1–9. http://dx.doi.org/10.1016/j.neucom.2019.05.038.

12

Lee, Dongha, Xiaoqian Jiang, and Hwanjo Yu. "Harmonized representation learning on dynamic EHR graphs." Journal of Biomedical Informatics 106 (June 2020): 103426. http://dx.doi.org/10.1016/j.jbi.2020.103426.

13

Wu, Wei, and Xuemeng Zhai. "DyLFG: A Dynamic Network Learning Framework Based on Geometry." Entropy 25, no. 12 (November 30, 2023): 1611. http://dx.doi.org/10.3390/e25121611.

Abstract:
Dynamic network representation learning has recently attracted increasing attention because real-world networks evolve over time; that is, nodes and edges join or leave the network over time. Different from static networks, representation learning for dynamic networks should consider not only how to capture the structural information of network snapshots, but also how to capture the temporal dynamics of network structure evolution from the snapshot sequence. Existing work on dynamic network representation has two main problems: (1) a significant number of methods target dynamic networks that only allow nodes to be added over time, not removed, which reduces the applicability of such methods to real-world networks; (2) at present, most network-embedding methods, especially dynamic network representation learning approaches, use a Euclidean embedding space, yet the network itself is geometrically non-Euclidean, which leads to geometric inconsistencies between the embedding space and the underlying space of the network and can affect the performance of the model. To solve these two problems, we propose a geometry-based dynamic network learning framework, namely DyLFG. Our proposed framework targets dynamic networks that allow nodes and edges to join or exit the network over time. To extract the structural information of network snapshots, we designed a new hyperbolic geometry processing layer, which differs from the previous literature. To deal with the temporal dynamics of the snapshot sequence, we propose a gated recurrent unit (GRU) module based on Ricci curvature, namely the RGRU. In the proposed framework, we used a temporal attention layer and the RGRU to evolve the neural network weight matrix and capture temporal dynamics in the snapshot sequence. The experimental results showed that our model outperformed the baseline approaches on the baseline datasets.
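The RGRU modifies the standard GRU cell. For reference, here is a minimal sketch of the vanilla GRU update it builds on; the Ricci-curvature-based gating itself is specific to the paper and not reproduced here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One standard GRU update: update gate z, reset gate r, candidate state
    h_tilde, and a convex combination of the old and candidate states."""
    z = sigmoid(Wz @ x + Uz @ h)            # update gate
    r = sigmoid(Wr @ x + Ur @ h)            # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

d = 3
I = np.eye(d)  # toy weight matrices for illustration
h_next = gru_step(np.zeros(d), np.ones(d), I, I, I, I, I, I)
print(h_next.shape)  # (3,)
```

The tanh candidate keeps every coordinate of the new state inside (-1, 1), which is the bounded recurrence the RGRU inherits.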
14

Huang, Yicong, and Zhuliang Yu. "Representation Learning for Dynamic Functional Connectivities via Variational Dynamic Graph Latent Variable Models." Entropy 24, no. 2 (January 19, 2022): 152. http://dx.doi.org/10.3390/e24020152.

Abstract:
Latent variable models (LVMs) for neural population spikes have revealed informative low-dimensional dynamics about the neural data and have become powerful tools for analyzing and interpreting neural activity. However, these approaches are unable to determine the neurophysiological meaning of the inferred latent dynamics. On the other hand, emerging evidence suggests that dynamic functional connectivities (DFC) may be responsible for neural activity patterns underlying cognition or behavior. We are interested in studying how DFC are associated with the low-dimensional structure of neural activities. Most existing LVMs are based on a point process and fail to model evolving relationships. In this work, we introduce a dynamic graph as the latent variable and develop a Variational Dynamic Graph Latent Variable Model (VDGLVM), a representation learning model based on the variational information bottleneck framework. VDGLVM utilizes a graph generative model and a graph neural network to capture dynamic communication between nodes that one has no access to from the observed data. The proposed computational model provides guaranteed behavior-decoding performance and improves LVMs by associating the inferred latent dynamics with probable DFC.
15

Christensen, Andrew J., Ananya Sen Gupta, and Ivars Kirsteins. "Graph representation learning on braid manifolds." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A39. http://dx.doi.org/10.1121/10.0015466.

Abstract:
The accuracy of autonomous sonar target recognition systems is usually hindered by morphing target features, unknown target geometry, and uncertainty caused by waveguide distortions to the signal. Common "black-box" neural networks are not effective in addressing these challenges since they do not produce physically interpretable features. This work seeks to use recent advancements in machine learning to extract braid features that can be interpreted by a domain expert. We utilize Graph Neural Networks (GNNs) to discover braid manifolds in sonar ping spectra data. This approach represents the sonar ping data as a sequence of timestamped, sparse, dynamic graphs. These dynamic graph sequences are used as input into a GNN to produce feature dictionaries. GNNs' ability to learn on complex systems of interactions helps make them resilient to environmental uncertainty. To learn the evolving braid-like features of the sonar ping spectra graphs, a modified variation of Temporal Graph Networks (TGNs) is used. TGNs can perform prediction and classification tasks on timestamped dynamic graphs. The modified TGN in this work models the evolution of the sonar ping spectra graph to eventually perform graph-based classification. [Work supported by ONR grant N00014-21-1-2420.]
16

Cadieu, Charles F., and Bruno A. Olshausen. "Learning Intermediate-Level Representations of Form and Motion from Natural Movies." Neural Computation 24, no. 4 (April 2012): 827–66. http://dx.doi.org/10.1162/neco_a_00247.

Abstract:
We present a model of intermediate-level visual representation that is based on learning invariances from movies of the natural environment. The model is composed of two stages of processing: an early feature representation layer and a second layer in which invariances are explicitly represented. Invariances are learned as the result of factoring apart the temporally stable and dynamic components embedded in the early feature representation. The structure contained in these components is made explicit in the activities of second-layer units that capture invariances in both form and motion. When trained on natural movies, the first layer produces a factorization, or separation, of image content into a temporally persistent part representing local edge structure and a dynamic part representing local motion structure, consistent with known response properties in early visual cortex (area V1). This factorization linearizes statistical dependencies among the first-layer units, making them learnable by the second layer. The second-layer units are split into two populations according to the factorization in the first layer. The form-selective units receive their input from the temporally persistent part (local edge structure) and after training result in a diverse set of higher-order shape features consisting of extended contours, multiscale edges, textures, and texture boundaries. The motion-selective units receive their input from the dynamic part (local motion structure) and after training result in a representation of image translation over different spatial scales and directions, in addition to more complex deformations. These representations provide a rich description of dynamic natural images and testable hypotheses regarding intermediate-level representation in visual cortex.
17

Sun, Li, Zhongbao Zhang, Jiawei Zhang, Feiyang Wang, Hao Peng, Sen Su, and Philip S. Yu. "Hyperbolic Variational Graph Neural Network for Modeling Dynamic Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 4375–83. http://dx.doi.org/10.1609/aaai.v35i5.16563.

Abstract:
Learning representations for graphs plays a critical role in a wide spectrum of downstream applications. In this paper, we summarize the limitations of prior work in three respects: representation space, modeling dynamics, and modeling uncertainty. To bridge this gap, we propose, for the first time, to learn dynamic graph representations in hyperbolic space, with the aim of inferring stochastic node representations. Working with hyperbolic space, we present a novel Hyperbolic Variational Graph Neural Network, referred to as HVGNN. In particular, to model the dynamics, we introduce a Temporal GNN (TGNN) based on a theoretically grounded time encoding approach. To model the uncertainty, we devise a hyperbolic graph variational autoencoder built upon the proposed TGNN to generate stochastic node representations of hyperbolic normal distributions. Furthermore, we introduce a reparameterisable sampling algorithm for the hyperbolic normal distribution to enable the gradient-based learning of HVGNN. Extensive experiments show that HVGNN outperforms state-of-the-art baselines on real-world datasets.
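Temporal GNNs of this kind commonly use a Bochner-style functional time encoding, mapping a timestamp through learnable cosine features. Whether HVGNN uses exactly this form is an assumption based on common practice; a minimal sketch:

```python
import numpy as np

def time_encoding(t, omegas, phases):
    """Bochner-style functional time encoding: map a scalar timestamp t to
    the feature vector [cos(omega_i * t + phase_i)], where the frequencies
    `omegas` and `phases` would be learnable in a full model."""
    return np.cos(np.asarray(omegas) * t + np.asarray(phases))

enc = time_encoding(0.0, omegas=[1.0, 2.0, 4.0], phases=[0.0, 0.0, 0.0])
print(enc)  # [1. 1. 1.]
```

With multiple frequencies, the encoding lets the network compare event times at several temporal scales at once.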
18

Zheng, Tingyi, Yilin Zhang, and Yuhang Wang. "Dynamic guided metric representation learning for multi-view clustering." PeerJ Computer Science 8 (March 8, 2022): e922. http://dx.doi.org/10.7717/peerj-cs.922.

Abstract:
Multi-view clustering (MVC) is a mainstream task that aims to divide objects into meaningful groups from different perspectives. The quality of data representation is the key issue in MVC. A comprehensive, meaningful data representation should combine discriminative characteristics within a single view with the correlations across multiple views. Considering this, a novel framework called Dynamic Guided Metric Representation Learning for Multi-View Clustering (DGMRL-MVC) is proposed in this paper, which can cluster multi-view data in a learned latent discriminative embedding space. Specifically, in the framework, the data representation is enhanced in multiple steps. First, class separability is enforced with Fisher Discriminant Analysis (FDA) within each single view, while the consistency among different views is enhanced based on the Hilbert-Schmidt independence criterion (HSIC), yielding the first enhanced representation. In the second step, a dynamic routing mechanism is introduced, in which location or direction information is added to enrich the representation. After that, a generalized canonical correlation analysis (GCCA) model is used to obtain the final common discriminative representation. The learned fusion representation can substantially improve multi-view clustering performance. Experiments validated the effectiveness of the proposed method for clustering tasks.
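The HSIC term used to enforce cross-view consistency has a simple empirical estimator, trace(KHLH)/(n-1)^2, with H the centering matrix. A minimal sketch; the kernel choices and names are illustrative, not the paper's configuration:

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC of two n-by-n kernel matrices:
    trace(K H L H) / (n - 1)^2, with H = I - (1/n) * ones the centering
    matrix. Larger values indicate stronger dependence between the views."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 1))
K = x @ x.T              # linear kernel on view 1
L = (2 * x) @ (2 * x).T  # view 2 is a scaled copy of view 1
print(hsic(K, L) > 0)  # dependent views give a positive HSIC
```

Because HSIC is linear in each kernel, scaling one view's kernel by 4 scales the statistic by exactly 4, which makes the estimator easy to sanity-check.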
19

Ljubešić, Nikola. "‟Deep lexicography” – Fad or Opportunity?" Rasprave Instituta za hrvatski jezik i jezikoslovlje 46, no. 2 (October 30, 2020): 839–52. http://dx.doi.org/10.31724/rihjj.46.2.21.

Abstract:
In recent years, we have been witnessing staggering improvements in various semantic data processing tasks due to developments in the area of deep learning, ranging from image and video processing to speech processing and natural language understanding. In this paper, we discuss the opportunities and challenges that these developments pose for the area of electronic lexicography. We primarily focus on the concept of representation learning of the basic elements of language, namely words, and the applicability of these word representations to lexicography. We first discuss well-known approaches to learning static representations of words, the so-called word embeddings, and their usage in lexicography-related tasks such as semantic shift detection and cross-lingual prediction of lexical features such as concreteness and imageability. We wrap up the paper with the most recent developments in the area of word representation learning, namely learning dynamic, context-aware representations of words, showcasing some dynamic word embedding examples and discussing improvements on the lexicography-relevant tasks of word sense disambiguation and word sense induction.
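Semantic shift detection with static embeddings typically reduces to comparing a word's vectors from two aligned time-period models by cosine similarity: a low similarity signals that the word's usage has drifted. A toy sketch with made-up vectors (the values are illustrative assumptions):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings of one word trained on two corpora from different
# periods (assumed already aligned to a shared space):
vec_1990 = np.array([0.9, 0.1, 0.0])
vec_2020 = np.array([0.1, 0.9, 0.2])
print(round(cosine(vec_1990, vec_2020), 2))  # low similarity -> likely shift
```

Dynamic, context-aware embeddings refine this picture by comparing distributions of per-occurrence vectors rather than a single vector per period.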
20

Li, Bin, Yunlong Fan, Miao Gao, Yikemaiti Sataer, and Zhiqiang Gao. "A Joint-Learning-Based Dynamic Graph Learning Framework for Structured Prediction." Electronics 12, no. 11 (May 23, 2023): 2357. http://dx.doi.org/10.3390/electronics12112357.

Abstract:
Graph neural networks (GNNs) have achieved remarkable success in structured prediction, owing to the GNNs’ powerful ability in learning expressive graph representations. However, most of these works learn graph representations based on a static graph constructed by an existing parser, suffering from two drawbacks: (1) the static graph might be error-prone, and the errors introduced in the static graph cannot be corrected and might accumulate in later stages, and (2) the graph construction stage and graph representation learning stage are disjoined, which negatively affects the model’s running speed. In this paper, we propose a joint-learning-based dynamic graph learning framework and apply it to two typical structured prediction tasks: syntactic dependency parsing, which aims to predict a labeled tree, and semantic dependency parsing, which aims to predict a labeled graph, for jointly learning the graph structure and graph representations. Experiments are conducted on four datasets: the Universal Dependencies 2.2, the Chinese Treebank 5.1, the English Penn Treebank 3.0 in 13 languages for syntactic dependency parsing, and the SemEval-2015 Task 18 dataset in three languages for semantic dependency parsing. The experimental results show that our best-performing model achieves a new state-of-the-art performance on most language sets of syntactic dependency and semantic dependency parsing. In addition, our model also has an advantage in running speed over the static graph-based learning model. The outstanding performance demonstrates the effectiveness of the proposed framework in structured prediction.
21

Geng, Shijie, Peng Gao, Moitreya Chatterjee, Chiori Hori, Jonathan Le Roux, Yongfeng Zhang, Hongsheng Li, and Anoop Cherian. "Dynamic Graph Representation Learning for Video Dialog via Multi-Modal Shuffled Transformers." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1415–23. http://dx.doi.org/10.1609/aaai.v35i2.16231.

Abstract:
Given an input video, its associated audio, and a brief caption, the audio-visual scene aware dialog (AVSD) task requires an agent to indulge in a question-answer dialog with a human about the audio-visual content. This task thus poses a challenging multi-modal representation learning and reasoning scenario, advancements into which could influence several human-machine interaction applications. To solve this task, we introduce a semantics-controlled multi-modal shuffled Transformer reasoning framework, consisting of a sequence of Transformer modules, each taking a modality as input and producing representations conditioned on the input question. Our proposed Transformer variant uses a shuffling scheme on their multi-head outputs, demonstrating better regularization. To encode fine-grained visual information, we present a novel dynamic scene graph representation learning pipeline that consists of an intra-frame reasoning layer producing spatio-semantic graph representations for every frame, and an inter-frame aggregation module capturing temporal cues. Our entire pipeline is trained end-to-end. We present experiments on the benchmark AVSD dataset, both on answer generation and selection tasks. Our results demonstrate state-of-the-art performances on all evaluation metrics.
22

Velasquez, Alvaro, Brett Bissey, Lior Barak, Daniel Melcer, Andre Beckus, Ismail Alkhouri, and George Atia. "Multi-Agent Tree Search with Dynamic Reward Shaping." Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 652–61. http://dx.doi.org/10.1609/icaps.v32i1.19854.

Abstract:
Sparse rewards and their representation in multi-agent domains remain a challenge for the development of multi-agent planning systems. While techniques from formal methods can be adopted to represent the underlying planning objectives, their use in facilitating and accelerating learning has received limited attention in multi-agent settings. Reward shaping methods that leverage such formal representations in single-agent settings are typically static, in the sense that the artificial rewards remain the same throughout the entire learning process. In contrast, we investigate the use of such formal objective representations to define novel reward shaping functions that capture the learned experience of the agents. More specifically, we leverage the automaton representation of the underlying team objectives in mixed cooperative-competitive domains such that each automaton transition is assigned an expected value proportional to the frequency with which it was observed in successful trajectories of past behavior. This form of dynamic reward shaping is proposed within a multi-agent tree search architecture wherein agents can simultaneously reason about the future behavior of other agents as well as their own future behavior.
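The frequency-based transition valuation can be sketched as a simple counter over successful trajectories. The data layout (trajectories as lists of automaton transitions) and the normalization are illustrative assumptions:

```python
from collections import Counter

def transition_values(successful_trajectories):
    """Value each automaton transition in proportion to how often it appears
    in successful past trajectories (a frequency-based shaping signal)."""
    counts = Counter(t for traj in successful_trajectories for t in traj)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# Two successful runs; the transition (q0 -> q1) appears in both:
vals = transition_values([[("q0", "q1"), ("q1", "q2")], [("q0", "q1")]])
print(vals[("q0", "q1")])  # the most frequently observed transition
```

Because the counts are recomputed as experience accumulates, the resulting shaped rewards change over training, which is what makes the shaping "dynamic" rather than static.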
23

Ren, Xiaobin, Kaiqi Zhao, Patricia J. Riddle, Katerina Taskova, Qingyi Pan, and Lianyan Li. "DAMR: Dynamic Adjacency Matrix Representation Learning for Multivariate Time Series Imputation." Proceedings of the ACM on Management of Data 1, no. 2 (June 13, 2023): 1–25. http://dx.doi.org/10.1145/3589333.

Abstract:
Missing data imputation for location-based sensor data has attracted much attention in recent years. The state-of-the-art imputation methods based on graph neural networks make the a priori assumption that the spatial correlations between sensor locations are static. However, real-world data sets often exhibit dynamic spatial correlations. This paper proposes a novel approach to capturing the dynamics of spatial correlations between geographical locations as a composition of constant terms, long-term trends, and periodic patterns. To this end, we design a new method called Dynamic Adjacency Matrix Representation (DAMR) that extracts various dynamic patterns of spatial correlations and represents them as adjacency matrices. The adjacency matrices are then aggregated and fed into a well-designed graph representation learning layer for predicting the missing values. Through extensive experiments on six real-world data sets, we demonstrate that DAMR reduces the MAE by up to 19.4% compared with the state-of-the-art methods on the missing value imputation task.
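The decomposition described above can be illustrated with a toy time-varying adjacency matrix composed of a constant base, a slow long-term trend, and a periodic pattern (purely illustrative; DAMR learns these components from data rather than generating them):

```python
import numpy as np

def toy_dynamic_adjacency(n, t, period=24, seed=0):
    """Compose a time-varying adjacency matrix from a constant base,
    a slow long-term trend, and a periodic pattern, then normalize."""
    rng = np.random.default_rng(seed)
    base = rng.random((n, n))                      # constant correlations
    trend = 0.01 * t * rng.random((n, n))          # long-term drift
    periodic = 0.5 * (1 + np.sin(2 * np.pi * t / period)) * rng.random((n, n))
    A = base + trend + periodic
    return A / A.max()                             # scale into [0, 1]

A = toy_dynamic_adjacency(4, t=10)
```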
24

Achille, Alessandro, and Stefano Soatto. "A Separation Principle for Control in the Age of Deep Learning." Annual Review of Control, Robotics, and Autonomous Systems 1, no. 1 (May 28, 2018): 287–307. http://dx.doi.org/10.1146/annurev-control-060117-105140.

Abstract:
We review the problem of defining and inferring a state for a control system based on complex, high-dimensional, highly uncertain measurement streams, such as videos. Such a state, or representation, should contain all and only the information needed for control and discount nuisance variability in the data. It should also have finite complexity, ideally modulated depending on available resources. This representation is what we want to store in memory in lieu of the data, as it separates the control task from the measurement process. For the trivial case with no dynamics, a representation can be inferred by minimizing the information bottleneck Lagrangian in a function class realized by deep neural networks. The resulting representation has much higher dimension than the data (already in the millions) but is smaller in the sense of information content, retaining only what is needed for the task. This process also yields representations that are invariant to nuisance factors and have maximally independent components. We extend these ideas to the dynamic case, where the representation is the posterior density of the task variable given the measurements up to the current time, which is in general much simpler than the prediction density maintained by the classical Bayesian filter. Again, this can be finitely parameterized using a deep neural network, and some applications are already beginning to emerge. No explicit assumption of Markovianity is needed; instead, complexity trades off approximation of an optimal representation, including the degree of Markovianity.
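The information bottleneck Lagrangian mentioned in the abstract takes the standard form, minimized over the representation z of data x for task variable y (standard notation, not specific to this paper):

```latex
\mathcal{L}_{\mathrm{IB}} = I(x; z) - \beta \, I(z; y)
```

where β trades off compression of the data against retention of task-relevant information.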
25

Perlovsky, Leonid, and Gary Kuvich. "Machine Learning and Cognitive Algorithms for Engineering Applications." International Journal of Cognitive Informatics and Natural Intelligence 7, no. 4 (October 2013): 64–82. http://dx.doi.org/10.4018/ijcini.2013100104.

Abstract:
Mind is based on intelligent cognitive processes, which are not limited to language and logic alone. Thought is a set of informational processes in the brain, and such processes have the same rationale as any other systematic informational processes. Their specifics are determined by how the brain stores, structures, and processes this information. A systematic approach allows representing them in a diagrammatic form that can be formalized. A semiotic approach allows for the universal representation of such diagrams. In that approach, logic is a way of synthesizing such structures, which is a small but clearly visible tip of the iceberg. Most effort was traditionally put into logic without paying much attention to the rest of the mechanisms that make the entire thought system work autonomously. Dynamic fuzzy logic is reviewed and its connections with semiotics are established. Dynamic fuzzy logic extends fuzzy logic in the direction of logic-processes, which include processes of fuzzification and defuzzification as parts of logic. The paper reviews basic cognitive mechanisms, including instinctual drives, emotional and conceptual mechanisms, perception, cognition, language, and a model of interaction between language and cognition built upon the new semiotic models. The model of interacting cognition and language is organized in an approximate hierarchy of mental representations, from sensory percepts at the “bottom” to objects, contexts, situations, abstract concepts-representations, and the most general representations at the “top” of the mental hierarchy. The knowledge instinct and emotions are driving feedbacks for these representations. Interactions of bottom-up and top-down processes in such a hierarchical semiotic representation are essential for modeling cognition. Dynamic fuzzy logic is analyzed as a fundamental mechanism of these processes. Future research directions are discussed.
26

Geng, Yu, Zongbo Han, Changqing Zhang, and Qinghua Hu. "Uncertainty-Aware Multi-View Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7545–53. http://dx.doi.org/10.1609/aaai.v35i9.16924.

Abstract:
Learning from different data views by exploring the underlying complementary information among them can endow the representation with stronger expressive ability. However, high-dimensional features tend to contain noise, and furthermore, the quality of data usually varies for different samples (even for different views), i.e., one view may be informative for one sample but not for another. Therefore, it is quite challenging to integrate multi-view noisy data in an unsupervised setting. Traditional multi-view methods either simply treat each view with equal importance or tune the weights of different views to fixed values, which are insufficient to capture the dynamic noise in multi-view data. In this work, we devise a novel unsupervised multi-view learning approach, termed Dynamic Uncertainty-Aware Networks (DUA-Nets). Guided by the uncertainty of data estimated from the generation perspective, intrinsic information from multiple views is integrated to obtain noise-free representations. With the help of uncertainty estimation, DUA-Nets weighs each view of an individual sample according to data quality so that the high-quality samples (or views) can be fully exploited while the effects from the noisy samples (or views) are alleviated. Our model achieves superior performance in extensive experiments and shows robustness to noisy data.
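A minimal sketch of the weighting idea: fuse each view's representation by its precision (inverse estimated variance), so low-uncertainty views dominate. This shows only the weighting scheme, not the DUA-Nets architecture, and the variance estimates are assumed given:

```python
def fuse_views(views, variances):
    """Precision-weighted fusion of per-view feature vectors:
    views with lower estimated variance contribute more."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    weights = [p / total for p in precisions]
    dim = len(views[0])
    return [sum(w * view[d] for w, view in zip(weights, views))
            for d in range(dim)]

# The first view is far more certain, so the fusion stays close to it.
fused = fuse_views([[1.0, 2.0], [3.0, 4.0]], variances=[0.1, 1.0])
```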
27

Malloy, Tyler, Yinuo Du, Fei Fang, and Cleotilde Gonzalez. "Generative Environment-Representation Instance-Based Learning: A Cognitive Model." Proceedings of the AAAI Symposium Series 2, no. 1 (January 22, 2024): 326–33. http://dx.doi.org/10.1609/aaaiss.v2i1.27696.

Abstract:
Instance-Based Learning Theory (IBLT) suggests that humans learn to engage in dynamic decision making tasks through the accumulation of experiences, represented by the decision task features, the actions performed, and the utility of decision outcomes. This theory has been applied to the design of Instance-Based Learning (IBL) models of human behavior in a variety of contexts. One key feature of all IBL model applications is the method of accumulating instance-based memory and performing recognition-based retrieval. In simple tasks with few features, this knowledge representation and retrieval could hypothetically be done using all relevant information. However, these methods do not scale well to complex tasks when exhaustive enumeration of features is unfeasible. This requires cognitive modelers to design task-specific representations of state features, as well as similarity metrics, which can be time consuming and fail to generalize to related tasks. To address this issue, we leverage recent advancements in Artificial Neural Networks, specifically generative models (GMs), to learn representations of complex dynamic decision making tasks without relying on domain knowledge. We evaluate a range of GMs in their usefulness in forming representations that can be used by IBL models to predict human behavior in a complex decision making task. This work connects generative and cognitive models by using GMs to form representations and determine similarity.
28

Lv, Feiya, Chenglin Wen, and Meiqin Liu. "Dynamic reconstruction based representation learning for multivariable process monitoring." Journal of Process Control 81 (September 2019): 112–25. http://dx.doi.org/10.1016/j.jprocont.2019.06.012.

29

Yin, Ying, Li-Xin Ji, Jian-Peng Zhang, and Yu-Long Pei. "DHNE: Network Representation Learning Method for Dynamic Heterogeneous Networks." IEEE Access 7 (2019): 134782–92. http://dx.doi.org/10.1109/access.2019.2942221.

30

Zhang, Xiaoxian, Jianpei Zhang, and Jing Yang. "Large-scale dynamic social data representation for structure feature learning." Journal of Intelligent & Fuzzy Systems 39, no. 4 (October 21, 2020): 5253–62. http://dx.doi.org/10.3233/jifs-189010.

Abstract:
The problems caused by the curse of network dimensionality and by computational complexity have become important issues in social network research. Existing methods for network feature learning are mostly built on static, small-scale assumptions, with no learning adapted to the unique attributes of social networks; they therefore cannot handle the dynamic, large-scale, and even super-large-scale nature of current social networks. This paper studies feature representation learning for large-scale dynamic social network structure. Positive and negative damping sampling of network nodes in different classes is carried out, and a dynamic feature learning method for newly added nodes is constructed, which makes the model feasible for extracting structural features of large-scale social networks as they change dynamically. The obtained node feature representations have better dynamic robustness. Through experiments on dynamic link prediction over three real large-scale dynamic social network datasets, it is found that DNPS achieves a large performance improvement over the benchmark models in terms of prediction accuracy and time efficiency, with the best results when the α value is around 0.7.
31

Najafi, Bahareh, Saeedeh Parsaeefard, and Alberto Leon-Garcia. "Entropy-Aware Time-Varying Graph Neural Networks with Generalized Temporal Hawkes Process: Dynamic Link Prediction in the Presence of Node Addition and Deletion." Machine Learning and Knowledge Extraction 5, no. 4 (October 4, 2023): 1359–81. http://dx.doi.org/10.3390/make5040069.

Abstract:
This paper addresses the problem of learning temporal graph representations, which capture the changing nature of complex evolving networks. Existing approaches mainly focus on adding new nodes and edges to capture dynamic graph structures. However, to achieve more accurate representation of graph evolution, we consider both the addition and deletion of nodes and edges as events. These events occur at irregular time scales and are modeled using temporal point processes. Our goal is to learn the conditional intensity function of the temporal point process to investigate the influence of deletion events on node representation learning for link-level prediction. We incorporate network entropy, a measure of node and edge significance, to capture the effect of node deletion and edge removal in our framework. Additionally, we leveraged the characteristics of a generalized temporal Hawkes process, which considers the inhibitory effects of events where past occurrences can reduce future intensity. This framework enables dynamic representation learning by effectively modeling both addition and deletion events in the temporal graph. To evaluate our approach, we utilize autonomous system graphs, a family of inhomogeneous sparse graphs with instances of node and edge additions and deletions, in a link prediction task. By integrating these enhancements into our framework, we improve the accuracy of dynamic link prediction and enable better understanding of the dynamic evolution of complex networks.
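The conditional intensity of such a generalized Hawkes process can be sketched as a baseline rate plus exponentially decaying kernels, where addition events carry positive weights and deletion events negative (inhibitory) ones. All parameter values below are illustrative, not taken from the paper:

```python
import math

def intensity(t, events, mu=0.2, beta=1.0):
    """Conditional intensity at time t given past (time, weight) events;
    positive weights excite, negative weights inhibit, clipped at zero."""
    s = mu + sum(a * math.exp(-beta * (t - ti)) for ti, a in events if ti <= t)
    return max(s, 0.0)

excited = intensity(1.0, [(0.0, 0.5)])                 # an addition event
inhibited = intensity(1.0, [(0.0, 0.5), (0.5, -0.4)])  # plus a deletion event
```

The inhibitory term lowers the future intensity, mirroring how deletion events reduce the expected rate of new interactions.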
32

Lai, Songxuan, Lianwen Jin, Luojun Lin, Yecheng Zhu, and Huiyun Mao. "SynSig2Vec: Learning Representations from Synthetic Dynamic Signatures for Real-World Verification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 735–42. http://dx.doi.org/10.1609/aaai.v34i01.5416.

Abstract:
An open research problem in automatic signature verification is the skilled forgery attacks. However, the skilled forgeries are very difficult to acquire for representation learning. To tackle this issue, this paper proposes to learn dynamic signature representations through ranking synthesized signatures. First, a neuromotor inspired signature synthesis method is proposed to synthesize signatures with different distortion levels for any template signature. Then, given the templates, we construct a lightweight one-dimensional convolutional network to learn to rank the synthesized samples, and directly optimize the average precision of the ranking to exploit relative and fine-grained signature similarities. Finally, after training, fixed-length representations can be extracted from dynamic signatures of variable lengths for verification. One highlight of our method is that it requires neither skilled nor random forgeries for training, yet it surpasses the state-of-the-art by a large margin on two public benchmarks.
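The ranking objective amounts to optimizing average precision. For reference, plain (non-differentiable) average precision over a scored list looks like this; the paper optimizes a differentiable surrogate, which this sketch does not show:

```python
def average_precision(scores, labels):
    """Average precision of a ranking: sort by descending score and
    average the precision observed at each positive item."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

# Positives land at ranks 1 and 3: AP = (1/1 + 2/3) / 2 = 5/6.
ap = average_precision([0.9, 0.8, 0.1], [1, 0, 1])
```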
33

Liu, Hao, Jindong Han, Yanjie Fu, Jingbo Zhou, Xinjiang Lu, and Hui Xiong. "Multi-modal transportation recommendation with unified route representation learning." Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 342–50. http://dx.doi.org/10.14778/3430915.3430924.

Abstract:
Multi-modal transportation recommendation aims to provide the most appropriate travel route with various transportation modes according to certain criteria. After analyzing large-scale navigation data, we find that route representations exhibit two patterns: spatio-temporal autocorrelations within transportation networks and the semantic coherence of route sequences. However, few studies consider both patterns when developing multi-modal transportation systems. To this end, in this paper, we study multi-modal transportation recommendation with unified route representation learning by exploiting both spatio-temporal dependencies in transportation networks and the semantic coherence of historical routes. Specifically, we propose to unify both dynamic graph representation learning and hierarchical multi-task learning for multi-modal transportation recommendations. Along this line, we first transform the multi-modal transportation network into time-dependent multi-view transportation graphs and propose a spatiotemporal graph neural network module to capture the spatial and temporal autocorrelation. Then, we introduce a coherence-aware attentive route representation learning module to project arbitrary-length routes into fixed-length representation vectors, with explicit modeling of route coherence from historical routes. Moreover, we develop a hierarchical multi-task learning module to differentiate route representations for different transport modes, guided by the final recommendation feedback as well as multiple auxiliary tasks equipped in different network layers. Extensive experimental results on two large-scale real-world datasets demonstrate that the proposed system outperforms eight baselines.
34

Jiang, Linxing Preston, and Rajesh P. N. Rao. "Dynamic predictive coding: A model of hierarchical sequence learning and prediction in the neocortex." PLOS Computational Biology 20, no. 2 (February 8, 2024): e1011801. http://dx.doi.org/10.1371/journal.pcbi.1011801.

Abstract:
We introduce dynamic predictive coding, a hierarchical model of spatiotemporal prediction and sequence learning in the neocortex. The model assumes that higher cortical levels modulate the temporal dynamics of lower levels, correcting their predictions of dynamics using prediction errors. As a result, lower levels form representations that encode sequences at shorter timescales (e.g., a single step) while higher levels form representations that encode sequences at longer timescales (e.g., an entire sequence). We tested this model using a two-level neural network, where the top-down modulation creates low-dimensional combinations of a set of learned temporal dynamics to explain input sequences. When trained on natural videos, the lower-level model neurons developed space-time receptive fields similar to those of simple cells in the primary visual cortex while the higher-level responses spanned longer timescales, mimicking temporal response hierarchies in the cortex. Additionally, the network’s hierarchical sequence representation exhibited both predictive and postdictive effects resembling those observed in visual motion processing in humans (e.g., in the flash-lag illusion). When coupled with an associative memory emulating the role of the hippocampus, the model allowed episodic memories to be stored and retrieved, supporting cue-triggered recall of an input sequence similar to activity recall in the visual cortex. When extended to three hierarchical levels, the model learned progressively more abstract temporal representations along the hierarchy. Taken together, our results suggest that cortical processing and learning of sequences can be interpreted as dynamic predictive coding based on a hierarchical spatiotemporal generative model of the visual world.
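The core prediction-error computation in a single predictive coding layer can be sketched as follows (a one-level toy with a fixed linear generative weight matrix; the paper's model additionally learns the weights and adds top-down modulation across two or three levels):

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.3 * rng.standard_normal((4, 3))  # fixed toy generative weights
x = rng.standard_normal(4)             # input "frame"
state = np.zeros(3)                    # latent state

def pc_step(x, state, W, lr=0.2):
    """Nudge the latent state to reduce the error between the input
    and the layer's prediction W @ state."""
    error = x - W @ state
    return state + lr * (W.T @ error), error

# Iterating the update drives the prediction error down.
for _ in range(50):
    state, error = pc_step(x, state, W)
```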
35

Huang, Ru, Zijian Chen, Jianhua He, and Xiaoli Chu. "Dynamic Heterogeneous User Generated Contents-Driven Relation Assessment via Graph Representation Learning." Sensors 22, no. 4 (February 11, 2022): 1402. http://dx.doi.org/10.3390/s22041402.

Abstract:
Cross-domain decision-making systems face a huge challenge from the rapidly emerging uneven quality of user-generated data, which places a heavy responsibility on online platforms. Current content analysis methods primarily concentrate on non-textual contents, such as images and videos themselves, while ignoring the interrelationship between each user post’s contents. In this paper, we propose a novel framework named community-aware dynamic heterogeneous graph embedding (CDHNE) for relationship assessment, capable of mining heterogeneous information, latent community structure and dynamic characteristics from user-generated contents (UGC), which aims to solve complex non-Euclidean structured problems. Specifically, we introduce a Markov-chain-based metapath to extract heterogeneous contents and semantics in UGC. An edge-centric attention mechanism is elaborated for localized feature aggregation. Thereafter, we obtain the node representations from a micro perspective and apply them to the discovery of global structure by a clustering technique. In order to uncover the temporal evolutionary patterns, we devise an encoder–decoder structure, containing multiple recurrent memory units, which helps to capture the dynamics for relation assessment efficiently and effectively. Extensive experiments on four real-world datasets demonstrate that CDHNE outperforms other baselines due to its comprehensive node representation, while also exhibiting superiority in relation assessment. The proposed model is presented as a method of breaking down the barriers between traditional UGC analysis and abstract network analysis.
36

Fang, Yang, Xiang Zhao, Peixin Huang, Weidong Xiao, and Maarten de Rijke. "Scalable Representation Learning for Dynamic Heterogeneous Information Networks via Metagraphs." ACM Transactions on Information Systems 40, no. 4 (October 31, 2022): 1–27. http://dx.doi.org/10.1145/3485189.

Abstract:
Content representation is a fundamental task in information retrieval. Representation learning is aimed at capturing features of an information object in a low-dimensional space. Most research on representation learning for heterogeneous information networks (HINs) focuses on static HINs. In practice, however, networks are dynamic and subject to constant change. In this article, we propose a novel and scalable representation learning model, M-DHIN, to explore the evolution of a dynamic HIN. We regard a dynamic HIN as a series of snapshots with different time stamps. We first use a static embedding method to learn the initial embeddings of a dynamic HIN at the first time stamp. We describe the features of the initial HIN via metagraphs, which retain more structural and semantic information than traditional path-oriented static models. We also adopt a complex embedding scheme to better distinguish between symmetric and asymmetric metagraphs. Unlike traditional models that process an entire network at each time stamp, we build a so-called change dataset that only includes nodes involved in a triadic closure or opening process, as well as newly added or deleted nodes. Then, we utilize the above metagraph-based mechanism to train on the change dataset. As a result of this setup, M-DHIN is scalable to large dynamic HINs since it only needs to model the entire HIN once while only the changed parts need to be processed over time. Existing dynamic embedding models only express the existing snapshots and cannot predict the future network structure. To equip M-DHIN with this ability, we introduce an LSTM-based deep autoencoder model that processes the evolution of the graph via an LSTM encoder and outputs the predicted graph. Finally, we evaluate the proposed model, M-DHIN, on real-life datasets and demonstrate that it significantly and consistently outperforms state-of-the-art models.
37

Threja Malhotra, Ashu, and Jasneet Kaur. "Exploring the Role of Technological Representations to Facilitate Mathematics Learning In E-Class." International Journal of Multidisciplinary Research Configuration 1, no. 3 (July 2021): 01–05. http://dx.doi.org/10.52984/ijomrc1301.

Abstract:
This paper explores the role played by technological representations used in e-classes during the pandemic in promoting peer and teacher-student interactions that provoke mathematical understanding. The analysis is based upon the Johnson Mathematical Representation Model as the theoretical framework, an extension of Lesh’s multimodal model of translations among representations. Findings of the study suggest that constructive tasks using dynamic pictorial representations were successful in capturing students’ interest and curiosity and provided ample opportunities for students to interact and think mathematically.
38

Feng, Pengbin, Jianfeng Ma, Teng Li, Xindi Ma, Ning Xi, and Di Lu. "Android Malware Detection via Graph Representation Learning." Mobile Information Systems 2021 (June 4, 2021): 1–14. http://dx.doi.org/10.1155/2021/5538841.

Abstract:
With the widespread usage of Android smartphones in our daily lives, the Android platform has become an attractive target for malware authors. There is an urgent need for developing an automatic malware detection approach to prevent the spread of malware. The low code coverage and poor efficiency of dynamic analysis limit the large-scale deployment of malware detection methods based on dynamic features. Therefore, researchers have proposed a plethora of detection approaches based on abundant static features to provide efficient malware detection. This paper explores the direction of Android malware detection based on graph representation learning. Without complex feature graph construction, we propose a new Android malware detection approach based on lightweight static analysis via a graph neural network (GNN). Instead of directly extracting Application Programming Interface (API) call information, we further analyze the source code of Android applications to extract high-level semantic information, which raises the barrier to evading detection. In particular, we construct approximate call graphs from function invocation relationships within an Android application to represent the application, and further extract intra-function attributes, including required permissions, security levels, and the semantics of Smali instructions via Word2Vec, to form the node attributes within the graph structure. Then, we use the graph neural network to generate a vector representation of the application, and malware detection is performed on this representation space. We conduct experiments on real-world application samples. The experimental results demonstrate that our approach achieves highly effective malware detection and outperforms state-of-the-art detection approaches.
39

Fu, Sichao, Weifeng Liu, Weili Guan, Yicong Zhou, Dapeng Tao, and Changsheng Xu. "Dynamic Graph Learning Convolutional Networks for Semi-supervised Classification." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–13. http://dx.doi.org/10.1145/3412846.

Abstract:
Over the past few years, graph representation learning (GRL) has received widespread attention for feature representations of non-Euclidean data. As a typical model of GRL, graph convolutional networks (GCN) fuse graph Laplacian-based static sample structural information. GCN thus generalizes convolutional neural networks to acquire sample representations with various high-order structures. However, most existing GCN-based variants depend on static data structural relationships, which results in extracted features that lack representativeness during the convolution process. To solve this problem, dynamic graph learning convolutional networks (DGLCN) are proposed for application to semi-supervised classification. First, we introduce a definition of the dynamic spectral graph convolution operation. It constantly optimizes the high-order structural relationships between data points according to the loss values of the loss function, and then fits the local geometry information of the data exactly. After optimizing our proposed definition with the first-order Chebyshev polynomial, we obtain a single-layer convolution rule for DGLCN. Due to the fusion of the optimized structural information in the learning process, multi-layer DGLCN can extract richer sample features to improve classification performance. Extensive experiments are conducted on citation network datasets to prove the effectiveness of DGLCN. Experimental results demonstrate that the proposed DGLCN obtains superior classification performance compared to several existing semi-supervised classification models.
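For reference, the static first-order graph convolution that DGLCN builds on can be written in a few lines. This sketch shows only the standard single-layer rule H = D^(-1/2) (A + I) D^(-1/2) X W, not the dynamic graph-learning step that DGLCN adds during training:

```python
import numpy as np

def gcn_layer(A, X, W):
    """Single first-order graph convolution with renormalized adjacency."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

# Triangle graph: with identity features and weights, every entry is 1/3.
A = np.ones((3, 3)) - np.eye(3)
H = gcn_layer(A, np.eye(3), np.eye(3))
```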
40

Xiang, Xintao, Tiancheng Huang, and Donglin Wang. "Learning to Evolve on Dynamic Graphs (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13091–92. http://dx.doi.org/10.1609/aaai.v36i11.21682.

Abstract:
Representation learning in dynamic graphs is a challenging problem because the topology of the graph and node features vary over time. This requires the model to effectively capture both graph topology information and temporal information. Most existing works are built on recurrent neural networks (RNNs), which are used to extract temporal information of dynamic graphs, and thus they inherit the drawbacks of RNNs. In this paper, we propose Learning to Evolve on Dynamic Graphs (LEDG) - a novel algorithm that jointly learns graph information and time information. Specifically, our approach utilizes gradient-based meta-learning to learn updating strategies that have better generalization ability than RNNs on snapshots. It is model-agnostic and thus can train any message-passing-based graph neural network (GNN) on dynamic graphs. To enhance the representation power, we disentangle the embeddings into time embeddings and graph intrinsic embeddings. We conduct experiments on various datasets and downstream tasks, and the experimental results validate the effectiveness of our method.
41

Huang, Zhenhua, Zhenyu Wang, and Rui Zhang. "Cascade2vec: Learning Dynamic Cascade Representation by Recurrent Graph Neural Networks." IEEE Access 7 (2019): 144800–144812. http://dx.doi.org/10.1109/access.2019.2942853.

42

Pan, Jianguo, Huan Li, Jiajun Teng, Qin Zhao, and Maozhen Li. "Dynamic Network Representation Learning Method Based on Improved GRU Network." Computing and Informatics 41, no. 6 (2022): 1491–509. http://dx.doi.org/10.31577/cai_2022_6_1491.

43

olde Scheper, Tjeerd V. "Criticality Analysis: Bio-Inspired Nonlinear Data Representation." Entropy 25, no. 12 (December 14, 2023): 1660. http://dx.doi.org/10.3390/e25121660.

Abstract:
The representation of arbitrary data in a biological system is one of the most elusive elements of biological information processing. The often logarithmic nature of information in amplitude and frequency presented to biosystems prevents simple encapsulation of the information contained in the input. Criticality Analysis (CA) is a bio-inspired method of information representation within a controlled Self-Organised Critical system that allows scale-free representation. This is based on the concept of a reservoir of dynamic behaviour in which self-similar data will create dynamic nonlinear representations. This unique projection of data preserves the similarity of data within a multidimensional neighbourhood. The input can be reduced dimensionally to a projection output that retains the features of the overall data, yet has a much simpler dynamic response. The method depends only on the Rate Control of Chaos applied to the underlying controlled models, which allows the encoding of arbitrary data and promises optimal encoding of data given biologically relevant networks of oscillators. The CA method allows for a biologically relevant encoding mechanism of arbitrary input to biosystems, creating a suitable model for information processing in varying complexity of organisms and scale-free data representation for machine learning.
44

Zhu, Yingjie, Gregory Nachtrab, Piper C. Keyes, William E. Allen, Liqun Luo, and Xiaoke Chen. "Dynamic salience processing in paraventricular thalamus gates associative learning." Science 362, no. 6413 (October 25, 2018): 423–29. http://dx.doi.org/10.1126/science.aat0481.

Abstract:
The salience of behaviorally relevant stimuli is dynamic and influenced by internal state and external environment. Monitoring such changes is critical for effective learning and flexible behavior, but the neuronal substrate for tracking the dynamics of stimulus salience is obscure. We found that neurons in the paraventricular thalamus (PVT) are robustly activated by a variety of behaviorally relevant events, including novel (“unfamiliar”) stimuli, reinforcing stimuli and their predicting cues, as well as omission of the expected reward. PVT responses are scaled with stimulus intensity and modulated by changes in homeostatic state or behavioral context. Inhibition of the PVT responses suppresses appetitive or aversive associative learning and reward extinction. Our findings demonstrate that the PVT gates associative learning by providing a dynamic representation of stimulus salience.
45

Wang, Lu, Georgia Hodges, and Juyeon Lee. "Connecting Macroscopic, Molecular, and Symbolic Representations with Immersive Technologies in High School Chemistry: The Case of Redox Reactions." Education Sciences 12, no. 7 (June 22, 2022): 428. http://dx.doi.org/10.3390/educsci12070428.

Abstract:
Redox reaction is a difficult concept to teach and learn in chemistry courses at the secondary level. Although the significance of connecting macroscopic, molecular, and symbolic levels of representation has been emphasized in the chemistry education literature, most redox instruction involves only macroscopic and symbolic representations. To address this challenge, we designed a blended-reality immersive environment (BRE) model, which blends a traditional experiment with immersive technologies to make the molecular representations of redox reactions visible. The effectiveness of this model in supporting students’ learning of redox reactions was first reported in a different article. In this paper, we further explore the features of BRE that drive learning gains. Results from six high school classes in the U.S. with 351 students indicate that integrating the molecular representation through adding the chemical bonds concept facilitates students in making connections between macroscopic and symbolic levels to promote learning. Dynamic demonstrations of electrons’ interaction with particles support students’ understanding of the nature of redox reactions. This study shows the promise of adopting immersive technologies to present all three representations of chemistry concepts in one learning model.
46

Cai, Yuanying, Chuheng Zhang, Wei Shen, Xuyun Zhang, Wenjie Ruan, and Longbo Huang. "RePreM: Representation Pre-training with Masked Model for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6879–87. http://dx.doi.org/10.1609/aaai.v37i6.25842.

Abstract:
Inspired by the recent success of sequence modeling in RL and the use of masked language model for pre-training, we propose a masked model for pre-training in RL, RePreM (Representation Pre-training with Masked Model), which trains the encoder combined with transformer blocks to predict the masked states or actions in a trajectory. RePreM is simple but effective compared to existing representation pre-training methods in RL. It avoids algorithmic sophistication (such as data augmentation or estimating multiple models) with sequence modeling and generates a representation that captures long-term dynamics well. Empirically, we demonstrate the effectiveness of RePreM in various tasks, including dynamic prediction, transfer learning, and sample-efficient RL with both value-based and actor-critic methods. Moreover, we show that RePreM scales well with dataset size, dataset quality, and the scale of the encoder, which indicates its potential towards big RL models.
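The masked-prediction objective this abstract describes can be illustrated on a toy trajectory. The sketch below is not RePreM: the linear interpolation stands in for the paper's encoder-plus-transformer predictor, and all shapes and numbers are made up for illustration. Only the training signal — reconstruct masked states, score the loss on masked positions only — mirrors the described setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy trajectory: T steps of a 4-dimensional state (stand-in for RL states).
T, d = 10, 4
states = rng.normal(size=(T, d))

# Mask a random subset of time steps, as a masked language model masks tokens.
mask = np.zeros(T, dtype=bool)
mask[rng.choice(T, size=3, replace=False)] = True

# Stand-in "model": linearly interpolate each masked state from its neighbours.
# A RePreM-style model would instead run the masked sequence through an
# encoder plus transformer blocks and predict the masked states.
pred = states.copy()
for t in np.flatnonzero(mask):
    lo, hi = max(t - 1, 0), min(t + 1, T - 1)
    pred[t] = 0.5 * (states[lo] + states[hi])

# Pre-training objective: mean squared error on the masked positions only.
loss = np.mean((pred[mask] - states[mask]) ** 2)
print(float(loss) >= 0.0)
```

Scoring only the masked positions is what forces the predictor to use long-range context rather than copy its input, which is the property the abstract credits for capturing long-term dynamics.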
47

Beng Lee, Chwee, Keck Voon Ling, Peter Reimann, Yudho Ahmad Diponegoro, Chia Heng Koh, and Derwin Chew. "Dynamic scaffolding in a cloud-based problem representation system." Campus-Wide Information Systems 31, no. 5 (October 28, 2014): 346–56. http://dx.doi.org/10.1108/cwis-02-2014-0006.

Abstract:
Purpose – The purpose of this paper is to argue for the need to develop pre-service teachers’ problem solving ability, in particular, in the context of real-world complex problems. Design/methodology/approach – To argue for the need to develop pre-service teachers’ problem solving skills, the authors describe a web-based problem representation system that is embedded with levels of scaffolding to support the claim. Findings – The authors’ conceptualisation of this cloud-based environment is also very much aligned with the development of pre-service teachers’ systems thinking. Teacher learning itself is a complex system that involves many processes, mechanisms and interactions of elements, and the outcomes may be highly unpredictable (Opfer and Pedder, 2011). As a result of the complex nature of teacher learning, it would be meaningful to frame teacher learning as a complex system. An approach to enable pre-service teachers to be aware of this complexity is to situate them in a systems thinking context. Originality/value – This paper discusses a system which was developed for problem solving. The levels of adaptive scaffolding embedded within the system are an innovation which is not found in other similar research projects.
48

Sun, Zheng, Shad A. Torrie, Andrew W. Sumsion, and Dah-Jye Lee. "Self-Supervised Facial Motion Representation Learning via Contrastive Subclips." Electronics 12, no. 6 (March 13, 2023): 1369. http://dx.doi.org/10.3390/electronics12061369.

Abstract:
Facial motion representation learning has become an exciting research topic, since biometric technologies are becoming more common in our daily lives. One of its applications is identity verification. After recording a dynamic facial motion video for enrollment, the user needs to show a matched facial appearance and make a facial motion the same as the enrollment for authentication. Some recent research papers have discussed the benefits of this new biometric technology and reported promising results for both static and dynamic facial motion verification tasks. Our work extends the existing approaches and introduces compound facial actions, which contain more than one dominant facial action in one utterance. We propose a new self-supervised pretraining method called contrastive subclips that improves the model performance with these more complex and secure facial motions. The experimental results show that the contrastive subclips method improves upon the baseline approaches, and the model performance for test data can reach 89.7% average precision.
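A contrastive pretraining objective of the kind the abstract names typically pulls embeddings of subclips from the same clip together and pushes other clips away. The sketch below is a generic InfoNCE loss on toy embeddings, not the paper's contrastive-subclips method; the embedding dimensions and the perturbation used to fake a "positive" subclip are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor should match its own positive against the rest."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # correct pair on the diagonal

# Toy embeddings of subclips cut from 8 facial-motion clips (dimension 16).
# Two overlapping subclips of the same clip form a positive pair; here the
# second view is faked as a small perturbation of the first.
emb_a = rng.normal(size=(8, 16))
emb_b = emb_a + 0.05 * rng.normal(size=(8, 16))
print(info_nce(emb_a, emb_b) < info_nce(emb_a, rng.normal(size=(8, 16))))
```

The printed comparison shows the intended behaviour: matched subclip pairs incur a much lower loss than randomly paired clips, which is the gradient signal that shapes the motion representation.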
49

Schoeneman, Frank, Varun Chandola, Nils Napp, Olga Wodo, and Jaroslaw Zola. "Learning Manifolds from Dynamic Process Data." Algorithms 13, no. 2 (January 21, 2020): 30. http://dx.doi.org/10.3390/a13020030.

Abstract:
Scientific data, generated by computational models or from experiments, are typically results of nonlinear interactions among several latent processes. Such datasets are typically high-dimensional and exhibit strong temporal correlations. Better understanding of the underlying processes requires mapping such data to a low-dimensional manifold where the dynamics of the latent processes are evident. While nonlinear spectral dimensionality reduction methods, e.g., Isomap, and their scalable variants, are conceptually fit candidates for obtaining such a mapping, the presence of the strong temporal correlation in the data can significantly impact these methods. In this paper, we first show why such methods fail when dealing with dynamic process data. A novel method, Entropy-Isomap, is proposed to handle this shortcoming. We demonstrate the effectiveness of the proposed method in the context of understanding the fabrication process of organic materials. The resulting low-dimensional representation correctly characterizes the process control variables and allows for informative visualization of the material morphology evolution.
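The Isomap pipeline the abstract builds on — neighbourhood graph, geodesic distances, classical MDS — can be sketched compactly. This is plain Isomap, not the proposed Entropy-Isomap (which modifies how the method copes with temporally correlated data); the toy helix dataset and the parameters `k` and `out_dim` are illustrative choices.

```python
import numpy as np

def isomap(X, k=5, out_dim=2):
    """Plain Isomap: kNN graph -> geodesic distances -> classical MDS."""
    n = len(X)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # kNN graph: keep each point's k nearest neighbours, others "infinite".
    g = np.full((n, n), np.inf)
    for i in range(n):
        nn = np.argsort(d[i])[: k + 1]   # includes the point itself
        g[i, nn] = d[i, nn]
    g = np.minimum(g, g.T)               # symmetrise the graph
    # Geodesic distances via Floyd-Warshall (fine for small n).
    for m in range(n):
        g = np.minimum(g, g[:, m : m + 1] + g[m : m + 1, :])
    # Classical MDS on the squared geodesic distances.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (g ** 2) @ J
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:out_dim]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Toy "process data": points along a 1-D curve (a helix) embedded in 3-D.
t = np.linspace(0, 3, 40)
X = np.c_[np.cos(t), np.sin(t), t]
Y = isomap(X, k=4, out_dim=2)
print(Y.shape)
```

On this toy curve the first recovered coordinate tracks arc length along the helix, i.e. the latent process variable — the kind of low-dimensional characterisation the paper seeks for fabrication-process data.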
50

Haga, Takeshi, Hiroshi Kera, and Kazuhiko Kawamoto. "Sequential Variational Autoencoder with Adversarial Classifier for Video Disentanglement." Sensors 23, no. 5 (February 24, 2023): 2515. http://dx.doi.org/10.3390/s23052515.

Abstract:
In this paper, we propose a sequential variational autoencoder for video disentanglement, which is a representation learning method that can be used to separately extract static and dynamic features from videos. Building sequential variational autoencoders with a two-stream architecture induces inductive bias for video disentanglement. However, our preliminary experiment demonstrated that the two-stream architecture is insufficient for video disentanglement because static features frequently contain dynamic features. Additionally, we found that dynamic features are not discriminative in the latent space. To address these problems, we introduced an adversarial classifier using supervised learning into the two-stream architecture. The strong inductive bias through supervision separates dynamic features from static features and yields discriminative representations of the dynamic features. Through a comparison with other sequential variational autoencoders, we qualitatively and quantitatively demonstrate the effectiveness of the proposed method on the Sprites and MUG datasets.
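The static/dynamic decomposition that the two-stream architecture targets can be shown in its crudest possible form: time-average the per-frame features for a "static" code and keep the per-frame residuals as the "dynamic" code. This toy split is only an analogy — the paper learns the split with a sequential VAE, and its adversarial classifier is what penalises static (identity) information leaking into the dynamic stream, neither of which is modelled here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy per-frame features for one "video": T frames, feature dimension d.
# The shared offset plays the role of static content; the per-frame noise
# plays the role of motion.
T, d = 12, 6
frames = rng.normal(size=(T, d)) + 3.0 * rng.normal(size=(1, d))

# Crude two-stream split: time-average as the static code, per-frame
# deviations as the dynamic code.
static = frames.mean(axis=0, keepdims=True)
dynamic = frames - static

# The two streams together reconstruct the input exactly in this toy case,
# and the dynamic stream is zero-mean, i.e. carries no constant content.
print(np.allclose(static + dynamic, frames))
```

The failure mode the abstract reports — static features contaminated by dynamics and vice versa — corresponds here to a split where `dynamic` retains a nonzero mean; the supervised adversarial classifier is the paper's mechanism for driving that leakage out.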
