Journal articles on the topic "Learned representation"

To see the other types of publications on this topic, follow the link: Learned representation.

Create a reference in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 journal articles for your research on the topic "Learned representation".

Next to every source in the list of references, there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever such details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kalm, Kristjan, and Dennis Norris. "Sequence learning recodes cortical representations instead of strengthening initial ones." PLOS Computational Biology 17, no. 5 (May 24, 2021): e1008969. http://dx.doi.org/10.1371/journal.pcbi.1008969.

Abstract:
We contrast two computational models of sequence learning. The associative learner posits that learning proceeds by strengthening existing association weights. Alternatively, recoding posits that learning creates new and more efficient representations of the learned sequences. Importantly, both models propose that humans act as optimal learners but capture different statistics of the stimuli in their internal model. Furthermore, these models make dissociable predictions as to how learning changes the neural representation of sequences. We tested these predictions by using fMRI to extract neural activity patterns from the dorsal visual processing stream during a sequence recall task. We observed that only the recoding account can explain the similarity of neural activity patterns, suggesting that participants recode the learned sequences using chunks. We show that associative learning can theoretically store only a very limited number of overlapping sequences, such as are common in ecological working memory tasks, and hence an efficient learner should recode initial sequence representations.
2

Williamson, James R. "How is representation learned?" Behavioral and Brain Sciences 21, no. 4 (August 1998): 484. http://dx.doi.org/10.1017/s0140525x9843125x.

Abstract:
Edelman's memory-based approach to visual representation is preferable to parts-based alternatives. However, the existing algorithms for learning the shape prototypes are biologically implausible because they are nonlocal and nonconstructive. An alternative learning algorithm constructs a mixture model of prototypes on-line using only local information; it is more biologically plausible and may perform sufficiently well.
3

Yue, Zhihan, Yujing Wang, Juanyong Duan, Tianmeng Yang, Congrui Huang, Yunhai Tong, and Bixiong Xu. "TS2Vec: Towards Universal Representation of Time Series." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8980–87. http://dx.doi.org/10.1609/aaai.v36i8.20881.

Abstract:
This paper presents TS2Vec, a universal framework for learning representations of time series at an arbitrary semantic level. Unlike existing methods, TS2Vec performs contrastive learning in a hierarchical way over augmented context views, which enables a robust contextual representation for each timestamp. Furthermore, to obtain the representation of an arbitrary sub-sequence in the time series, we can apply a simple aggregation over the representations of the corresponding timestamps. We conduct extensive experiments on time series classification tasks to evaluate the quality of time series representations. As a result, TS2Vec achieves significant improvement over existing SOTAs of unsupervised time series representation on 125 UCR datasets and 29 UEA datasets. The learned timestamp-level representations also achieve superior results in time series forecasting and anomaly detection tasks. A linear regression trained on top of the learned representations outperforms previous SOTAs in time series forecasting. Furthermore, we present a simple way to apply the learned representations to unsupervised anomaly detection, which establishes SOTA results in the literature. The source code is publicly available at https://github.com/yuezhihan/ts2vec.
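The aggregation step the abstract describes is simple enough to sketch. A minimal illustration, assuming element-wise max pooling as the aggregation (one common choice; the shapes and names here are hypothetical, not taken from the TS2Vec code):

```python
import numpy as np

def subsequence_repr(timestamp_reprs: np.ndarray, start: int, end: int) -> np.ndarray:
    """Collapse per-timestamp representations (T, D) into a single vector
    for the sub-sequence [start, end) via element-wise max pooling."""
    return timestamp_reprs[start:end].max(axis=0)

reprs = np.random.randn(100, 320)              # 100 timestamps, 320-d representations
window_repr = subsequence_repr(reprs, 10, 50)  # representation of one sub-sequence
series_repr = subsequence_repr(reprs, 0, 100)  # instance-level representation
```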
4

Mu, Shanlei, Yaliang Li, Wayne Xin Zhao, Siqing Li, and Ji-Rong Wen. "Knowledge-Guided Disentangled Representation Learning for Recommender Systems." ACM Transactions on Information Systems 40, no. 1 (January 31, 2022): 1–26. http://dx.doi.org/10.1145/3464304.

Abstract:
In recommender systems, it is essential to understand the underlying factors that affect user-item interaction. Recently, several studies have utilized disentangled representation learning to discover such hidden factors from user-item interaction data, which shows promising results. However, without any external guidance signal, the learned disentangled representations lack clear meanings and are prone to the data sparsity issue. In light of these challenges, we study how to leverage a knowledge graph (KG) to guide disentangled representation learning in recommender systems. The purpose of incorporating a KG is twofold: making the disentangled representations interpretable and resolving the data sparsity issue. However, it is not straightforward to incorporate a KG for improving disentangled representations, because a KG has very different data characteristics compared with user-item interactions. We propose a novel Knowledge-guided Disentangled Representations approach (KDR) that utilizes a KG to guide disentangled representation learning in recommender systems. The basic idea is to first learn more interpretable disentangled dimensions (explicit disentangled representations) based on the structural KG, and then align implicit disentangled representations learned from user-item interaction with the explicit disentangled representations. We design a novel alignment strategy based on mutual information maximization. It enables the KG information to guide the implicit disentangled representation learning, and such learned disentangled representations will correspond to semantic information derived from the KG. Finally, the fused disentangled representations are optimized to improve the recommendation performance. Extensive experiments on three real-world datasets demonstrate the effectiveness of the proposed model in terms of both performance and interpretability.
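The alignment strategy is described as mutual information maximization between implicit and explicit disentangled representations. A standard way to realize such an objective is an InfoNCE-style contrastive bound; the sketch below is that generic construction, not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def infonce_alignment(implicit: torch.Tensor, explicit: torch.Tensor,
                      temperature: float = 0.1) -> torch.Tensor:
    """Contrastive lower bound on the mutual information between the two
    representation sets: matched pairs are positives, all other pairs in
    the batch serve as negatives."""
    z1 = F.normalize(implicit, dim=-1)   # (batch, dim)
    z2 = F.normalize(explicit, dim=-1)   # (batch, dim)
    logits = z1 @ z2.t() / temperature   # pairwise similarities
    targets = torch.arange(z1.size(0))   # row i should match column i
    return F.cross_entropy(logits, targets)

loss = infonce_alignment(torch.randn(64, 32), torch.randn(64, 32))
```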
5

Mel, Bartlett W., and József Fiser. "Minimizing Binding Errors Using Learned Conjunctive Features." Neural Computation 12, no. 4 (April 1, 2000): 731–62. http://dx.doi.org/10.1162/089976600300015574.

Abstract:
We have studied some of the design trade-offs governing visual representations based on spatially invariant conjunctive feature detectors, with an emphasis on the susceptibility of such systems to false-positive recognition errors—Malsburg's classical binding problem. We begin by deriving an analytical model that makes explicit how recognition performance is affected by the number of objects that must be distinguished, the number of features included in the representation, the complexity of individual objects, and the clutter load, that is, the amount of visual material in the field of view in which multiple objects must be simultaneously recognized, independent of pose, and without explicit segmentation. Using the domain of text to model object recognition in cluttered scenes, we show that with corrections for the nonuniform probability and nonindependence of text features, the analytical model achieves good fits to measured recognition rates in simulations involving a wide range of clutter loads, word sizes, and feature counts. We then introduce a greedy algorithm for feature learning, derived from the analytical model, which grows a representation by choosing those conjunctive features that are most likely to distinguish objects from the cluttered backgrounds in which they are embedded. We show that the representations produced by this algorithm are compact, decorrelated, and heavily weighted toward features of low conjunctive order. Our results provide a more quantitative basis for understanding when spatially invariant conjunctive features can support unambiguous perception in multiobject scenes, and lead to several insights regarding the properties of visual representations optimized for specific recognition tasks.
6

Sun, Jingyuan, Shaonan Wang, Jiajun Zhang, and Chengqing Zong. "Towards Sentence-Level Brain Decoding with Distributed Representations." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7047–54. http://dx.doi.org/10.1609/aaai.v33i01.33017047.

Abstract:
Decoding human brain activities based on linguistic representations has been actively studied in recent years. However, most previous studies exclusively focus on word-level representations, and little is known about decoding whole sentences from brain activation patterns. This work is our effort to bridge the gap. In this paper, we build decoders to associate brain activities with sentence stimuli via distributed representations, the currently dominant sentence representation approach in natural language processing (NLP). We carry out a systematic evaluation, covering both widely-used baselines and state-of-the-art sentence representation models. We demonstrate how well different types of sentence representations decode the brain activation patterns and give empirical explanations of the performance differences. Moreover, to explore how sentences are neurally represented in the brain, we further compare each sentence representation's correspondence to different brain areas associated with high-level cognitive functions. We find that the supervised structured representation models most accurately probe the language atlas of the human brain. To the best of our knowledge, this work is the first comprehensive evaluation of distributed sentence representations for brain decoding. We hope this work can contribute to decoding brain activities with NLP representation models and to understanding how linguistic items are neurally represented.
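As a rough picture of what such a decoder looks like, here is a generic sketch: ridge regression from voxel activations to sentence embeddings, a common choice in this literature (the shapes and the regression itself are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np
from sklearn.linear_model import Ridge

brain = np.random.randn(240, 5000)      # hypothetical: 240 sentences x 5000 voxels
embeddings = np.random.randn(240, 300)  # hypothetical: 300-d sentence representations

# Fit a linear map from brain activity to the distributed representation space,
# then predict embeddings for held-out sentences and rank candidates by similarity.
decoder = Ridge(alpha=1.0).fit(brain[:200], embeddings[:200])
predicted = decoder.predict(brain[200:])
```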
7

Elio, Renée. "Representation of Similar Well-Learned Cognitive Procedures." Cognitive Science 10, no. 1 (January 1986): 41–73. http://dx.doi.org/10.1207/s15516709cog1001_2.
8

Partarakis, Nikos, Voula Doulgeraki, Effie Karuzaki, George Galanakis, Xenophon Zabulis, Carlo Meghini, Valentina Bartalesi, and Daniele Metilli. "A Web-Based Platform for Traditional Craft Documentation." Multimodal Technologies and Interaction 6, no. 5 (May 10, 2022): 37. http://dx.doi.org/10.3390/mti6050037.

Abstract:
A web-based authoring platform for the representation of traditional crafts is proposed. This platform is rooted in a systematic method for craft representation, the adoption of knowledge and representation standards from the cultural heritage (CH) domain, and the integration of outcomes from advanced digitization techniques. In this paper, we present the implementation of this method in an online, collaborative documentation platform where digital assets are curated into digitally preservable craft representations. The approach is demonstrated through the representation of three traditional crafts as use cases, and the lessons learned from this endeavor are presented.
9

Wang, Ke, Jiayong Liu, and Jing-Yan Wang. "Learning Domain-Independent Deep Representations by Mutual Information Minimization." Computational Intelligence and Neuroscience 2019 (June 16, 2019): 1–14. http://dx.doi.org/10.1155/2019/9414539.

Abstract:
Domain transfer learning aims to learn common data representations from a source domain and a target domain so that the source domain data can help the classification of the target domain. Conventional transfer representation learning imposes the distributions of source and target domain representations to be similar, which heavily relies on the characterization of the distributions of domains and the distribution matching criteria. In this paper, we propose a novel framework for domain transfer representation learning. Our motive is to make the learned representations of data points independent of the domains to which they belong. In other words, from an optimal cross-domain representation of a data point, it is difficult to tell which domain it is from. In this way, the learned representations can be generalized to different domains. To measure the dependency between the representations and the domains to which the data points belong, we propose to use the mutual information between the representations and the domain-belonging indicators. By minimizing such mutual information, we learn representations which are independent of domains. We build a classwise deep convolutional network model as a representation model and maximize the margin of each data point of the corresponding class, which is defined over the intraclass and interclass neighborhood. To learn the parameters of the model, we construct a unified minimization problem where the margins are maximized while the representation-domain mutual information is minimized. In this way, we learn representations which are not only discriminative but also independent of domains. An iterative algorithm based on the Adam optimization method is proposed to solve the minimization problem and learn the classwise deep model parameters and the cross-domain representations simultaneously. Extensive experiments over benchmark datasets show its effectiveness and advantage over existing domain transfer learning methods.
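The paper's central quantity, the mutual information between a representation and its domain indicator, is typically handled through a tractable proxy. The sketch below uses a domain classifier's negative cross-entropy as such a proxy (a generic variational-style bound, not the authors' exact estimator or training loop):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
domain_clf = nn.Linear(32, 2)            # tries to recover the domain indicator

x = torch.randn(64, 128)                 # a batch mixing both domains
domain = torch.randint(0, 2, (64,))      # 0 = source, 1 = target

z = encoder(x)
# The better the domain can be predicted from z, the more domain information
# z carries; an adversarial scheme lets the classifier sharpen this estimate
# while the encoder minimizes it (alongside its discriminative task loss).
mi_proxy = -F.cross_entropy(domain_clf(z), domain)
```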
10

Nazarov, Aleksei. "A radically emergentist approach to phonological features: implications for grammars." Nordlyd 41, no. 1 (January 21, 2015): 21. http://dx.doi.org/10.7557/12.3253.

Abstract:
Phonological features are often assumed to be innate (Chomsky & Halle 1968) or learned as a prerequisite for learning grammar (Dresher 2013). In this paper, I show an alternative approach: features are learned in parallel with grammar. This allows for addressing an interesting question: is it really optimal that the phonological grammar only use phonological features to refer to segmental material (Chomsky & Halle 1968), or could it be more advantageous for the grammar to refer to segmental material on more than one level of representation? The learner considered here finds that it is only optimal for the grammar to use phonological features to refer to multiple segments in the same pattern (e.g., the class of nasals), but when a pattern refers to a single segment, it may be at least equally good for the grammar to refer to this single segment as a bare segment label (for instance, [m] instead of [labial, nasal]). In this way, the grammar uses different kinds of representational units (features and non-features) for the same sound – which mimics models with multiple layers of representation (such as Goldrick 2001, Boersma 2007).
11

Rives, Alexander, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, et al. "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences." Proceedings of the National Academy of Sciences 118, no. 15 (April 5, 2021): e2016239118. http://dx.doi.org/10.1073/pnas.2016239118.

Abstract:
In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised learning has led to major advances in representation learning and statistical generation. In the life sciences, the anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity. The resulting model contains information about biological properties in its representations. The representations are learned from sequence data alone. The learned representation space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and can be identified by linear projections. Representation learning produces features that generalize across a range of applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and improving state-of-the-art features for long-range contact prediction.
12

Reinert, Sandra, Mark Hübener, Tobias Bonhoeffer, and Pieter M. Goltstein. "Mouse prefrontal cortex represents learned rules for categorization." Nature 593, no. 7859 (April 21, 2021): 411–17. http://dx.doi.org/10.1038/s41586-021-03452-z.

Abstract:
The ability to categorize sensory stimuli is crucial for an animal's survival in a complex environment. Memorizing categories instead of individual exemplars enables greater behavioural flexibility and is computationally advantageous. Neurons that show category selectivity have been found in several areas of the mammalian neocortex [1–4], but the prefrontal cortex seems to have a prominent role [4,5] in this context. Specifically, in primates that are extensively trained on a categorization task, neurons in the prefrontal cortex rapidly and flexibly represent learned categories [6,7]. However, how these representations first emerge in naive animals remains unexplored, leaving it unclear whether flexible representations are gradually built up as part of semantic memory or assigned more or less instantly during task execution [8,9]. Here we investigate the formation of a neuronal category representation throughout the entire learning process by repeatedly imaging individual cells in the mouse medial prefrontal cortex. We show that mice readily learn rule-based categorization and generalize to novel stimuli. Over the course of learning, neurons in the prefrontal cortex display distinct dynamics in acquiring category selectivity and are differentially engaged during a later switch in rules. A subset of neurons selectively and uniquely respond to categories and reflect generalization behaviour. Thus, a category representation in the mouse prefrontal cortex is gradually acquired during learning rather than recruited ad hoc. This gradual process suggests that neurons in the medial prefrontal cortex are part of a specific semantic memory for visual categories.
13

Zhu, Yi, Lei Li, and Xindong Wu. "Stacked Convolutional Sparse Auto-Encoders for Representation Learning." ACM Transactions on Knowledge Discovery from Data 15, no. 2 (April 2021): 1–21. http://dx.doi.org/10.1145/3434767.

Abstract:
Deep learning seeks to achieve excellent performance for representation learning in image datasets. However, supervised deep learning models such as convolutional neural networks require a large number of labeled images, which is intractable in many applications, while unsupervised deep learning models like the stacked denoising auto-encoder cannot employ label information. Meanwhile, the redundancy of image data incurs performance degradation in representation learning for the aforementioned models. To address these problems, we propose a semi-supervised deep learning framework called the stacked convolutional sparse auto-encoder, which can learn robust and sparse representations from image data with fewer labeled records. More specifically, the framework is constructed by stacking layers. In each layer, higher-layer feature representations are generated from features of lower layers in a convolutional way with kernels learned by a sparse auto-encoder. Meanwhile, to solve the data redundancy problem, the Reconstruction Independent Component Analysis algorithm is designed to train on patches for sphering the input data. The label information is encoded using a softmax regression model for semi-supervised learning. With this framework, higher-level representations are learned by layers mapping from the image data. It can boost the performance of subsequent base classifiers such as support vector machines. Extensive experiments demonstrate the superior classification performance of our framework compared to several state-of-the-art representation learning methods.
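The sparsity of the learned kernels comes from the sparse auto-encoder component. The classic sparsity penalty for such auto-encoders, shown below as a sketch, is the KL divergence between a target activation rate and each hidden unit's mean activation (the paper may use a different penalty; this is the textbook form):

```python
import torch

def kl_sparsity_penalty(hidden: torch.Tensor, rho: float = 0.05) -> torch.Tensor:
    """KL divergence between target activation rate rho and the observed mean
    activation rho_hat of each hidden unit; adding this to the reconstruction
    loss pushes most units toward near-zero activity."""
    rho_hat = hidden.mean(dim=0).clamp(1e-6, 1 - 1e-6)   # avoid log(0)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

hidden = torch.sigmoid(torch.randn(32, 100))  # encoder activations (batch, units)
penalty = kl_sparsity_penalty(hidden)
```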
14

Jafariakinabad, Fereshteh, and Kien A. Hua. "A Self-Supervised Representation Learning of Sentence Structure for Authorship Attribution." ACM Transactions on Knowledge Discovery from Data 16, no. 4 (August 31, 2022): 1–16. http://dx.doi.org/10.1145/3491203.

Abstract:
The syntactic structure of sentences in a document substantially informs about its authorial writing style. Sentence representation learning has been widely explored in recent years and it has been shown that it improves the generalization of different downstream tasks across many domains. Even though utilizing probing methods in several studies suggests that these learned contextual representations implicitly encode some amount of syntax, explicit syntactic information further improves the performance of deep neural models in the domain of authorship attribution. These observations have motivated us to investigate the explicit representation learning of syntactic structure of sentences. In this article, we propose a self-supervised framework for learning structural representations of sentences. The self-supervised network contains two components: a lexical sub-network and a syntactic sub-network, which take the sequence of words and their corresponding structural labels as the input, respectively. Due to the n-to-1 mapping of words to their structural labels, each word will be embedded into a vector representation which mainly carries structural information. We evaluate the learned structural representations of sentences using different probing tasks, and subsequently utilize them in the authorship attribution task. Our experimental results indicate that the structural embeddings significantly improve the classification tasks when concatenated with the existing pre-trained word embeddings.
15

Zabulis, Xenophon, Nikolaos Partarakis, Carlo Meghini, Arnaud Dubois, Sotiris Manitsaris, Hansgeorg Hauser, Nadia Magnenat Thalmann, et al. "A Representation Protocol for Traditional Crafts." Heritage 5, no. 2 (March 30, 2022): 716–41. http://dx.doi.org/10.3390/heritage5020040.

Abstract:
A protocol for the representation of traditional crafts and the tools to implement this are proposed. The proposed protocol is a method for the systematic collection and organization of digital assets and knowledge, their representation into a formal model, and their utilization for research, education, and preservation. A set of digital tools accompanies this protocol that enables the online curation of craft representations. The proposed approach was elaborated and evaluated with craft practitioners in three case studies. Lessons learned are shared and an outlook for future work is provided.
16

Liu, Hao, Bin Wang, Zhimin Bao, Mobai Xue, Sheng Kang, Deqiang Jiang, Yinsong Liu, and Bo Ren. "Perceiving Stroke-Semantic Context: Hierarchical Contrastive Learning for Robust Scene Text Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1702–10. http://dx.doi.org/10.1609/aaai.v36i2.20062.

Abstract:
We introduce Perceiving Stroke-Semantic Context (PerSec), a new approach to self-supervised representation learning tailored for the Scene Text Recognition (STR) task. Considering that scene text images carry both visual and semantic properties, we equip PerSec with dual context perceivers which can contrast and learn latent representations from the low-level stroke and high-level semantic contextual spaces simultaneously via hierarchical contrastive learning on unlabeled text image data. Experiments in un- and semi-supervised learning settings on STR benchmarks demonstrate that our proposed framework can yield a more robust representation for both CTC-based and attention-based decoders than other contrastive learning methods. To fully investigate the potential of our method, we also collect a dataset of 100 million unlabeled text images, named UTI-100M, covering 5 scenes and 4 languages. By leveraging hundred-million-level unlabeled data, our PerSec shows significant performance improvement when fine-tuning the learned representation on labeled data. Furthermore, we observe that the representation learned by PerSec generalizes well, especially in scenarios with little labeled data.
17

Do, Thanh Ha, Salvatore Tabbone, and Oriol Ramos Terrades. "Sparse representation over learned dictionary for symbol recognition." Signal Processing 125 (August 2016): 36–47. http://dx.doi.org/10.1016/j.sigpro.2015.12.020.
18

Chakraborti, Tapabrata, Brendan McCane, Steven Mills, and Umapada Pal. "Distance Metric Learned Collaborative Representation Classifier (DML-CRC)." IEEE Letters of the Computer Society 3, no. 2 (July 1, 2020): 34–37. http://dx.doi.org/10.1109/locs.2020.2997647.
19

Nakisa, Ramin Charles, and Kim Plunkett. "Evolution of a Rapidly Learned Representation for Speech." Language and Cognitive Processes 13, no. 2-3 (June 1998): 105–27. http://dx.doi.org/10.1080/016909698386492.
20

Chandar, Sarath, Mitesh M. Khapra, Hugo Larochelle, and Balaraman Ravindran. "Correlational Neural Networks." Neural Computation 28, no. 2 (February 2016): 257–85. http://dx.doi.org/10.1162/neco_a_00801.

Abstract:
Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)–based approaches and autoencoder (AE)–based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches.
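The defining ingredient here is the explicit correlation term added to the auto-encoder objective. A schematic of that term (variable names and the epsilon stabilizer are illustrative, not the authors' exact code):

```python
import torch

def view_correlation(h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
    """Per-dimension Pearson correlation between the projections of two views,
    summed over dimensions. CorrNet-style objectives maximize this, i.e. the
    scaled term is subtracted from the reconstruction losses."""
    h1 = h1 - h1.mean(dim=0)
    h2 = h2 - h2.mean(dim=0)
    cov = (h1 * h2).sum(dim=0)
    denom = torch.sqrt((h1 ** 2).sum(dim=0) * (h2 ** 2).sum(dim=0)) + 1e-8
    return (cov / denom).sum()

# Schematic total loss: self- and cross-reconstruction minus the correlation term.
# loss = rec_x + rec_y + rec_cross - lam * view_correlation(h_x, h_y)
```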
21

Liu, Dong, Yan Ru, Qinpeng Li, Shibin Wang, and Jianwei Niu. "Semisupervised Community Preserving Network Embedding with Pairwise Constraints." Complexity 2020 (November 10, 2020): 1–14. http://dx.doi.org/10.1155/2020/7953758.

Abstract:
Network embedding aims to learn the low-dimensional representations of nodes in networks. It preserves the structure and internal attributes of the networks while representing nodes as low-dimensional dense real-valued vectors. These vectors are used as inputs of machine learning algorithms for network analysis tasks such as node clustering, classification, link prediction, and network visualization. Network embedding algorithms that consider the community structure impose a higher level of constraint on the similarity of nodes and make the learned node embedding results more discriminative. However, the existing network representation learning algorithms are mostly unsupervised models; the pairwise constraint information, which represents community membership, is not effectively utilized to obtain node embedding results that are more consistent with prior knowledge. This paper proposes a semisupervised modularized nonnegative matrix factorization model, SMNMF, which preserves the community structure for network embedding; the pairwise constraint (must-link and cannot-link) information is effectively fused with the adjacency matrix and node similarity matrix of the network so that the node representations learned by the model are more interpretable. Experimental results on eight real network datasets show that, compared with representative network embedding methods, the node representations learned after incorporating the pairwise constraints achieve higher accuracy in the node clustering task, and the results of link prediction and network visualization tasks indicate that the semisupervised model SMNMF is more discriminative than unsupervised ones.
22

Gao, Ruiqi, Jianwen Xie, Siyuan Huang, Yufan Ren, Song-Chun Zhu, and Ying Nian Wu. "Learning V1 Simple Cells with Vector Representation of Local Content and Matrix Representation of Local Motion." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6674–84. http://dx.doi.org/10.1609/aaai.v36i6.20622.

Abstract:
This paper proposes a representational model for image pairs such as consecutive video frames that are related by local pixel displacements, in the hope that the model may shed light on motion perception in primary visual cortex (V1). The model couples the following two components: (1) the vector representations of local contents of images and (2) the matrix representations of local pixel displacements caused by the relative motions between the agent and the objects in the 3D scene. When the image frame undergoes changes due to local pixel displacements, the vectors are multiplied by the matrices that represent the local displacements. Thus the vector representation is equivariant as it varies according to the local displacements. Our experiments show that our model can learn Gabor-like filter pairs of quadrature phases. The profiles of the learned filters match those of simple cells in Macaque V1. Moreover, we demonstrate that the model can learn to infer local motions in either a supervised or unsupervised manner. With such a simple model, we achieve competitive results on optical flow estimation.
23

Bordag, Denisa, Amit Kirschenbaum, Maria Rogahn, Andreas Opitz, and Erwin Tschirner. "Semantic Representation of Newly Learned L2 Words and Their Integration in the L2 Lexicon." Studies in Second Language Acquisition 39, no. 1 (March 15, 2016): 197–212. http://dx.doi.org/10.1017/s0272263116000048.

Abstract:
The present semantic priming study explores the integration of newly learnt L2 German words into the L2 semantic network of German advanced learners. It provides additional evidence in support of earlier findings reporting semantic inhibition effects for emergent representations. An inhibitory mechanism is proposed that temporarily decreases the resting levels of the representations with which the new representation is linked and thus enables its selection despite its low resting level.
24

Xu, Joseph, and John Laird. "Combining Learned Discrete and Continuous Action Models." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1449–54. http://dx.doi.org/10.1609/aaai.v25i1.7833.

Abstract:
Action modeling is an important skill for agents that must perform tasks in novel domains. Previous work on action modeling has focused on learning STRIPS operators in discrete, relational domains. There has also been a separate vein of work in continuous function approximation for use in optimal control in robotics. Most real-world domains are grounded in continuous dynamics but also exhibit emergent regularities at an abstract relational level of description. These two levels of regularity are often difficult to capture using a single action representation and learning method. In this paper we describe a system that combines discrete and continuous action modeling techniques in the Soar cognitive architecture. Our system accepts a continuous state representation from the environment and derives a relational state on top of it using spatial relations. The dynamics over each representation are learned separately using two simple instance-based algorithms. The predictions from the individual models are then combined in a way that takes advantage of the information captured by each representation. We empirically show that this combined model is more accurate and generalizable than each of the individual models in a spatial navigation domain.
25

Meara, Paul, and Stephen Ingle. "The formal representation of words in an L2 speaker's lexicon." Interlanguage studies bulletin (Utrecht) 2, no. 2 (December 1986): 160–71. http://dx.doi.org/10.1177/026765838600200203.

Abstract:
This paper reports an analysis of errors made by English-speaking learners of French. Forty learners learned a set of French words, and were subsequently tested in their ability to produce a correct phonetic form for these words. Nearly two-thirds of the attempts were incorrect, but a detailed analysis of these incorrect forms showed that not all parts of the target form were equally liable to error. Initial consonants are particularly stable, while subsequent parts of words are not reliably recalled. These results share some similarities with studies of slips of the tongue in English.
26

Mežnar, Sebastian, Nada Lavrač, and Blaž Škrlj. "Transfer Learning for Node Regression Applied to Spreading Prediction." Complex Systems 30, no. 4 (December 15, 2021): 457–81. http://dx.doi.org/10.25088/complexsystems.30.4.457.

Abstract:
Understanding how information propagates in real-life complex networks yields a better understanding of dynamic processes such as misinformation or epidemic spreading. The recently introduced branch of machine learning methods for learning node representations offers many novel applications, one of them being the task of spreading prediction addressed in this paper. We explore the utility of the state-of-the-art node representation learners when used to assess the effects of spreading from a given node, estimated via extensive simulations. Further, as many real-life networks are topologically similar, we systematically investigate whether the learned models generalize to previously unseen networks, showing that in some cases very good model transfer can be obtained. This paper is one of the first to explore transferability of the learned representations for the task of node regression; we show there exist pairs of networks with similar structure between which the trained models can be transferred (zero-shot) and demonstrate their competitive performance. To our knowledge, this is one of the first attempts to evaluate the utility of zero-shot transfer for the task of node regression.
27

Li, Xin, Feng Xu, Runliang Xia, Xin Lyu, Hongmin Gao, and Yao Tong. "Hybridizing Cross-Level Contextual and Attentive Representations for Remote Sensing Imagery Semantic Segmentation." Remote Sensing 13, no. 15 (July 29, 2021): 2986. http://dx.doi.org/10.3390/rs13152986.

Abstract:
Semantic segmentation of remote sensing imagery is a fundamental task in intelligent interpretation. Since deep convolutional neural networks (DCNNs) have shown considerable ability in learning implicit representations from data, numerous works in recent years have transferred DCNN-based models to remote sensing data analysis. However, wide-range observation areas, complex and diverse objects, and varying illumination and imaging angles make pixels easily confused, leading to undesirable results. Therefore, a remote sensing imagery semantic segmentation neural network, named HCANet, is proposed to generate representative and discriminative representations for dense predictions. HCANet hybridizes cross-level contextual and attentive representations to emphasize the distinguishability of learned features. First of all, a cross-level contextual representation module (CCRM) is devised to exploit and harness the superpixel contextual information. Moreover, a hybrid representation enhancement module (HREM) is designed to fuse cross-level contextual and self-attentive representations flexibly. Furthermore, the decoder incorporates a DUpsampling operation to boost efficiency losslessly. Extensive experiments were implemented on the Vaihingen and Potsdam benchmarks. The results indicate that HCANet achieves excellent performance on overall accuracy and mean intersection over union, and the ablation study further verifies the superiority of CCRM.
28

O'Toole, Alice J., Fang Jiang, Hervé Abdi, and James V. Haxby. "Partially Distributed Representations of Objects and Faces in Ventral Temporal Cortex." Journal of Cognitive Neuroscience 17, no. 4 (April 2005): 580–90. http://dx.doi.org/10.1162/0898929053467550.

Abstract:
Object and face representations in ventral temporal (VT) cortex were investigated by combining object confusability data from a computational model of object classification with neural response confusability data from a functional neuroimaging experiment. A pattern-based classification algorithm learned to categorize individual brain maps according to the object category being viewed by the subject. An identical algorithm learned to classify an image-based, view-dependent representation of the stimuli. High correlations were found between the confusability of object categories and the confusability of brain activity maps. This occurred even with the inclusion of multiple views of objects, and when the object classification model was tested with high spatial frequency “line drawings” of the stimuli. Consistent with a distributed representation of objects in VT cortex, the data indicate that object categories with shared image-based attributes have shared neural structure.
29

Löffler, Christoffer, Luca Reeb, Daniel Dzibela, Robert Marzilger, Nicolas Witt, Björn M. Eskofier, and Christopher Mutschler. "Deep Siamese Metric Learning: A Highly Scalable Approach to Searching Unordered Sets of Trajectories." ACM Transactions on Intelligent Systems and Technology 13, no. 1 (February 28, 2022): 1–23. http://dx.doi.org/10.1145/3465057.

Abstract:
This work proposes metric learning for fast similarity-based scene retrieval of unstructured ensembles of trajectory data from large databases. We present a novel representation learning approach using Siamese metric learning that approximates a distance-preserving low-dimensional representation and that learns to estimate reasonable solutions to the assignment problem. To this end, we employ a Temporal Convolutional Network architecture that we extend with a gating mechanism to enable learning from sparse data, leading to solutions to the assignment problem exhibiting varying degrees of sparsity. Our experimental results on professional soccer tracking data provide insights on learned features and embeddings, as well as on generalization, sensitivity, and network architectural considerations. Our low approximation errors for learned representations, together with interactive retrieval times several orders of magnitude smaller, show that we outperform the previous state of the art.
30

Jean, Neal, Sherrie Wang, Anshul Samar, George Azzari, David Lobell, and Stefano Ermon. "Tile2Vec: Unsupervised Representation Learning for Spatially Distributed Data." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3967–74. http://dx.doi.org/10.1609/aaai.v33i01.33013967.

Abstract:
Geospatial analysis lacks methods like the word vector representations and pre-trained networks that significantly boost performance across a wide range of natural language and computer vision tasks. To fill this gap, we introduce Tile2Vec, an unsupervised representation learning algorithm that extends the distributional hypothesis from natural language — words appearing in similar contexts tend to have similar meanings — to spatially distributed data. We demonstrate empirically that Tile2Vec learns semantically meaningful representations for both image and non-image datasets. Our learned representations significantly improve performance in downstream classification tasks and, similarly to word vectors, allow visual analogies to be obtained via simple arithmetic in the latent space.
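The spatial analogue of the distributional hypothesis is usually implemented as a triplet objective over an anchor tile, a nearby tile, and a distant tile; the sketch below assumes a margin form (the margin value and the encoder producing the embeddings are illustrative):

```python
import torch
import torch.nn.functional as F

def tile_triplet_loss(z_anchor: torch.Tensor, z_neighbor: torch.Tensor,
                      z_distant: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Tiles that are spatial neighbors should lie closer in embedding space
    than tiles sampled far away, up to a margin."""
    d_near = F.pairwise_distance(z_anchor, z_neighbor)
    d_far = F.pairwise_distance(z_anchor, z_distant)
    return F.relu(d_near - d_far + margin).mean()

# z_* would come from a CNN encoder applied to the three tiles of each triplet.
loss = tile_triplet_loss(torch.randn(16, 50), torch.randn(16, 50), torch.randn(16, 50))
```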
31

Rajakumar, Alfred, John Rinzel, and Zhe S. Chen. "Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation." Neural Computation 33, no. 10 (September 16, 2021): 2603–45. http://dx.doi.org/10.1162/neco_a_01418.

Abstract:
Recurrent neural networks (RNNs) have been widely used to model sequential neural dynamics ("neural sequences") of cortical circuits in cognitive and motor tasks. Efforts to incorporate biological constraints and Dale's principle will help elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to various input signals and interpolated a time-warped input for sequence representation. Interestingly, a learned sequence can repeat periodically when the RNN evolves beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, with growing or damping modes, together with the RNN's nonlinearity, was adequate to generate a limit cycle attractor. We further examined the stability of dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in excitatory-inhibitory RNNs.
32

Ursino, M., C. Cuppini, and E. Magosso. "A Semantic Model to Study Neural Organization of Language in Bilingualism." Computational Intelligence and Neuroscience 2010 (2010): 1–10. http://dx.doi.org/10.1155/2010/350269.

Abstract:
A neural network model of object semantic representation is used to simulate the learning of new words from a foreign language. The network consists of feature areas, devoted to the description of object properties, and a lexical area, devoted to word representation. Neurons in the feature areas are implemented as Wilson-Cowan oscillators, to allow segmentation of different simultaneous objects via gamma-band synchronization. Excitatory synapses among neurons in the feature and lexical areas are learned, during a training phase, via a Hebbian rule. In this work, we first assume that some words in the first language (L1) and the corresponding object representations are initially learned during a preliminary training phase. Subsequently, second-language (L2) words are learned by simultaneously presenting the new word together with the L1 one. A competitive mechanism between the two words is also implemented by the use of inhibitory interneurons. Simulations show that, after weak training, the L2 word allows retrieval of the object properties but requires engagement of the first language. Conversely, after prolonged training, the L2 word becomes able to retrieve the object per se. In this case, a conflict between words can occur, requiring a higher-level decision mechanism.
33

Yoshida, Tetsuya. "Rectifying the representation learned by Non-negative Matrix Factorization." International Journal of Knowledge-based and Intelligent Engineering Systems 17, no. 4 (November 12, 2013): 279–90. http://dx.doi.org/10.3233/kes-130278.
34

Yanike, Marianna, Sylvia Wirth, and Wendy A. Suzuki. "Representation of Well-Learned Information in the Monkey Hippocampus." Neuron 42, no. 3 (May 2004): 477–87. http://dx.doi.org/10.1016/s0896-6273(04)00193-x.
35

Yin, Haitao. "Sparse representation with learned multiscale dictionary for image fusion." Neurocomputing 148 (January 2015): 600–610. http://dx.doi.org/10.1016/j.neucom.2014.07.003.
36

Karimi, Davood, and Rabab K. Ward. "Sinogram denoising via simultaneous sparse representation in learned dictionaries." Physics in Medicine and Biology 61, no. 9 (April 7, 2016): 3536–53. http://dx.doi.org/10.1088/0031-9155/61/9/3536.
37

Xie, Shufu, Shiguang Shan, Xilin Chen, Xin Meng, and Wen Gao. "Learned local Gabor patterns for face representation and recognition." Signal Processing 89, no. 12 (December 2009): 2333–44. http://dx.doi.org/10.1016/j.sigpro.2009.02.016.
38

Liao, Liang, Jing Xiao, Yating Li, Mi Wang, and Ruimin Hu. "Learned Representation of Satellite Image Series for Data Compression." Remote Sensing 12, no. 3 (February 4, 2020): 497. http://dx.doi.org/10.3390/rs12030497.

Abstract:
Real-time transmission of satellite video data is one of the fundamentals in the applications of video satellites. Making use of historical information to eliminate the long-term background redundancy (LBR) is considered a crucial way to bridge the gap between the compressed data rate and the bandwidth between the satellite and the Earth. The main challenge lies in how to deal with the variant image pixel values caused by changes in shooting conditions while keeping the structure of the same landscape unchanged. In this paper, we propose a representation-learning-based method to model the complex evolution of the landscape appearance under different conditions by making use of the historical image series. Under this representation model, the image is disentangled into a content part and a style part. The former represents the consistent landscape structure, while the latter represents the conditional parameters of the environment. To utilize the knowledge learned from the historical image series, we generate synthetic reference frames for the compression of video frames through image translation by the representation model. The synthetic reference frames can greatly boost the compression efficiency by changing the original intra-frame prediction to inter-frame prediction for the intra-coded picture (I frame). Experimental results show that the proposed representation-learning-based compression method can save an average of 44.22% of bits over HEVC, which is significantly more than that achieved using references generated under the same conditions. Bitrate savings reached 18.07% when applied to satellite video data with arbitrarily collected reference images.
39

Lin, Ancheng, Jun Li, and Zhenyuan Ma. "On Learning and Learned Data Representation by Capsule Networks." IEEE Access 7 (2019): 50808–22. http://dx.doi.org/10.1109/access.2019.2911622.
40

Lv, Yongliang, Yan Zheng, and Jianye Hao. "Opponent modeling with trajectory representation clustering." Intelligence & Robotics 2, no. 2 (2022): 168–79. http://dx.doi.org/10.20517/ir.2022.09.

Abstract:
For a non-stationary opponent in a multi-agent environment, traditional methods model the opponent through its complex information to learn one or more optimal response policies. However, a response policy learned earlier is prone to catastrophic forgetting due to data imbalance in the online-updated replay buffer under non-stationary changes of opponent policies. This paper focuses on how to learn new response policies without forgetting old policies that have already been learned when the opponent policy is constantly changing. We extract representations of opponent policies and make explicit clustering distinctions through a contrastive learning autoencoder. With the idea of balancing the replay buffer, we maintain continuous learning on the trajectory data of all opponent policies that have appeared, to avoid policy forgetting. Finally, we demonstrate the effectiveness of the method in a classical opponent modeling environment (soccer) and show the clustering effect on different opponent policies.
41

Hall, Geoffrey. "Learned Changes in the Sensitivity of Stimulus Representations: Associative and Nonassociative Mechanisms." Quarterly Journal of Experimental Psychology Section B 56, no. 1b (February 2003): 43–55. http://dx.doi.org/10.1080/02724990244000151.

Abstract:
Central to associative learning theory is the proposal that the concurrent activation of a pair of event representations will establish or strengthen a link between them. Associative theorists have devoted much energy to establishing what representations are involved in any given learning paradigm and the rules that determine the degree to which the link is strengthened. They have paid less attention to the question of what determines that a representation will be activated, assuming, for the case of classical conditioning, that presentation of an appropriately intense stimulus from an appropriate modality will be enough. But this assumption is unjustified. I present the results of experiments on the effects of stimulus exposure in rats that suggest that mere exposure to a stimulus can influence its perceptual effectiveness—that the ability of a stimulus to activate its representation can be changed by experience. This conclusion is of interest for two reasons. First, it supplies a direct explanation for the phenomenon of perceptual learning—the enhancement of stimulus discriminability produced by some forms of stimulus exposure. Second, it poses a theoretical challenge in that it seems to require the existence of a learning mechanism outside the scope of those envisaged by current formal theories of associative learning. I offer some speculations as to how this mechanism might be incorporated into such theories.
42

Luo, Dezhao, Chang Liu, Yu Zhou, Dongbao Yang, Can Ma, Qixiang Ye, and Weiping Wang. "Video Cloze Procedure for Self-Supervised Spatio-Temporal Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11701–8. http://dx.doi.org/10.1609/aaai.v34i07.6840.

Abstract:
We propose a novel self-supervised method, referred to as Video Cloze Procedure (VCP), to learn rich spatial-temporal representations. VCP first generates “blanks” by withholding video clips and then creates “options” by applying spatio-temporal operations on the withheld clips. Finally, it fills the blanks with “options” and learns representations by predicting the categories of operations applied on the clips. VCP can act as either a proxy task or a target task in self-supervised learning. As a proxy task, it converts rich self-supervised representations into video clip operations (options), which enhances the flexibility and reduces the complexity of representation learning. As a target task, it can assess learned representation models in a uniform and interpretable manner. With VCP, we train spatial-temporal representation models (3D-CNNs) and apply such models on action recognition and video retrieval tasks. Experiments on commonly used benchmarks show that the trained models outperform the state-of-the-art self-supervised models with significant margins.
43

Zhu, Zheng-Mao, Shengyi Jiang, Yu-Ren Liu, Yang Yu, and Kun Zhang. "Invariant Action Effect Model for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9260–68. http://dx.doi.org/10.1609/aaai.v36i8.20913.

Abstract:
Good representations can help RL agents perform concise modeling of their surroundings and thus support effective decision-making in complex environments. Previous methods learn good representations by imposing extra constraints on dynamics. However, from the causal perspective, the causation between an action and its effect is not fully considered in those methods, which leads to the ignorance of the underlying relations among the action effects on the transitions. Based on the intuition that the same action always causes similar effects among different states, we induce such causation by taking the invariance of action effects among states as the relation. By explicitly utilizing such invariance, in this paper, we show that a better representation can be learned, potentially improving the sample efficiency and the generalization ability of the learned policy. We propose the Invariant Action Effect Model (IAEM) to capture the invariance in action effects, where the effect of an action is represented as the residual of representations from neighboring states. IAEM is composed of two parts: (1) a new contrastive-based loss to capture the underlying invariance of action effects; (2) an individual action effect module with a self-adapted weighting strategy to tackle the corner cases where the invariance does not hold. The extensive experiments on two benchmarks, i.e., Grid-World and Atari, show that the representations learned by IAEM preserve the invariance of action effects. Moreover, with the invariant action effect, IAEM can accelerate the learning process by 1.6x, rapidly generalize to new environments by fine-tuning on a few components, and outperform other dynamics-based representation methods by 1.4x in limited steps.
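The core representational choice, an action's effect as the residual between neighboring state representations, can be written down directly. A sketch with hypothetical dimensions (the contrastive loss and the weighting strategy from the abstract are omitted):

```python
import torch
import torch.nn as nn

phi = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 32))  # state encoder

def action_effect(s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
    """Represent the effect of the action taken in state s as the residual
    between the representations of the two neighboring states."""
    return phi(s_next) - phi(s)

# Invariance intuition: the same action applied in different states should
# produce similar residuals, which a contrastive loss can pull together.
effect = action_effect(torch.randn(1, 8), torch.randn(1, 8))
```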
45

Byrne, Patrick, and Suzanna Becker. "A Principle for Learning Egocentric-Allocentric Transformation." Neural Computation 20, no. 3 (March 2008): 709–37. http://dx.doi.org/10.1162/neco.2007.10-06-361.

Abstract:
Numerous single-unit recording studies have found mammalian hippocampal neurons that fire selectively for the animal's location in space, independent of its orientation. The population of such neurons, commonly known as place cells, is thought to maintain an allocentric, or orientation-independent, internal representation of the animal's location in space and to mediate long-term storage of spatial memories. The fact that spatial information from the environment must reach the brain via sensory receptors in an inherently egocentric, or viewpoint-dependent, fashion leads to the question of how the brain learns to transform egocentric sensory representations into allocentric ones for long-term memory storage. Additionally, if these long-term memory representations of space are to be useful in guiding motor behavior, then the reverse transformation, from allocentric to egocentric coordinates, must also be learned. We propose that orientation-invariant representations can be learned by neural circuits that follow two learning principles: minimization of reconstruction error and maximization of representational temporal inertia. Two different neural network models are presented that adhere to these learning principles, the first by direct optimization through gradient descent and the second using a more biologically realistic circuit based on the restricted Boltzmann machine (Hinton, 2002; Smolensky, 1986). Both models lead to orientation-invariant representations, with the latter demonstrating place-cell-like responses when trained on a linear track environment.
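
The two principles can be sketched as a single objective, with linear encoder/decoder stand-ins and an assumed trade-off weight: reconstruction error plus a penalty on change in the hidden representation across time steps, whose minimization corresponds to maximizing temporal inertia.

import numpy as np

# Sketch of the two learning principles as one objective over a short
# sequence of egocentric inputs x_t: minimize reconstruction error while
# maximizing temporal inertia (i.e., penalizing change in the hidden
# representation h_t between consecutive steps). The linear maps and the
# trade-off weight lam are illustrative stand-ins, not the paper's models.

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((10, 50)) * 0.1   # encoder
W_dec = rng.standard_normal((50, 10)) * 0.1   # decoder
lam = 0.5                                     # inertia weight (assumed)

xs = rng.standard_normal((20, 50))            # a 20-step input sequence

hs = xs @ W_enc.T                             # hidden representations h_t
recon = hs @ W_dec.T                          # reconstructions of x_t

recon_error = np.mean((recon - xs) ** 2)
inertia_penalty = np.mean((hs[1:] - hs[:-1]) ** 2)   # small when h_t is slow

objective = recon_error + lam * inertia_penalty      # to be minimized
print(objective)
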
46

Chen, Yuhao, Alexander Wong, Yuan Fang, Yifan Wu, and Linlin Xu. "Deep Residual Transform for Multi-scale Image Decomposition." Journal of Computational Vision and Imaging Systems 6, no. 1 (January 15, 2021): 1–5. http://dx.doi.org/10.15353/jcvis.v6i1.3537.

Abstract:
Multi-scale image decomposition (MID) is a fundamental task in computer vision and image processing that involves transforming an image into a hierarchical representation comprising different levels of visual granularity, from coarse structures to fine details. A well-engineered MID disentangles the image signal into meaningful components which can be used in a variety of applications such as image denoising, image compression, and object classification. Traditional MID approaches such as wavelet transforms tackle the problem through carefully designed basis functions under rigid decomposition structure assumptions. However, as the information distribution varies from one type of image content to another, rigid decomposition assumptions lead to inefficient representations, i.e., some scales can contain little to no information. To address this issue, we present the Deep Residual Transform (DRT), a data-driven MID strategy in which the input signal is transformed into a hierarchy of non-linear representations at different scales, with each representation independently learned as the representational residual of previous scales at a user-controlled detail level. As such, the proposed DRT progressively disentangles scale information from the original signal by sequentially learning residual representations. The decomposition flexibility of this approach allows for representations highly tailored to specific types of image content, and results in greater representational efficiency and compactness. In this study, we realize the proposed transform by leveraging a hierarchy of sequentially trained autoencoders. To explore the efficacy of the proposed DRT, we leverage two datasets comprising very different types of image content: 1) CelebFaces and 2) Cityscapes. Experimental results show that the proposed DRT achieves highly efficient information decomposition on both datasets despite their very different visual granularity characteristics.
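
The residual decomposition loop can be sketched as follows; to keep the example self-contained, each trained autoencoder is replaced by a rank-k truncated-SVD reconstruction, with ranks and image size chosen arbitrarily:

import numpy as np

# Sketch of the residual decomposition loop in DRT. Each "autoencoder"
# here is replaced by a rank-k truncated-SVD reconstruction so that the
# example stays self-contained; in the paper each level is a trained
# autoencoder. The image, ranks, and number of levels are illustrative.

def low_rank_reconstruct(x, k):
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k]        # coarse approximation of x

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))          # a dummy single-channel image

levels, residual = [], image
for k in (2, 8, 32):                           # coarse -> fine detail levels
    approx = low_rank_reconstruct(residual, k)
    levels.append(approx)                      # representation at this scale
    residual = residual - approx               # what the next level must explain

reconstruction = sum(levels) + residual
print(np.allclose(reconstruction, image))      # the levels sum back to the input
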
47

Li, Jianfei, Yongbin Wang, and Zhulin Tao. "A Rating Prediction Recommendation Model Combined with the Optimizing Allocation for Information Granularity of Attributes." Information 13, no. 1 (January 5, 2022): 21. http://dx.doi.org/10.3390/info13010021.

Abstract:
In recent years, graph neural networks (GNNs) have been demonstrated to be a powerful way to learn from graph data. Existing recommender systems based on implicit factor models mainly use the interaction information between users and items for training and learning. A user–item graph, a user–attribute graph, and an item–attribute graph are constructed according to the interactions between users and items, and the latent factors of users and items can be learned from these graph-structured data. There are many methods for learning the latent factors of users and items, but they do not fully consider the influence of node attribute information on the representation of those latent factors. We propose a rating prediction recommendation model, LNNSR, that utilizes the level of information granularity allocated to each attribute by developing a granular neural network. The granularity distribution proportion weights of each attribute are learned in the granular neural network and then integrated into the latent factor representations of users and items. Thus, we can capture user-embedding and item-embedding representations more accurately, which also provides a reasonable explanation for the recommendation results. Finally, we concatenate the user latent factor embedding and the item latent factor embedding and feed the result into a multi-layer perceptron for rating prediction. Extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed framework.
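
A rough sketch of this pipeline is given below, with a softmax over learned logits standing in for the granular neural network and all shapes chosen for illustration:

import numpy as np

# Rough sketch of the rating-prediction pipeline described above: attribute
# embeddings are mixed by granularity-allocation weights (a softmax over
# learned logits stands in for the granular neural network), added to the
# latent factors, then the user and item vectors are concatenated and fed
# to a small MLP. All shapes and parameters are illustrative.

rng = np.random.default_rng(0)
d = 16                                        # embedding size (assumed)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

user_latent = rng.standard_normal(d)
item_latent = rng.standard_normal(d)
item_attrs = rng.standard_normal((5, d))      # 5 item attribute embeddings
attr_logits = rng.standard_normal(5)          # learned granularity logits

weights = softmax(attr_logits)                # granularity allocation proportions
item_vec = item_latent + weights @ item_attrs # attribute-aware item factor

x = np.concatenate([user_latent, item_vec])   # user/item concatenation
W1, b1 = rng.standard_normal((32, 2 * d)) * 0.1, np.zeros(32)
W2, b2 = rng.standard_normal(32) * 0.1, 0.0

h = np.maximum(W1 @ x + b1, 0.0)              # one hidden ReLU layer
rating = W2 @ h + b2                          # predicted rating (unscaled)
print(rating)
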
48

Schumacher, Eric H., and Eliot Hazeltine. "Hierarchical Task Representation." Current Directions in Psychological Science 25, no. 6 (December 2016): 449–54. http://dx.doi.org/10.1177/0963721416665085.

Abstract:
Human behavior is remarkably complex—even during the performance of relatively simple tasks—yet it is often assumed that learned associations between stimuli and responses provide the representational substrate for action selection. Here, we introduce an alternative framework, called a task file, that includes hierarchical associations between stimulus features, response features, goals, and drives, which may overcome the limitations inherent in the conceptualization of response selection as being based solely on associations between stimuli and responses. We then review evidence from our own experimental research showing that even in the context of performing relatively easy tasks, the stimulus-response-association approach to response selection is inadequate to account for the interactions between discrete responses. Instead, response selection may emerge from competition between linked representations at multiple levels.
49

Scarpetta, Silvia, Zhaoping Li, and John Hertz. "Learning in an Oscillatory Cortical Model." Fractals 11, supp01 (February 2003): 291–300. http://dx.doi.org/10.1142/s0218348x03001951.

Abstract:
We study a model of generalized-Hebbian learning in asymmetric oscillatory neural networks modeling cortical areas such as the hippocampus and olfactory cortex. The learning rule is based on the synaptic plasticity observed experimentally, in particular long-term potentiation and long-term depression of the synaptic efficacies depending on the relative timing of the pre- and postsynaptic activities during learning. The learned memory or representational states can be encoded by both the amplitude and the phase patterns of the oscillating neural populations, enabling more efficient and robust information coding than in conventional models of associative memory or input representation. Depending on the class of nonlinearity of the activation function, the model can function as an associative memory for oscillatory patterns (nonlinearity of class II) or can generalize from or interpolate between the learned states, appropriate for the function of input representation (nonlinearity of class I). In the former case, simulations of the model exhibit a first-order transition between the "disordered" state and the "ordered" memory state.
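
A generic instance of the timing-dependent plasticity such a rule builds on can be sketched with an exponential STDP window; the amplitudes and time constants below are illustrative, not the paper's parameters.

import numpy as np

# Sketch of a timing-dependent Hebbian update of the kind the model builds
# on: the synaptic change depends on the interval dt = t_post - t_pre, with
# potentiation when the presynaptic spike precedes the postsynaptic one and
# depression otherwise. Amplitudes and time constants are illustrative.

a_plus, a_minus = 0.01, 0.012     # LTP / LTD amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0  # time constants in ms (assumed)

def stdp_dw(dt):
    if dt >= 0:                               # pre before post: potentiate
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)  # post before pre: depress

for dt in (-40.0, -10.0, 10.0, 40.0):
    print(f"dt = {dt:+.0f} ms -> dw = {stdp_dw(dt):+.5f}")
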
50

Zhang, Yujia, Lai-Man Po, Xuyuan Xu, Mengyang Liu, Yexin Wang, Weifeng Ou, Yuzhi Zhao, and Wing-Yin Yu. "Contrastive Spatio-Temporal Pretext Learning for Self-Supervised Video Representation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3380–89. http://dx.doi.org/10.1609/aaai.v36i3.20248.

Abstract:
Spatio-temporal representation learning is critical for video self-supervised representation. Recent approaches mainly use contrastive learning and pretext tasks. However, these approaches learn representations by discriminating sampled instances via feature similarity in the latent space while ignoring the intermediate state of the learned representations, which limits the overall performance. In this work, taking the degree of similarity of sampled instances into account as the intermediate state, we propose a novel pretext task: spatio-temporal overlap rate (STOR) prediction. It stems from the observation that humans are capable of discriminating the overlap rates of videos in space and time. This task encourages the model to discriminate the STOR of two generated samples to learn the representations. Moreover, we employ a joint optimization combining pretext tasks with contrastive learning to further enhance spatio-temporal representation learning. We also study the mutual influence of each component in the proposed scheme. Extensive experiments demonstrate that our proposed STOR task can favor both contrastive learning and pretext tasks, and that the joint optimization scheme can significantly improve spatio-temporal representation in video understanding. The code is available at https://github.com/Katou2/CSTP.
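
The STOR target can be sketched as follows; the intersection-over-crop-volume definition used here is an illustrative assumption, since the paper's exact formulation may differ:

import numpy as np

# Sketch of the spatio-temporal overlap rate (STOR) target: sample two
# cubic crops from the same video and compute how much they overlap in
# space and time. An intersection-over-crop-volume ratio is used here as
# an illustrative stand-in for the paper's definition.

rng = np.random.default_rng(0)
T, H, W = 64, 128, 128          # video extent (frames, height, width)
t_len, s_len = 16, 64           # crop size in time and space

def sample_crop(rng):
    t0 = rng.integers(0, T - t_len + 1)
    y0 = rng.integers(0, H - s_len + 1)
    x0 = rng.integers(0, W - s_len + 1)
    return t0, y0, x0

def overlap_1d(a0, b0, length):  # overlap of two equal-length intervals
    return max(0, min(a0, b0) + length - max(a0, b0))

crop_a, crop_b = sample_crop(rng), sample_crop(rng)
inter = (overlap_1d(crop_a[0], crop_b[0], t_len)
         * overlap_1d(crop_a[1], crop_b[1], s_len)
         * overlap_1d(crop_a[2], crop_b[2], s_len))
stor = inter / (t_len * s_len * s_len)   # fraction of a crop's volume shared
print(f"STOR label: {stor:.3f}")
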