Journal articles on the topic "Attention aware"

To see other types of publications on this topic, follow the link: Attention aware.

Cite your source in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Attention aware".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen source in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Wang, Zhibo, Jinxin Ma, Yongquan Zhang, Qian Wang, Ju Ren, and Peng Sun. "Attention-over-Attention Field-Aware Factorization Machine." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6323–30. http://dx.doi.org/10.1609/aaai.v34i04.6101.

Abstract:
Factorization Machine (FM) has been a popular approach in supervised predictive tasks, such as click-through rate prediction and recommender systems, due to its great performance and efficiency. Recently, several variants of FM have been proposed to improve its performance. However, most state-of-the-art prediction algorithms neglect the field information of features, and they also fail to discriminate the importance of feature interactions due to the problem of redundant features. In this paper, we present a novel algorithm called Attention-over-Attention Field-aware Factorization Machine (AoAFFM) for better capturing the characteristics of feature interactions. Specifically, we propose the field-aware embedding layer to exploit the field information of features, and combine it with the attention-over-attention mechanism to learn both feature-level and interaction-level attention to estimate the weight of feature interactions. Experimental results show that the proposed AoAFFM improves FM and FFM by a large margin, and outperforms state-of-the-art algorithms on three public benchmark datasets.
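The interaction-level attention described in this abstract can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch rendering of attention-weighted pairwise interactions over FM-style embeddings; all dimensions and parameters are toy assumptions, and the paper's field-aware embedding layer and second (feature-level) attention stage are omitted.

```python
import torch
import torch.nn.functional as F

# Toy sizes; these are assumptions for illustration, not the paper's setup.
n_features, k = 8, 16
emb = torch.randn(n_features, k)              # one embedding per active feature

# All pairwise element-wise interactions v_i * v_j, as in FM.
idx_i, idx_j = torch.triu_indices(n_features, n_features, offset=1)
inter = emb[idx_i] * emb[idx_j]               # (n_pairs, k)

# Interaction-level attention: score each pair, normalize, pool.
w_att = torch.randn(k)                        # untrained attention parameters
alpha = F.softmax(inter @ w_att, dim=0)       # weight per interaction
pooled = (alpha.unsqueeze(1) * inter).sum(0)  # attention-pooled interactions
prediction = pooled.sum()                     # linear/bias terms omitted
```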
2

Yang, Baosong, Jian Li, Derek F. Wong, Lidia S. Chao, Xing Wang, and Zhaopeng Tu. "Context-Aware Self-Attention Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 387–94. http://dx.doi.org/10.1609/aaai.v33i01.3301387.

Abstract:
The self-attention model has shown its flexibility in parallel computation and its effectiveness in modeling both long- and short-term dependencies. However, it calculates the dependencies between representations without considering the contextual information, which has proven useful for modeling dependencies among neural representations in various natural language tasks. In this work, we focus on improving self-attention networks through capturing the richness of context. To maintain the simplicity and flexibility of the self-attention networks, we propose to contextualize the transformations of the query and key layers, which are used to calculate the relevance between elements. Specifically, we leverage the internal representations that embed both global and deep contexts, thus avoiding reliance on external resources. Experimental results on WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness and universality of the proposed methods. Furthermore, we conducted extensive analyses to quantify how the context vectors participate in the self-attention model.
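A minimal sketch of the core idea, contextualizing the query and key transformations with a global context vector, might look like the following; using the mean of the layer input as context and the parameter shapes are simplifying assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn.functional as F

def context_aware_attention(x, Wq, Wk, Wv, Uq, Uk):
    """Self-attention whose query/key transforms also see a global
    context vector (here simply the mean of the layer input)."""
    c = x.mean(dim=0, keepdim=True)           # (1, d) global context
    q = x @ Wq + c @ Uq                       # contextualized queries
    k = x @ Wk + c @ Uk                       # contextualized keys
    v = x @ Wv
    logits = q @ k.T / k.shape[-1] ** 0.5
    return F.softmax(logits, dim=-1) @ v

d = 32
x = torch.randn(10, d)                        # 10 token representations (toy)
Wq, Wk, Wv, Uq, Uk = (torch.randn(d, d) for _ in range(5))
out = context_aware_attention(x, Wq, Wk, Wv, Uq, Uk)   # (10, d)
```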
3

Vertegaal, Roel, and Jeffrey S. Shell. "Attentive user interfaces: the surveillance and sousveillance of gaze-aware objects." Social Science Information 47, no. 3 (September 2008): 275–98. http://dx.doi.org/10.1177/0539018408092574.

Abstract:
Attentive user interfaces are user interfaces that aim to support users' attentional capacities. By sensing users' attention for objects and people in their everyday environment and by treating user attention as a limited resource, these interfaces avoid today's ubiquitous patterns of interruption. Focusing upon attention as a central interaction channel allows development of more sociable methods of communication and repair with ubiquitous devices. Our methods are analogous to human turn-taking in group communication. Turn-taking improves the user's ability to conduct foreground processing of conversations. Attentive user interfaces bridge the gap between foreground and periphery of user activity in a similar fashion, allowing users to move smoothly in between. The authors present a framework for augmenting user attention through attentive user interfaces. We propose 5 key properties of attentive systems: to (1) sense attention, (2) reason about attention, (3) regulate interactions, (4) communicate attention and (5) augment attention. We conclude with a discussion of privacy considerations of attentive user interfaces.
4

Wu, Haiping, Khimya Khetarpal, and Doina Precup. "Self-Supervised Attention-Aware Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10311–19. http://dx.doi.org/10.1609/aaai.v35i12.17235.

Abstract:
Visual saliency has emerged as a major visualization tool for interpreting deep reinforcement learning (RL) agents. However, much of the existing research uses it as an analysis tool rather than an inductive bias for policy learning. In this work, we use visual attention as an inductive bias for RL agents. We propose a novel self-supervised attention learning approach which can (1) learn to select regions of interest without explicit annotations, and (2) act as a plug-in for existing deep RL methods to improve learning performance. We empirically show that the self-supervised attention-aware deep RL methods outperform the baselines in both rate of convergence and final performance. Furthermore, the proposed self-supervised attention is not tied to specific policies, nor restricted to a specific scene. We posit that the proposed approach is a general self-supervised attention module for multi-task learning and transfer learning, and empirically validate the generalization ability of the proposed method. Finally, we show that our method learns meaningful object keypoints, highlighting improvements both qualitatively and quantitatively.
5

Jian, Muwei, Kin-Man Lam, Junyu Dong, and Linlin Shen. "Visual-Patch-Attention-Aware Saliency Detection." IEEE Transactions on Cybernetics 45, no. 8 (August 2015): 1575–86. http://dx.doi.org/10.1109/tcyb.2014.2356200.

6

Mo, Rongyun, Shenqi Lai, Yan Yan, Zhenhua Chai, and Xiaolin Wei. "Dimension-aware attention for efficient mobile networks." Pattern Recognition 131 (November 2022): 108899. http://dx.doi.org/10.1016/j.patcog.2022.108899.

7

Siragusa, Giovanni, and Livio Robaldo. "Sentence Graph Attention For Content-Aware Summarization." Applied Sciences 12, no. 20 (October 14, 2022): 10382. http://dx.doi.org/10.3390/app122010382.

Abstract:
Neural network-based encoder–decoder (ED) models are widely used for abstractive text summarization. While the encoder first reads the source document and embeds salient information, the decoder starts from such encoding to generate the summary word-by-word. However, the drawback of the ED model is that it treats words and sentences equally, without discerning the most relevant ones from the others. Many researchers have investigated this problem and provided different solutions. In this paper, we define a sentence-level attention mechanism based on the well-known PageRank algorithm to find the relevant sentences, then propagate the resulting scores into a second word-level attention layer. We tested the proposed model on the well-known CNN/Dailymail dataset, and found that it was able to generate summaries with a much higher abstractive power than state-of-the-art models, in spite of an unavoidable (but slight) decrease in terms of the Rouge scores.
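The sentence-level attention step lends itself to a compact sketch. Below is a hypothetical NumPy rendering of PageRank over a sentence-similarity graph; the sentence embeddings, damping factor, and the way scores feed the word-level layer are all assumptions for illustration.

```python
import numpy as np

def sentence_pagerank(sent_vecs, damping=0.85, iters=50):
    """PageRank over a cosine-similarity graph of sentences."""
    S = sent_vecs / np.linalg.norm(sent_vecs, axis=1, keepdims=True)
    A = np.clip(S @ S.T, 0.0, None)           # non-negative similarities
    np.fill_diagonal(A, 0.0)
    A = A / A.sum(axis=0, keepdims=True)      # column-stochastic transitions
    n = A.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (A @ r)   # power iteration
    return r / r.sum()

sents = np.random.rand(6, 128)                # 6 toy sentence embeddings
scores = sentence_pagerank(sents)
# The paper propagates such scores into a word-level attention layer,
# e.g. by rescaling each word's attention by its sentence's score.
```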
8

Song, Junyu, Kaifang Li, Guancheng Hui, and Miaohui Zhang. "Relation Aware Attention for Person Re-identification." Journal of Physics: Conference Series 2010, no. 1 (September 1, 2021): 012130. http://dx.doi.org/10.1088/1742-6596/2010/1/012130.

9

Lyu, Kejie, Yingming Li, and Zhongfei Zhang. "Attention-Aware Multi-Task Convolutional Neural Networks." IEEE Transactions on Image Processing 29 (2020): 1867–78. http://dx.doi.org/10.1109/tip.2019.2944522.

10

Celikcan, Ufuk, Gokcen Cimen, E. Bengu Kevinc, and Tolga Capin. "Attention-Aware Disparity Control in interactive environments." Visual Computer 29, no. 6-8 (April 26, 2013): 685–94. http://dx.doi.org/10.1007/s00371-013-0804-6.

11

Wu, Shuzhe, Meina Kan, Shiguang Shan, and Xilin Chen. "Hierarchical Attention for Part-Aware Face Detection." International Journal of Computer Vision 127, no. 6-7 (March 2, 2019): 560–78. http://dx.doi.org/10.1007/s11263-019-01157-5.

12

Yuan, Weihua, Hong Wang, Xiaomei Yu, Nan Liu, and Zhenghao Li. "Attention-based context-aware sequential recommendation model." Information Sciences 510 (February 2020): 122–34. http://dx.doi.org/10.1016/j.ins.2019.09.007.

13

Leng, Jiaxu, Ying Liu, and Shang Chen. "Context-aware attention network for image recognition." Neural Computing and Applications 31, no. 12 (June 18, 2019): 9295–305. http://dx.doi.org/10.1007/s00521-019-04281-y.

14

Roda, Claudia, and Julie Thomas. "Attention aware systems: Introduction to special issue." Computers in Human Behavior 22, no. 4 (July 2006): 555–56. http://dx.doi.org/10.1016/j.chb.2005.12.002.

15

Shen, Aihong, Huasheng Wang, Junjie Wang, Hongchen Tan, Xiuping Liu, and Junjie Cao. "Attention-Aware Adversarial Network for Person Re-Identification." Applied Sciences 9, no. 8 (April 14, 2019): 1550. http://dx.doi.org/10.3390/app9081550.

Abstract:
Person re-identification (re-ID) is a fundamental problem in the field of computer vision. The performance of deep learning-based person re-ID models suffers from a lack of training data. In this work, we introduce a novel image-specific data augmentation method on the feature map level to enforce feature diversity in the network. Furthermore, an attention assignment mechanism is proposed to enforce that the person re-ID classifier focuses on nearly all important regions of the input person image. To achieve this, a three-stage framework is proposed. First, a baseline classification network is trained for person re-ID. Second, an attention assignment network is proposed based on the baseline network, in which the attention module learns to suppress the response of the current detected regions and re-assign attentions to other important locations. By this means, multiple important regions for classification are highlighted by the attention map. Finally, the attention map is integrated in the attention-aware adversarial network (AAA-Net), which generates high-performance classification results with an adversarial training strategy. We evaluate the proposed method on two large-scale benchmark datasets, including Market1501 and DukeMTMC-reID. Experimental results show that our algorithm performs favorably against the state-of-the-art methods.
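The "suppress and re-assign" behaviour of the attention module can be caricatured in a few lines. This is only a toy sketch under invented assumptions (quantile-based suppression, max-pooled maps); the adversarial training and the actual network are not represented.

```python
import torch

def reassign_attention(resp, n_rounds=3, keep=0.9):
    """Repeatedly suppress the strongest-responding regions so that
    later rounds attend to other informative locations."""
    maps, resp = [], resp.clone()
    for _ in range(n_rounds):
        attn = torch.softmax(resp.flatten(), dim=0).view_as(resp)
        maps.append(attn)
        cutoff = torch.quantile(resp.flatten(), keep)
        resp = torch.where(resp >= cutoff, resp.min(), resp)  # suppress top regions
    return torch.stack(maps).max(dim=0).values  # highlight several regions

resp = torch.rand(16, 8)                       # toy feature-response map
multi_region_attn = reassign_attention(resp)
```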
16

Ji, Mingi, Weonyoung Joo, Kyungwoo Song, Yoon-Yeong Kim, and Il-Chul Moon. "Sequential Recommendation with Relation-Aware Kernelized Self-Attention." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4304–11. http://dx.doi.org/10.1609/aaai.v34i04.5854.

Abstract:
Recent studies identified that sequential recommendation is improved by the attention mechanism. Following this development, we propose Relation-Aware Kernelized Self-Attention (RKSA), adopting the self-attention mechanism of the Transformer with the augmentation of a probabilistic model. The original self-attention of the Transformer is a deterministic measure without relation-awareness. Therefore, we introduce a latent space to the self-attention, and the latent space models the recommendation context from relations as a multivariate skew-normal distribution with a kernelized covariance matrix built from co-occurrences, item characteristics, and user information. This work merges the self-attention of the Transformer and sequential recommendation by adding a probabilistic model of the recommendation task specifics. We evaluated RKSA on the benchmark datasets, and RKSA shows significant improvements over recent baseline models. RKSA was also able to produce a latent space model that reveals the reasons behind a recommendation.
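Stripped of the probabilistic (skew-normal) machinery, "relation-aware" self-attention can be sketched as a kernel term added to the attention logits; the kernel matrix and weights below are invented placeholders.

```python
import torch
import torch.nn.functional as F

def relation_aware_attention(x, rel, Wq, Wk, Wv, lam=1.0):
    """Self-attention whose logits are shifted by a relation kernel
    (e.g. built from item co-occurrences and characteristics)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    logits = q @ k.T / k.shape[-1] ** 0.5 + lam * rel
    return F.softmax(logits, dim=-1) @ v

n, d = 7, 16
x = torch.randn(n, d)                  # embeddings of one interaction sequence
rel = torch.randn(n, n)                # toy relation kernel matrix
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
out = relation_aware_attention(x, rel, Wq, Wk, Wv)
```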
17

Chen, Kehai, Rui Wang, Masao Utiyama, and Eiichiro Sumita. "Context-aware positional representation for self-attention networks." Neurocomputing 451 (September 2021): 46–56. http://dx.doi.org/10.1016/j.neucom.2021.04.055.

18

Peck, Evan M., Emily Carlin, and Robert Jacob. "Designing Brain-Computer Interfaces for Attention-Aware Systems." Computer 48, no. 10 (October 2015): 34–42. http://dx.doi.org/10.1109/mc.2015.315.

19

Zhang, Sanyi, Zhanjie Song, Xiaochun Cao, Hua Zhang, and Jie Zhou. "Task-Aware Attention Model for Clothing Attribute Prediction." IEEE Transactions on Circuits and Systems for Video Technology 30, no. 4 (April 2020): 1051–64. http://dx.doi.org/10.1109/tcsvt.2019.2902268.

20

Zhang, Miaohui, Ming Xin, Chengcheng Gao, Xile Wang, and Sihan Zhang. "Attention-aware scoring learning for person re-identification." Knowledge-Based Systems 203 (September 2020): 106154. http://dx.doi.org/10.1016/j.knosys.2020.106154.

21

Wei, Wei, Zanbo Wang, Xianling Mao, Guangyou Zhou, Pan Zhou, and Sheng Jiang. "Position-aware self-attention based neural sequence labeling." Pattern Recognition 110 (February 2021): 107636. http://dx.doi.org/10.1016/j.patcog.2020.107636.

22

Okoshi, Tadashi, Hiroki Nozaki, Jin Nakazawa, Hideyuki Tokuda, Julian Ramos, and Anind K. Dey. "Towards attention-aware adaptive notification on smart phones." Pervasive and Mobile Computing 26 (February 2016): 17–34. http://dx.doi.org/10.1016/j.pmcj.2015.10.004.

23

Yan, Haibin, and Shiwei Wang. "Learning part-aware attention networks for kinship verification." Pattern Recognition Letters 128 (December 2019): 169–75. http://dx.doi.org/10.1016/j.patrec.2019.08.023.

24

Li, Shanshan, Qiang Cai, Zhuangzi Li, Haisheng Li, Naiguang Zhang, and Xiaoyu Zhang. "Attention-aware invertible hashing network with skip connections." Pattern Recognition Letters 138 (October 2020): 556–62. http://dx.doi.org/10.1016/j.patrec.2020.09.002.

25

Cao, Yi, Weifeng Zhang, Bo Song, Weike Pan, and Congfu Xu. "Position-aware context attention for session-based recommendation." Neurocomputing 376 (February 2020): 65–72. http://dx.doi.org/10.1016/j.neucom.2019.09.016.

26

Wang, Shiwei, Long Lan, Xiang Zhang, Guohua Dong, and Zhigang Luo. "Object-aware semantics of attention for image captioning." Multimedia Tools and Applications 79, no. 3-4 (November 14, 2019): 2013–30. http://dx.doi.org/10.1007/s11042-019-08209-5.

27

Gao, Xiaoyan, Fuli Feng, Xiangnan He, Heyan Huang, Xinyu Guan, Chong Feng, Zhaoyan Ming, and Tat-Seng Chua. "Hierarchical Attention Network for Visually-Aware Food Recommendation." IEEE Transactions on Multimedia 22, no. 6 (June 2020): 1647–59. http://dx.doi.org/10.1109/tmm.2019.2945180.

28

Yadav, Shweta, Pralay Ramteke, Asif Ekbal, Sriparna Saha, and Pushpak Bhattacharyya. "Exploring Disorder-Aware Attention for Clinical Event Extraction." ACM Transactions on Multimedia Computing, Communications, and Applications 16, no. 1s (April 28, 2020): 1–21. http://dx.doi.org/10.1145/3372328.

29

Liu, Shiguang, Yaxi Jiang, and Huarong Luo. "Attention-aware color theme extraction for fabric images." Textile Research Journal 88, no. 5 (December 23, 2016): 552–65. http://dx.doi.org/10.1177/0040517516685278.

Abstract:
Color configuration plays an important role in art, design, and communication, which can influence the user’s experiences, feelings, and psychological well-being. It is laborious to manually select a color theme from scratch for handling large batches of images. Alternatively, it can inspire designers’ creations and save their time as well by leveraging the color themes in existing art works (e.g. fabric, paintings). However, it is challenging to automatically extract perceptually plausible color themes from fabric images. This paper presents a new automatic framework for extracting color themes from fabric images. A saliency map is built to help recognize the visual attention regions of the input image. Since the saliency map separates the image into visual attention regions (foreground) and non-visual attention regions (background), we respectively compute the dominant colors of these two regions, and merge them to form the initial target color theme based on certain rules. The dominant colors are extracted, accounting for the characteristics of the fabric and hue distributions of fabric images so as to acquire visually plausible results. Our method can be used to transfer colors between two fabric images for fabric color design. We tested our method thoroughly with various fabric images (e.g. cotton, silk, and linen) with different texture patterns (e.g. plain and twill). Experiments show that our method is more efficient and can generate more visually plausible results than state-of-the-art algorithms.
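The saliency-split pipeline in this abstract maps naturally to a short sketch: cluster foreground and background pixels separately and merge the dominant colors. The threshold, cluster counts, and random inputs below are assumptions; the paper's fabric-specific hue handling and merging rules are simplified away.

```python
import numpy as np
from sklearn.cluster import KMeans

def color_theme(img, saliency, k_fg=3, k_bg=2, thresh=0.5):
    """Dominant colors of salient (foreground) and non-salient
    (background) pixels, merged into one color theme."""
    px = img.reshape(-1, 3).astype(float)
    fg_mask = saliency.reshape(-1) > thresh
    fg = KMeans(n_clusters=k_fg, n_init=10).fit(px[fg_mask]).cluster_centers_
    bg = KMeans(n_clusters=k_bg, n_init=10).fit(px[~fg_mask]).cluster_centers_
    return np.vstack([fg, bg])                 # (k_fg + k_bg, 3) theme

img = np.random.randint(0, 256, (64, 64, 3))   # toy "fabric" image
sal = np.random.rand(64, 64)                   # toy saliency map
theme = color_theme(img, sal)
```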
30

Aruga, Yuki, Liz Rincon-Ardila, and Gentiane Venture. "Human Aware Navigation Based on Human Attention Estimation." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2020 (2020): 1P2-I07. http://dx.doi.org/10.1299/jsmermd.2020.1p2-i07.

31

Roda, Claudia, and Julie Thomas. "Attention aware systems: Theories, applications, and research agenda." Computers in Human Behavior 22, no. 4 (July 2006): 557–87. http://dx.doi.org/10.1016/j.chb.2005.12.005.

32

Fan, Shaokun, Lele Kang, and J. Leon Zhao. "Workflow-aware attention tracking to enhance collaboration management." Information Systems Frontiers 17, no. 6 (May 30, 2015): 1253–64. http://dx.doi.org/10.1007/s10796-015-9565-2.

33

Sun, Jinsheng, Xiaojuan Ban, Bing Han, Xueyuan Yang, and Chao Yao. "Interactive Image Segmentation Based on Feature-Aware Attention." Symmetry 14, no. 11 (November 12, 2022): 2396. http://dx.doi.org/10.3390/sym14112396.

Abstract:
Interactive segmentation is a technique for picking objects of interest in images according to users’ input interactions. Some recent works take the users’ interactive input to guide the deep neural network training, where the users’ click information is utilized as weak-supervised information. However, limited by the learning capability of the model, this structure does not accurately represent the user’s interaction intention. In this work, we propose a multi-click interactive segmentation solution for employing human intention to refine the segmentation results. We propose a coarse segmentation network to extract semantic information and generate rough results. Then, we designed a feature-aware attention module according to the symmetry of user intention and image semantic information. Finally, we establish a refinement module to combine the feature-aware results with coarse masks to generate precise intentional segmentation. Furthermore, the feature-aware module is trained as a plug-and-play tool, which can be embedded into most deep image segmentation models for exploiting users’ click information in the training process. We conduct experiments on five common datasets (SBD, GrabCut, DAVIS, Berkeley, MS COCO) and the results prove our attention module can improve the performance of image segmentation networks.
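For context on how click information typically reaches such a network, here is one common encoding, clicks as truncated distance maps concatenated to the image, offered only as a sketch; it is not the paper's feature-aware attention module.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def click_channel(shape, clicks, cap=255.0):
    """Distance to the nearest user click, truncated and normalized."""
    m = np.ones(shape, dtype=bool)
    for y, x in clicks:
        m[y, x] = False                        # zero out clicked pixels
    return np.minimum(distance_transform_edt(m), cap) / cap

H, W = 64, 64
pos = click_channel((H, W), [(20, 30)])        # positive (object) clicks
neg = click_channel((H, W), [(5, 5)])          # negative (background) clicks
# A segmentation CNN would take np.concatenate([image, pos[None], neg[None]]).
```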
34

Zhang, Ruihua, Fan Yang, Yan Luo, Jianyi Liu, Jinbin Li, and Cong Wang. "Part-Aware Mask-Guided Attention for Thorax Disease Classification." Entropy 23, no. 6 (May 23, 2021): 653. http://dx.doi.org/10.3390/e23060653.

Abstract:
Thorax disease classification is a challenging task due to complex pathologies and subtle texture changes, etc. It has been extensively studied for years largely because of its wide application in computer-aided diagnosis. Most existing methods directly learn global feature representations from whole Chest X-ray (CXR) images, without considering in depth the richer visual cues lying around informative local regions. Thus, these methods often produce sub-optimal thorax disease classification performance because they ignore the very informative pathological changes around organs. In this paper, we propose a novel Part-Aware Mask-Guided Attention Network (PMGAN) that learns complementary global and local feature representations from all-organ region and multiple single-organ regions simultaneously for thorax disease classification. Specifically, multiple innovative soft attention modules are designed to progressively guide feature learning toward the global informative regions of whole CXR image. A mask-guided attention module is designed to further search for informative regions and visual cues within the all-organ or single-organ images, where attention is elegantly regularized by automatically generated organ masks and without introducing computation during the inference stage. In addition, a multi-task learning strategy is designed, which effectively maximizes the learning of complementary local and global representations. The proposed PMGAN has been evaluated on the ChestX-ray14 dataset and the experimental results demonstrate its superior thorax disease classification performance against the state-of-the-art methods.
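The mask-guided regularization admits a compact sketch: penalize attention mass that falls outside an automatically generated organ mask. The loss form and shapes below are assumptions, not PMGAN's exact formulation.

```python
import torch

def mask_guided_attention_loss(attn, organ_mask, eps=1e-8):
    """Penalize spatial attention that falls outside the organ mask."""
    attn = attn / (attn.sum(dim=(-2, -1), keepdim=True) + eps)  # normalize maps
    off_mask = (attn * (1.0 - organ_mask)).sum(dim=(-2, -1))    # off-organ mass
    return off_mask.mean()

attn = torch.rand(4, 1, 32, 32)                        # toy attention maps
mask = (torch.rand(4, 1, 32, 32) > 0.5).float()        # toy organ masks
loss = mask_guided_attention_loss(attn, mask)          # add to the task loss
```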
35

Wei, Liting, Bin Li, Yun Li, and Yi Zhu. "Time Interval Aware Self-Attention approach for Knowledge Tracing." Computers and Electrical Engineering 102 (September 2022): 108179. http://dx.doi.org/10.1016/j.compeleceng.2022.108179.

36

Chen, Chongqing, Dezhi Han, and Chin-Chen Chang. "CAAN: Context-Aware attention network for visual question answering." Pattern Recognition 132 (December 2022): 108980. http://dx.doi.org/10.1016/j.patcog.2022.108980.

37

Wang, Xucheng, Chenning Tao, and Zhenrong Zheng. "Occlusion-aware light field depth estimation with view attention." Optics and Lasers in Engineering 160 (January 2023): 107299. http://dx.doi.org/10.1016/j.optlaseng.2022.107299.

38

Dun, Yaqian, Kefei Tu, Chen Chen, Chunyan Hou, and Xiaojie Yuan. "KAN: Knowledge-aware Attention Network for Fake News Detection." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 81–89. http://dx.doi.org/10.1609/aaai.v35i1.16080.

Abstract:
The explosive growth of fake news on social media has drawn great concern from both industrial and academic communities. There has been an increasing demand for fake news detection due to its detrimental effects. Generally, news content is condensed and full of knowledge entities. However, existing methods usually focus on the textual contents and social context, and ignore the knowledge-level relationships among news entities. To address this limitation, in this paper, we propose a novel Knowledge-aware Attention Network (KAN) that incorporates external knowledge from a knowledge graph for fake news detection. First, we identify entity mentions in news contents and align them with the entities in the knowledge graph. Then, the entities and their contexts are used as external knowledge to provide complementary information. Finally, we design News towards Entities (N-E) attention and News towards Entities and Entity Contexts (N-E^2C) attention to measure the importance of knowledge. Thus, our proposed model can incorporate both semantic-level and knowledge-level representations of news to detect fake news. Experimental results on three public datasets show that our model outperforms the state-of-the-art methods, and also validate the effectiveness of knowledge attention.
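The News-towards-Entities (N-E) attention reduces to scoring aligned entity embeddings against the news representation; a hypothetical minimal version follows, with toy vectors standing in for learned embeddings.

```python
import torch
import torch.nn.functional as F

def news_entity_attention(news_vec, entity_vecs):
    """Weigh knowledge-graph entity vectors by relevance to the news."""
    alpha = F.softmax(entity_vecs @ news_vec, dim=0)   # (n_entities,)
    return alpha @ entity_vecs                         # knowledge summary

news = torch.randn(64)                  # news representation (toy)
entities = torch.randn(5, 64)           # aligned KG entity embeddings (toy)
knowledge = news_entity_attention(news, entities)
# The N-E^2C variant scores entity contexts the same way, and the two
# summaries are combined with the semantic news representation.
```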
39

Zhao, Yaru, Bo Cheng, and Yingying Zhang. "Knowledge-aware Dialogue Generation with Hybrid Attention (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15951–52. http://dx.doi.org/10.1609/aaai.v35i18.17972.

Abstract:
Using commonsense knowledge to assist dialogue generation is a big step forward for the dialogue generation task. However, how to fully utilize commonsense information remains a challenge. Furthermore, the entities generated in the response often do not match the information in the post. In this paper, we propose a dialogue generation model which uses hybrid attention to better generate rational entities. When a user post is given, the model encodes relevant knowledge graphs from a knowledge base with a graph attention mechanism. It then encodes the user post and graphs with a co-attention mechanism, which effectively encodes complex related data. Through the above mechanisms, we obtain a better mutual understanding of the post and the knowledge. The experimental results show that our model is more effective than the current state-of-the-art model (CCM).
40

Heo, Jiseong, Yooseung Wang, and Jihun Park. "Occlusion-aware spatial attention transformer for occluded object recognition." Pattern Recognition Letters 159 (July 2022): 70–76. http://dx.doi.org/10.1016/j.patrec.2022.05.006.

41

Song, Zixing, and Irwin King. "Hierarchical Heterogeneous Graph Attention Network for Syntax-Aware Summarization." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11340–48. http://dx.doi.org/10.1609/aaai.v36i10.21385.

Abstract:
The task of summarization often requires a non-trivial understanding of the given text at the semantic level. In this work, we essentially incorporate the constituent structure into the single document summarization via the Graph Neural Networks to learn the semantic meaning of tokens. More specifically, we propose a novel hierarchical heterogeneous graph attention network over constituency-based parse trees for syntax-aware summarization. This approach reflects psychological findings that humans will pinpoint specific selection patterns to construct summaries hierarchically. Extensive experiments demonstrate that our model is effective for both the abstractive and extractive summarization tasks on five benchmark datasets from various domains. Moreover, further performance improvement can be obtained by virtue of state-of-the-art pre-trained models.
42

Wang, Kewei, Shuaiyuan Du, Chengxin Liu, and Zhiguo Cao. "Interior Attention-Aware Network for Infrared Small Target Detection." IEEE Transactions on Geoscience and Remote Sensing 60 (2022): 1–13. http://dx.doi.org/10.1109/tgrs.2022.3163410.

43

Yang, Baosong, Longyue Wang, Derek F. Wong, Shuming Shi, and Zhaopeng Tu. "Context-aware Self-Attention Networks for Natural Language Processing." Neurocomputing 458 (October 2021): 157–69. http://dx.doi.org/10.1016/j.neucom.2021.06.009.

44

Guo, Taian, Tao Dai, Ling Liu, Zexuan Zhu, and Shu-Tao Xia. "S2A: Scale-Attention-Aware Networks for Video Super-Resolution." Entropy 23, no. 11 (October 25, 2021): 1398. http://dx.doi.org/10.3390/e23111398.

Abstract:
Convolutional Neural Networks (CNNs) have been widely used in video super-resolution (VSR). Most existing VSR methods focus on how to utilize the information of multiple frames, while neglecting the feature correlations of the intermediate features, thus limiting the feature expression of the models. To address this problem, we propose a novel SAA network, that is, Scale-and-Attention-Aware Networks, to apply different attention to different temporal-length streams, while further exploring both spatial and channel attention on separate streams with a newly proposed Criss-Cross Channel Attention Module (C3AM). Experiments on public VSR datasets demonstrate the superiority of our method over other state-of-the-art methods in terms of both quantitative and qualitative metrics.
45

Lin, Haoneng, Zongshang Li, Zefan Yang, and Yi Wang. "Variance-aware attention U-Net for multi-organ segmentation." Medical Physics 48, no. 12 (November 13, 2021): 7864–76. http://dx.doi.org/10.1002/mp.15322.

46

Cao, Zhiyuan, Yufei Gao, and Jiacai Zhang. "Scale-aware attention network for weakly supervised semantic segmentation." Neurocomputing 492 (July 2022): 34–49. http://dx.doi.org/10.1016/j.neucom.2022.04.006.

47

Chen, Yifan, Han Wang, Xiaolu Sun, Bin Fan, Chu Tang, and Hui Zeng. "Deep attention aware feature learning for person re-Identification." Pattern Recognition 126 (June 2022): 108567. http://dx.doi.org/10.1016/j.patcog.2022.108567.

48

Mack, Wolfgang, Julian Wechsler, and Emanuël A. P. Habets. "Signal-aware direction-of-arrival estimation using attention mechanisms." Computer Speech & Language 75 (September 2022): 101363. http://dx.doi.org/10.1016/j.csl.2022.101363.

49

Shapira, Tal, and Yuval Shavitt. "SASA: Source-Aware Self-Attention for IP Hijack Detection." IEEE/ACM Transactions on Networking 30, no. 1 (February 2022): 437–49. http://dx.doi.org/10.1109/tnet.2021.3115935.

50

Zha, Yongfu, Yongjian Zhang, Zhixin Liu, and Yumin Dong. "Self-Attention Based Time-Rating-Aware Context Recommender System." Computational Intelligence and Neuroscience 2022 (September 17, 2022): 1–10. http://dx.doi.org/10.1155/2022/9288902.

Abstract:
The sequential recommendation can predict the user’s next behavior according to the user’s historical interaction sequence. To better capture users’ preferences, some sequential recommendation models propose time-aware attention networks to capture users’ long-term and short-term intentions. However, although these models have achieved good results, they ignore the influence of users on the rating information of items. We believe that in the sequential recommendation, the user’s displayed feedback (rating) on an item reflects the user’s preference for the item, which directly affects the user’s choice of the next item to a certain extent. In different periods of sequential recommendation, the user’s rating of the item reflects the change in the user’s preference. In this paper, we separately model the time interval of items in the user’s interaction sequence and the ratings of the items in the interaction sequence to obtain temporal context and rating context, respectively. Finally, we exploit the self-attention mechanism to capture the impact of temporal context and rating context on users’ preferences to predict items that users would click next. Experiments on three public benchmark datasets show that our proposed model (SATRAC) outperforms several state-of-the-art methods. The Hit@10 value of the SATRAC model on the three datasets (Movies-1M, Amazon-Movies, Amazon-CDs) increased by 0.73%, 2.73%, and 1.36%, and the NDCG@10 value increased by 5.90%, 3.47%, and 4.59%, respectively.
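The core modeling move, fusing time-interval and rating context into the sequence before self-attention, can be sketched as follows. Vocabulary sizes, the bucketing of intervals, and the plain nn.MultiheadAttention layer are simplifying assumptions, not the SATRAC architecture.

```python
import torch
import torch.nn as nn

n_items, n_buckets, n_ratings, d = 1000, 50, 5, 32   # toy sizes

item_emb   = nn.Embedding(n_items, d)
time_emb   = nn.Embedding(n_buckets, d)   # bucketized time intervals
rating_emb = nn.Embedding(n_ratings, d)   # explicit ratings mapped to 0..4

items   = torch.randint(0, n_items, (1, 10))    # one user's sequence
gaps    = torch.randint(0, n_buckets, (1, 10))  # interval bucket per step
ratings = torch.randint(0, n_ratings, (1, 10))

# Fuse temporal and rating context into the item sequence, then let
# self-attention weigh the steps when predicting the next item.
x = item_emb(items) + time_emb(gaps) + rating_emb(ratings)
attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
out, _ = attn(x, x, x)                    # (1, 10, d) context-aware states
```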