Journal articles on the topic 'Mechanism of attention'

Consult the top 50 journal articles for your research on the topic 'Mechanism of attention.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Zang, Yubin, Zhenming Yu, Kun Xu, Minghua Chen, Sigang Yang, and Hongwei Chen. "Fiber communication receiver models based on the multi-head attention mechanism." Chinese Optics Letters 21, no. 3 (2023): 030602. http://dx.doi.org/10.3788/col202321.030602.

2

Yoo, Sungwook, Hanjun Goo, and Kyuseok Shim. "Improving Review-based Attention Mechanism." KIISE Transactions on Computing Practices 27, no. 10 (October 31, 2021): 486–91. http://dx.doi.org/10.5626/ktcp.2021.27.10.486.

3

Jia, Yuening. "Attention Mechanism in Machine Translation." Journal of Physics: Conference Series 1314 (October 2019): 012186. http://dx.doi.org/10.1088/1742-6596/1314/1/012186.

4

Sieb, R. A. "A brain mechanism for attention." Medical Hypotheses 33, no. 3 (November 1990): 145–53. http://dx.doi.org/10.1016/0306-9877(90)90164-a.

5

Park, Da-Sol, and Jeong-Won Cha. "Image Caption Generation using Object Attention Mechanism." Journal of KIISE 46, no. 4 (April 30, 2019): 369–75. http://dx.doi.org/10.5626/jok.2019.46.4.369.

6

Spironelli, Chiara, Mariaelena Tagliabue, and Carlo Umiltà. "Response Selection and Attention Orienting." Experimental Psychology 56, no. 4 (January 2009): 274–82. http://dx.doi.org/10.1027/1618-3169.56.4.274.

Abstract:
Recently, there has been a redirection of research efforts toward the exploration of the role of hemispheric lateralization in determining Simon effect asymmetries. The present study aimed at implementing a connectionist model that simulates the cognitive mechanisms implied by such asymmetries, focusing on the underlying neural structure. A left-lateralized response-selection mechanism was implemented alone (Experiment 1) or along with a right-lateralized automatic attention-orienting mechanism (Experiment 2). It was found that both models yielded Simon effect asymmetries. However, whereas the first model showed a pattern of asymmetry reversed with respect to real human data, the second model's performance strongly resembled human Simon effect asymmetries, with a significantly greater right than left Simon effect. Thus, a left-side bias in the response-selection mechanism produced a left-side biased Simon effect, whereas a right-side bias in the attention system produced a right-side biased Simon effect. In conclusion, results showed that the bias of the attention system had a larger impact than the bias of the response-selection mechanism in producing Simon effect asymmetries.
7

Yin, Songlin, and Fei Tan. "YOLOv4-A: Research on Traffic Sign Detection Based on Hybrid Attention Mechanism." Journal of Computers (電腦學刊) 33, no. 6 (December 2022): 181–92. http://dx.doi.org/10.53106/199115992022123306015.

Abstract:
To address false and missed detections in traffic sign detection, an improved YOLOv4 detection algorithm is proposed. Building on YOLOv4, the Efficient Channel Attention (ECA) module and the Convolutional Block Attention Module (CBAM) are added to form the YOLOv4-A algorithm. At the same time, the global k-means clustering algorithm is used to regenerate smaller anchors, which makes the network converge faster and reduces the error rate. The YOLOv4-A algorithm re-calibrates the detection branch features in the channel and spatial dimensions, so that the network focuses on and enhances effective features while suppressing interfering features, which improves detection ability. Experiments on the TT100K traffic sign dataset show that the proposed algorithm yields a particularly significant improvement for small target detection. Compared with the YOLOv4 algorithm, the precision and mAP@0.5 of the proposed algorithm are increased by 5.38% and 5.75%, respectively.
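
For orientation, an ECA-style channel-attention layer is small enough to sketch in full. The following is a minimal, illustrative PyTorch version; the kernel size of 3 and other hyperparameters are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ECALayer(nn.Module):
    """Efficient Channel Attention: global pooling + 1-D conv across channels."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                                   # x: (B, C, H, W)
        y = self.pool(x)                                    # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(1, 2)                   # (B, 1, C)
        y = self.conv(y)                                    # local cross-channel interaction
        y = torch.sigmoid(y).transpose(1, 2).unsqueeze(-1)  # (B, C, 1, 1) gates
        return x * y                                        # re-weight channels
```
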
8

Mao, Guojun, Guanyi Liao, Hengliang Zhu, and Bo Sun. "Multibranch Attention Mechanism Based on Channel and Spatial Attention Fusion." Mathematics 10, no. 21 (November 6, 2022): 4150. http://dx.doi.org/10.3390/math10214150.

Abstract:
Recently, it has been demonstrated that the performance of an object detection network can be improved by embedding an attention module into it. In this work, we propose a lightweight and effective attention mechanism named multibranch attention (M3Att). For the input feature map, M3Att first uses a grouped convolutional layer with a pyramid structure for feature extraction, then calculates channel attention and spatial attention simultaneously and fuses them to obtain more complementary features. It is a "plug and play" module that can easily be added to an object detection network and significantly improves its performance with a small increase in parameters. We demonstrate the effectiveness of M3Att on various challenging object detection tasks, including PASCAL VOC2007, PASCAL VOC2012, KITTI, and the Zhanjiang Underwater Robot Competition. The experimental results show that this method dramatically improves object detection, especially on PASCAL VOC2007, where the mAP of the original network increased by 4.93% when M3Att was embedded in the YOLOv4 (You Only Look Once v4) network.
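
The parallel channel-plus-spatial design can be illustrated with a CBAM-flavoured sketch. This is not the published M3Att code: the grouped pyramid convolutions are omitted, and the reduction ratio, 7x7 spatial kernel, and additive fusion are assumptions.

```python
import torch
import torch.nn as nn

class ParallelChannelSpatialAttention(nn.Module):
    """Channel and spatial attention computed in parallel, then fused by summation."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(                  # channel branch (SE-style MLP)
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x):                          # x: (B, C, H, W)
        # Channel gates from average- and max-pooled descriptors.
        ch = torch.sigmoid(self.mlp(x.mean(dim=(2, 3), keepdim=True))
                           + self.mlp(x.amax(dim=(2, 3), keepdim=True)))
        # Spatial gates from channel-wise mean/max maps.
        sp_in = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        sp = torch.sigmoid(self.spatial(sp_in))    # (B, 1, H, W)
        return x * ch + x * sp                     # complementary fusion
```
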
9

Malge, Shraddha V. "Generating Image Descriptions using Attention Mechanism." International Journal for Research in Applied Science and Engineering Technology 9, no. 3 (March 31, 2021): 1047–56. http://dx.doi.org/10.22214/ijraset.2021.33397.

10

Yakura, Hiromu, Shinnosuke Shinozaki, Reon Nishimura, Yoshihiro Oyama, and Jun Sakuma. "Neural malware analysis with attention mechanism." Computers & Security 87 (November 2019): 101592. http://dx.doi.org/10.1016/j.cose.2019.101592.

11

Wang, Longjuan, Chunjie Cao, Binghui Zou, Jun Ye, and Jin Zhang. "License Plate Recognition via Attention Mechanism." Computers, Materials & Continua 75, no. 1 (2023): 1801–14. http://dx.doi.org/10.32604/cmc.2023.032785.

12

Tao, Xueying, Huaizong Shao, Qiang Li, Ye Pan, and Zhongqi Fu. "External Attention Mechanism-Based Modulation Classification." Journal of Physics: Conference Series 2425, no. 1 (February 1, 2023): 012051. http://dx.doi.org/10.1088/1742-6596/2425/1/012051.

Abstract:
This paper considers the modulation classification of radio frequency (RF) signals. An external attention mechanism-based convolutional neural network (EACNN) is proposed. Thanks to the external attention layers, the EACNN can capture the potential correlations of different modulation data, which helps reduce computational consumption and memory costs during training. Moreover, to account for the variation of the signals induced by channel fading, we further propose a customized batch normalization (BN) layer in EACNN to improve the classification accuracy with less training time. Numerical experiments on the RML2016.a dataset show that the proposed method outperforms the baseline method CNN2 by 7% in terms of classification accuracy.
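
External attention replaces the input-derived keys and values of self-attention with two small learnable memories shared across all samples, which is where the computational savings come from. A minimal PyTorch sketch of such a layer, assuming 64 memory slots; the paper's customized BN layer is not reproduced here.

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """External attention: queries attend to two small learnable memories
    instead of to keys/values computed from the input itself."""
    def __init__(self, d_model: int, n_mem: int = 64):
        super().__init__()
        self.mk = nn.Linear(d_model, n_mem, bias=False)   # key memory
        self.mv = nn.Linear(n_mem, d_model, bias=False)   # value memory

    def forward(self, x):                 # x: (B, N, d_model)
        attn = self.mk(x)                 # (B, N, n_mem) affinity to memory slots
        attn = attn.softmax(dim=1)        # normalize over tokens...
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # ...then over slots
        return self.mv(attn)              # (B, N, d_model)
```
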
13

Yeom, Chanho, Jieun Lee, and Sanghyun Park. "OANet: Ortho-Attention Net Based on Attention Mechanism for Database Performance Prediction." Journal of KIISE 49, no. 11 (November 30, 2022): 1026–31. http://dx.doi.org/10.5626/jok.2022.49.11.1026.

14

Himabindu, Dakshayani D., and Praveen S. Kumar. "A Streamlined Attention Mechanism for Image Classification and Fine-Grained Visual Recognition." MENDEL 27, no. 2 (December 21, 2021): 59–67. http://dx.doi.org/10.13164/mendel.2021.2.059.

Abstract:
In recent advances in deep learning, attention mechanisms have played a vital role in producing better results for computer vision tasks. Attention-based work spans image classification, fine-grained visual recognition, image captioning, video captioning, and object detection and recognition. Global and local attention are the two mechanisms that determine which parts of the input to attend to. Within this framework, channel attention selects the most informative channels among a block of produced channels, while spatial attention selects the regions in space to focus on. We propose a streamlined attention block module that enhances feature-based learning with few additional layers: a GAP layer followed by a linear layer, with second-order pooling (GSoP) incorporated after every layer of the encoder. In our experiments, this mechanism captured better long-range dependencies. We evaluated the model on the CIFAR-10, CIFAR-100 and FGVC-Aircrafts datasets for fine-grained visual recognition, and achieved a state-of-the-art result on FGVC-Aircrafts with an accuracy of 97%.
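
The GAP-plus-linear gating step described above resembles a squeeze-and-excitation gate. The following is a deliberately simplified sketch of that step alone; the second-order pooling (GSoP) component is omitted, so this is an approximation, not the paper's block.

```python
import torch
import torch.nn as nn

class GapLinearGate(nn.Module):
    """Simplified streamlined block: GAP descriptor -> linear layer -> channel gate.
    (The paper additionally uses second-order pooling, omitted here.)"""
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, x):                          # x: (B, C, H, W)
        g = x.mean(dim=(2, 3))                     # GAP -> (B, C)
        g = torch.sigmoid(self.fc(g))              # per-channel gate
        return x * g.unsqueeze(-1).unsqueeze(-1)   # re-weight feature map
```
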
15

Yin, Wenpeng, and Hinrich Schütze. "Attentive Convolution: Equipping CNNs with RNN-style Attention Mechanisms." Transactions of the Association for Computational Linguistics 6 (December 2018): 687–702. http://dx.doi.org/10.1162/tacl_a_00249.

Abstract:
In NLP, convolutional neural networks (CNNs) have benefited less than recurrent neural networks (RNNs) from attention mechanisms. We hypothesize that this is because the attention in CNNs has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution). Convolution is the differentiator of CNNs in that it can powerfully model the higher-level representation of a word by taking into account its local fixed-size context in the input text tx. In this work, we propose an attentive convolution network, ATTCONV. It extends the context scope of the convolution operation, deriving higher-level features for a word not only from local context, but also from information extracted from nonlocal context by the attention mechanism commonly used in RNNs. This nonlocal context can come (i) from parts of the input text tx that are distant or (ii) from extra (i.e., external) contexts ty. Experiments on sentence modeling with zero-context (sentiment analysis), single-context (textual entailment) and multiple-context (claim verification) demonstrate the effectiveness of ATTCONV in sentence representation learning with the incorporation of context. In particular, attentive convolution outperforms attentive pooling and is a strong competitor to popular attentive RNNs.
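
A toy version of attentive convolution can make the idea concrete: each position receives both a local convolutional feature and an attention-weighted summary of a (possibly external) context. This sketch assumes PyTorch and single-layer dot-product attention, far simpler than the full ATTCONV.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveConv(nn.Module):
    """Toy attentive convolution: each position's conv output also receives an
    attention-weighted summary of a (possibly external) context sequence."""
    def __init__(self, d: int):
        super().__init__()
        self.conv = nn.Conv1d(d, d, kernel_size=3, padding=1)  # local context
        self.w_ctx = nn.Linear(d, d, bias=False)               # nonlocal context

    def forward(self, x, ctx):            # x: (B, Lx, d), ctx: (B, Lc, d)
        scores = x @ ctx.transpose(1, 2)  # (B, Lx, Lc) dot-product attention
        beta = F.softmax(scores, dim=-1) @ ctx               # per-position context
        local = self.conv(x.transpose(1, 2)).transpose(1, 2) # (B, Lx, d)
        return torch.relu(local + self.w_ctx(beta))  # fuse local + attended context
```
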
16

Mathôt, Sebastiaan, and Jan Theeuwes. "Visual attention and stability." Philosophical Transactions of the Royal Society B: Biological Sciences 366, no. 1564 (February 27, 2011): 516–27. http://dx.doi.org/10.1098/rstb.2010.0187.

Abstract:
In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world.
17

Zheng, Menghua, Jiayu Xu, Yinjie Shen, Chunwei Tian, Jian Li, Lunke Fei, Ming Zong, and Xiaoyang Liu. "Attention-based CNNs for Image Classification: A Survey." Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012068. http://dx.doi.org/10.1088/1742-6596/2171/1/012068.

Abstract:
Deep learning techniques, and CNNs in particular, can learn powerful context information and have been widely applied in image recognition. However, deep CNNs may rely on large width and large depth, which increases computational costs. Attention mechanisms fused into CNNs can address this problem. In this paper, we survey how attention mechanisms act on CNNs for image classification. First, the survey traces the development of CNNs for image classification. Then, we cover the basics of CNNs and attention mechanisms for image classification. Next, we present the main architectures of CNNs with attention, public and collected datasets, and experimental results in image classification. Finally, we point out potential research directions and challenges for attention-based image classification and summarize the paper.
18

Chou, Kenny F., and Kamal Sen. "AIM: A network model of attention in auditory cortex." PLOS Computational Biology 17, no. 8 (August 27, 2021): e1009356. http://dx.doi.org/10.1371/journal.pcbi.1009356.

Abstract:
Attentional modulation of cortical networks is critical for the cognitive flexibility required to process complex scenes. Current theoretical frameworks for attention are based almost exclusively on studies in visual cortex, where attentional effects are typically modest and excitatory. In contrast, attentional effects in auditory cortex can be large and suppressive. A theoretical framework for explaining attentional effects in auditory cortex is lacking, preventing a broader understanding of cortical mechanisms underlying attention. Here, we present a cortical network model of attention in primary auditory cortex (A1). A key mechanism in our network is attentional inhibitory modulation (AIM) of cortical inhibitory neurons. In this mechanism, top-down inhibitory neurons disinhibit bottom-up cortical circuits, a prominent circuit motif observed in sensory cortex. Our results reveal that the same underlying mechanisms in the AIM network can explain diverse attentional effects on both spatial and frequency tuning in A1. We find that a dominant effect of disinhibition on cortical tuning is suppressive, consistent with experimental observations. Functionally, the AIM network may play a key role in solving the cocktail party problem. We demonstrate how attention can guide the AIM network to monitor an acoustic scene, select a specific target, or switch to a different target, providing flexible outputs for solving the cocktail party problem.
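
The disinhibition motif at the heart of AIM can be illustrated with a toy two-unit firing-rate simulation. This is our own minimal caricature, not the authors' A1 network; all weights, drives and time constants are arbitrary.

```python
import numpy as np

# Toy firing-rate illustration of the disinhibition motif: a top-down
# inhibitory signal suppresses a local inhibitory interneuron (I),
# releasing the excitatory unit (E) from inhibition.
def simulate(top_down_drive: float, steps: int = 500, dt: float = 0.01) -> float:
    e = i = 0.0
    relu = lambda v: max(v, 0.0)
    tau = 0.02                                       # 20 ms time constant
    for _ in range(steps):
        de = -e + relu(1.0 - 2.0 * i)                # E: drive minus inhibition
        di = -i + relu(1.0 - 3.0 * top_down_drive)   # I: suppressed by top-down input
        e += dt * de / tau
        i += dt * di / tau
    return e

print(simulate(0.0))   # no attention: E strongly inhibited (~0)
print(simulate(1.0))   # attention engaged: I silenced, E disinhibited (~1)
```
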
19

Xiang, Xuezhi, Syed Masroor Ali, and Ghulam Farid. "Optical Flow Estimation Using Channel Attention Mechanism." Journal of Flow Visualization and Image Processing 26, no. 4 (2019): 371–93. http://dx.doi.org/10.1615/jflowvisimageproc.2019031771.

20

Liu, Hankun, Daojing He, and Sammy Chan. "Fraudulent News Headline Detection with Attention Mechanism." Computational Intelligence and Neuroscience 2021 (March 15, 2021): 1–7. http://dx.doi.org/10.1155/2021/6679661.

Abstract:
E-mail systems and online social media platforms are ideal places for news dissemination, but a serious problem is the spread of fraudulent news headlines. Previously, fraudulent news headlines were detected mainly by laborious manual review. With the total number of news headlines as high as 1.48 million, manual review becomes practically infeasible. For news headline text data, the attention mechanism has powerful processing capability. In this paper, we propose models based on an LSTM and an attention layer, which fit the context of news headlines efficiently and can detect fraudulent news headlines quickly and accurately. Based on a multi-head attention mechanism that eschews recurrent units and reduces sequential computation, we build a Mini-Transformer deep learning model to further improve the classification performance.
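
A minimal sketch of the LSTM-plus-attention architecture described above, assuming PyTorch: an attention layer scores each hidden state and pools them into one headline representation. All hyperparameters are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

class AttentiveLSTMClassifier(nn.Module):
    """Bi-LSTM encoder with an attention layer that pools the hidden states
    into a single headline representation before classification."""
    def __init__(self, vocab: int, d: int = 128, n_cls: int = 2):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.lstm = nn.LSTM(d, d, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * d, 1)
        self.out = nn.Linear(2 * d, n_cls)

    def forward(self, tokens):                      # tokens: (B, L)
        h, _ = self.lstm(self.emb(tokens))          # (B, L, 2d)
        w = torch.softmax(self.attn(h), dim=1)      # (B, L, 1) attention weights
        pooled = (w * h).sum(dim=1)                 # weighted sum of hidden states
        return self.out(pooled)                     # fraudulent / legitimate logits
```
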
21

Yang, Kehua, Yaodong Wang, Wei Zhang, Jiqing Yao, and Yuquan Le. "Keyphrase Generation Based on Self-Attention Mechanism." Computers, Materials & Continua 61, no. 2 (2019): 569–81. http://dx.doi.org/10.32604/cmc.2019.05952.

22

Wang, Dongli, Shengliang Xiang, Yan Zhou, Jinzhen Mu, Haibin Zhou, and Richard Irampaye. "Multiple-Attention Mechanism Network for Semantic Segmentation." Sensors 22, no. 12 (June 13, 2022): 4477. http://dx.doi.org/10.3390/s22124477.

Abstract:
Contextual information and the dependencies between dimensions are vital in image semantic segmentation. In this paper, we propose a multiple-attention mechanism network (MANet) for semantic segmentation in a very effective and efficient way. Concretely, the contributions are as follows: (1) a novel dual-attention mechanism for capturing feature dependencies in the spatial and channel dimensions, where the adjacent position attention captures the dependencies between pixels well; (2) a new cross-dimensional interactive attention feature fusion module, which strengthens the fusion of fine location structure information in low-level features and category semantic information in high-level features. We conduct extensive experiments on semantic segmentation benchmarks including the PASCAL VOC 2012 and Cityscapes datasets. Our MANet achieves mIoU scores of 75.5% and 72.8% on the PASCAL VOC 2012 and Cityscapes datasets, respectively. Under the same conditions, the network is more effective than previously popular semantic segmentation networks.
23

Ren, Feng-lei, Hai-bo Zhou, Lu Yang, and Xin He. "Lane Detection Based on Dual Attention Mechanism." Chinese Optics 15 (2022): 1–9. http://dx.doi.org/10.37188/co.2022-0033.

24

An, XuDong, Lei Zhao, Han Wu, and QinJuan Zhang. "Channel estimation algorithm based on attention mechanism." Journal of Physics: Conference Series 2290, no. 1 (June 1, 2022): 012112. http://dx.doi.org/10.1088/1742-6596/2290/1/012112.

Abstract:
As the key to wireless communication, channel estimation has become a hot research topic in recent years. In this paper, we propose a deep learning method for channel estimation based on a deconvolutional network and a dilated convolutional network, addressing the fact that traditional channel estimation algorithms in orthogonal frequency division multiplexing (OFDM) systems can hardly meet the communication requirements of complex scenarios and are strongly affected by noise. The method constructs a lightweight deconvolutional network that exploits channel correlation and achieves channel interpolation and estimation step by step with a few layers of deconvolutional operations, yielding channel estimation with low complexity. To improve the estimation performance, a dilated convolutional network is further constructed to suppress channel noise and improve estimation accuracy. The simulation results show that the proposed method achieves lower estimation error and lower complexity than traditional methods under different signal-to-noise ratios (SNR).
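
A rough PyTorch sketch of the two-stage idea: transposed convolutions interpolate pilot estimates up to the full time-frequency grid, and dilated convolutions then denoise them. Shapes and layer sizes are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class DeconvDilatedEstimator(nn.Module):
    """Lightweight sketch: transposed convolutions interpolate pilot estimates
    to the full grid; dilated convolutions then denoise them."""
    def __init__(self):
        super().__init__()
        self.upsample = nn.Sequential(              # pilot grid -> full grid (x4)
            nn.ConvTranspose2d(2, 16, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 16, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.denoise = nn.Sequential(               # enlarged receptive field
            nn.Conv2d(16, 16, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 2, kernel_size=3, padding=1),
        )

    def forward(self, pilots):   # pilots: (B, 2, F/4, T/4), real+imag as channels
        return self.denoise(self.upsample(pilots))  # (B, 2, F, T)
```
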
25

Wang, Chiyu, Hong Li, Xinrong Li, Feifei Hou, and Xun Hu. "Guided attention mechanism: Training network more efficiently." Journal of Intelligent & Fuzzy Systems 38, no. 2 (February 6, 2020): 2323–35. http://dx.doi.org/10.3233/jifs-191257.

26

Yu, Chen. "Image Tampering Detection Based on Attention Mechanism." Computer Science and Application 12, no. 03 (2022): 729–38. http://dx.doi.org/10.12677/csa.2022.123074.

27

Yu, Mingfei, Yukio Miyasaka, and Masahiro Fujita. "Parallel Scheduling Attention Mechanism: Generalization and Optimization." IPSJ Transactions on System LSI Design Methodology 15 (2022): 2–15. http://dx.doi.org/10.2197/ipsjtsldm.15.2.

28

Ahmad Khan, Wasim, Hafiz Usman Akmal, Ahmad Ullah, Aqdas Malik, Sagheer Abbas, Abdullah Ahmad, and Abdullah Farooq. "Intelligent Virtual Security System using Attention Mechanism." ICST Transactions on Scalable Information Systems 5, no. 16 (April 13, 2018): 154473. http://dx.doi.org/10.4108/eai.13-4-2018.154473.

29

Jiang, Daihong, Yuanzheng Hu, Lei Dai, and Jin Peng. "Facial Expression Recognition Based on Attention Mechanism." Scientific Programming 2021 (March 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/6624251.

Abstract:
At present, traditional facial expression recognition methods based on convolutional neural networks express features with local operations, which makes the model inefficient at capturing dependencies between long-range pixels and leads to poor recognition performance. To solve these problems, this paper combines a self-attention mechanism with a residual network and proposes a new facial expression recognition model based on global operations. The paper first introduces the self-attention mechanism on top of the residual network, finding the relative importance of a location by calculating the weighted average of all location pixels. It then introduces channel attention to learn different features in the channel domain and to focus on the interactive features in different channels, improving robustness. Finally, it merges the self-attention and channel attention mechanisms to increase the model's ability to extract globally important features. The accuracy on the CK+ and FER2013 datasets is 97.89% and 74.15%, respectively, which confirms the effectiveness and superiority of the model in extracting global features.
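
The pixel-level self-attention step (a weighted average over all locations) is commonly implemented as a non-local block. The following SAGAN-style PyTorch sketch illustrates the idea; it is not the paper's exact model, and the C/8 projection size is an assumption.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Non-local self-attention over feature-map positions: every pixel becomes
    a weighted average of all pixels, capturing long-range dependencies."""
    def __init__(self, c: int):
        super().__init__()
        self.q = nn.Conv2d(c, c // 8, 1)
        self.k = nn.Conv2d(c, c // 8, 1)
        self.v = nn.Conv2d(c, c, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned residual scale

    def forward(self, x):                           # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)    # (B, HW, C/8)
        k = self.k(x).flatten(2)                    # (B, C/8, HW)
        attn = torch.softmax(q @ k, dim=-1)         # (B, HW, HW) weights
        v = self.v(x).flatten(2)                    # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                 # residual connection
```
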
30

Gul, M. Shahzeb Khan, M. Umair Mukati, Michel Batz, Soren Forchhammer, and Joachim Keinert. "Attention Mechanism-Based Light-Field View Synthesis." IEEE Access 10 (2022): 7895–913. http://dx.doi.org/10.1109/access.2022.3142949.

31

Li, Weiqian, and Bugao Xu. "Aspect-Based Fashion Recommendation With Attention Mechanism." IEEE Access 8 (2020): 141814–23. http://dx.doi.org/10.1109/access.2020.3013639.

32

Zhu, Yaling, Jungang Yang, Xinpu Deng, Chao Xiao, and Wei An. "Infrared Pedestrian Detection Based on Attention Mechanism." Journal of Physics: Conference Series 1634 (September 2020): 012032. http://dx.doi.org/10.1088/1742-6596/1634/1/012032.

33

Wang, Yuehuan. "Small-target predetection with an attention mechanism." Optical Engineering 41, no. 4 (April 1, 2002): 872. http://dx.doi.org/10.1117/1.1459054.

34

Bandera, Juan Pedro, R. Marfil, Antonio Jesús Palomino, Ricardo Vázquez-Martín, and Antonio Bandera. "Visual Attention Mechanism for a Social Robot." Applied Bionics and Biomechanics 9, no. 4 (2012): 409–25. http://dx.doi.org/10.1155/2012/320850.

Abstract:
This paper describes a visual perception system for a social robot. The central part of this system is an artificial attention mechanism that discriminates the most relevant information from all the visual information perceived by the robot. It is composed of three stages. At the preattentive stage, the concept of saliency is implemented based on 'proto-objects' [37]. From these objects, different saliency maps are generated. Then, the semiattentive stage identifies and tracks significant items according to the tasks to accomplish. This tracking process makes it possible to implement 'inhibition of return'. Finally, the attentive stage fixes the field of attention on the most relevant object depending on the behaviours to carry out. Three behaviours have been implemented and tested, which allow the robot to detect visual landmarks in an initially unknown environment and to recognize and capture the upper-body motion of people interested in interacting with it.
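
The saliency-selection loop with inhibition of return can be caricatured in a few lines: repeatedly attend to the saliency peak, then suppress its neighbourhood so attention moves on. A toy NumPy sketch, with arbitrary radius and fixation count.

```python
import numpy as np

def attend_sequence(saliency: np.ndarray, n_fixations: int = 3, radius: int = 5):
    """Winner-take-all over a saliency map with inhibition of return:
    attend to the peak, suppress its neighbourhood, repeat."""
    s = saliency.astype(float).copy()
    yy, xx = np.mgrid[0:s.shape[0], 0:s.shape[1]]
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        fixations.append((int(y), int(x)))
        s[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = -np.inf  # inhibit return
    return fixations

print(attend_sequence(np.random.rand(64, 64)))
```
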
35

Eisenberg, Nancy. "An explanatory mechanism that merits more attention." Behavioral and Brain Sciences 14, no. 4 (December 1991): 749. http://dx.doi.org/10.1017/s0140525x00072319.

36

Sang, Hai-Feng, Zi-Zhen Chen, and Da-Kuo He. "Human Motion Prediction Based on Attention Mechanism." Multimedia Tools and Applications 79, no. 9-10 (December 6, 2019): 5529–44. http://dx.doi.org/10.1007/s11042-019-08269-7.

37

Neumann, Odmar, and Ingrid Scharlau. "Visual attention and the mechanism of metacontrast." Psychological Research 71, no. 6 (June 8, 2006): 626–33. http://dx.doi.org/10.1007/s00426-006-0061-7.

38

Liu, Maofu, Lingjun Li, Huijun Hu, Weili Guan, and Jing Tian. "Image caption generation with dual attention mechanism." Information Processing & Management 57, no. 2 (March 2020): 102178. http://dx.doi.org/10.1016/j.ipm.2019.102178.

39

Wang, Yu, and Ming Zhu. "Saliency Prediction Based On Lightweight Attention Mechanism." Journal of Physics: Conference Series 1486 (April 2020): 072066. http://dx.doi.org/10.1088/1742-6596/1486/7/072066.

40

Jiao, Shanshan, Jiabao Wang, Guyu Hu, Zhisong Pan, Lin Du, and Jin Zhang. "Joint Attention Mechanism for Person Re-Identification." IEEE Access 7 (2019): 90497–506. http://dx.doi.org/10.1109/access.2019.2927170.

41

Yang, Qimeng, Long Yu, Shengwei Tian, and Jinmiao Song. "Attention Mechanism for Uyghur Personal Pronouns Resolution." ACM Transactions on Asian and Low-Resource Language Information Processing 19, no. 6 (November 25, 2020): 1–13. http://dx.doi.org/10.1145/3412323.

42

Gun, Li. "Advances and Application of Visual Attention Mechanism." International Journal of Data Science and Analysis 3, no. 4 (2017): 24. http://dx.doi.org/10.11648/j.ijdsa.20170304.11.

43

Krimpas, Panagiotis, and Christina Valavani. "Attention mechanism and skip-gram embedded phrases." Comparative Legilinguistics 52 (January 9, 2023): 318–50. http://dx.doi.org/10.14746/cl.52.2022.14.

Abstract:
This article examines common translation errors that occur in the translation of legal texts. In particular, it focuses on how German texts containing legal terminology are rendered into Modern Greek by the Google translation machine. Our case study is the Google-assisted translation of the original (German) version of the Constitution of the Federal Republic of Germany into Modern Greek. A training method is proposed for phrase extraction based on occurrence frequency, which is passed through the Skip-gram algorithm and then integrated into the self-attention mechanism proposed by Vaswani et al. (2017), in order to minimise human effort and contribute to the development of a robust machine translation system for multi-word legal terms and special phrases. This neural machine translation approach aims at developing vectorised phrases from large corpora and processing them for translation. The research direction is to increase the in-domain training data set and enrich the vector dimension with more information for legal concepts (domain-specific features).
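
The self-attention mechanism of Vaswani et al. (2017) that the proposed pipeline builds on reduces to one formula, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A compact PyTorch rendering, with hypothetical shapes standing in for skip-gram phrase embeddings:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (B, Lq, Lk)
    return F.softmax(scores, dim=-1) @ v            # (B, Lq, d_v)

# Self-attention over a batch of phrase embeddings (hypothetical shapes):
x = torch.randn(2, 10, 64)   # 2 sentences, 10 phrase vectors of dimension 64
out = scaled_dot_product_attention(x, x, x)
```
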
44

Ashtari, Amirsaman, Chang Wook Seo, Cholmin Kang, Sihun Cha, and Junyong Noh. "Reference Based Sketch Extraction via Attention Mechanism." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–16. http://dx.doi.org/10.1145/3550454.3555504.

Abstract:
We propose a model that extracts a sketch from a colorized image in such a way that the extracted sketch has a line style similar to a given reference sketch while preserving the visual content identically to the colorized image. Authentic sketches drawn by artists have various sketch styles to add visual interest and contribute feeling to the sketch. However, existing sketch-extraction methods generate sketches with only one style. Moreover, existing style transfer models fail to transfer sketch styles because they are mostly designed to transfer textures of a source style image instead of transferring the sparse line styles from a reference sketch. Lacking the necessary volumes of data for standard training of translation systems, at the core of our GAN-based solution is a self-reference sketch style generator that produces various reference sketches with a similar style but different spatial layouts. We use independent attention modules to detect the edges of a colorized image and reference sketch as well as the visual correspondences between them. We apply several loss terms to imitate the style and enforce sparsity in the extracted sketches. Our sketch-extraction method results in a close imitation of a reference sketch style drawn by an artist and outperforms all baseline methods. Using our method, we produce a synthetic dataset representing various sketch styles and improve the performance of auto-colorization models, in high demand in comics. The validity of our approach is confirmed via qualitative and quantitative evaluations.
45

Ren, Junhua, Guowu Zhao, Yadong Ma, De Zhao, Tao Liu, and Jun Yan. "Automatic Pavement Crack Detection Fusing Attention Mechanism." Electronics 11, no. 21 (November 6, 2022): 3622. http://dx.doi.org/10.3390/electronics11213622.

Abstract:
Pavement cracks degrade pavement performance. Without timely inspection and repair, developing cracks reduce the safety and service life of the pavement, so detecting cracks accurately plays an important role in curbing their development. In this paper, an automatic pavement crack detection method is proposed. To achieve real-time inspection, YOLOv5 was selected as the base model. Because pavement cracks are small, most deep learning-based crack detection methods struggle to reach high accuracy; to further improve accuracy, attention modules were employed. Based on self-built datasets collected in Linyi city, the performance of various crack detection models was evaluated. The results showed that adding attention modules effectively enhances crack detection. The precision of YOLOv5-CoordAtt reaches 95.27%, higher than other conventional and deep learning methods, and the result images show that the proposed method detects accurately in various situations.
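
Coordinate attention, the module behind YOLOv5-CoordAtt, factorizes pooling along the two spatial axes so the channel gates retain positional information, which helps with small objects such as cracks. A simplified sketch follows; the published module concatenates the two pooled descriptors through one shared transform, so this version keeps only the essence, and the reduction ratio is an assumption.

```python
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    """Simplified coordinate attention: pool along H and W separately so the
    channel gates keep positional information."""
    def __init__(self, c: int, reduction: int = 8):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(c, c // reduction, 1), nn.ReLU(inplace=True))
        self.to_h = nn.Conv2d(c // reduction, c, 1)
        self.to_w = nn.Conv2d(c // reduction, c, 1)

    def forward(self, x):                         # x: (B, C, H, W)
        h_desc = x.mean(dim=3, keepdim=True)      # (B, C, H, 1) pool over width
        w_desc = x.mean(dim=2, keepdim=True)      # (B, C, 1, W) pool over height
        a_h = torch.sigmoid(self.to_h(self.shared(h_desc)))  # height-wise gates
        a_w = torch.sigmoid(self.to_w(self.shared(w_desc)))  # width-wise gates
        return x * a_h * a_w
```
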
46

Zhao, Bowen, Huanlai Xing, Xinhan Wang, Fuhong Song, and Zhiwen Xiao. "Rethinking attention mechanism in time series classification." Information Sciences 627 (May 2023): 97–114. http://dx.doi.org/10.1016/j.ins.2023.01.093.

47

Zhang, Qianqian, Hongyang Wei, Jiaying Chen, Xusheng Du, and Jiong Yu. "Video Anomaly Detection Based on Attention Mechanism." Symmetry 15, no. 2 (February 16, 2023): 528. http://dx.doi.org/10.3390/sym15020528.

Abstract:
Camera surveillance is widely used in residential areas, on highways, in schools and in other public places, yet the monitoring of sudden abnormal events still depends on humans. Human anomaly monitoring not only consumes a lot of manpower and time but also produces large errors in anomaly detection. Video anomaly detection based on the AE (Auto-Encoder) is currently the dominant research approach. The model has a highly symmetrical network structure in the encoding and decoding stages; it is trained by learning standard video sequences, and anomalous events are later identified in terms of reconstruction error and prediction error. However, with limited computing power, a complex model greatly reduces detection efficiency, and unnecessary background information seriously affects detection accuracy. This paper uses an AE loaded with dynamic prototype units as the basic model. We introduce an attention mechanism to improve the feature representation ability of the model, and depthwise separable convolution operations effectively reduce the number of model parameters and the complexity. Finally, we conducted experiments on three publicly available datasets of real scenarios (UCSD Ped1, UCSD Ped2 and CUHK Avenue). The experimental results show that, compared with the baseline model, the accuracy of our model improved by 1.9%, 1.4% and 6.6%, respectively, across the three datasets. Comparison with many popular models verifies the validity of our model for anomaly detection.
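
Depthwise separable convolution, used here to shrink the model, splits a standard convolution into a per-channel spatial filter and a 1x1 channel mixer, cutting the parameter count to roughly 1/k^2 of a standard k x k convolution. A minimal PyTorch sketch:

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) convolution followed by a pointwise 1x1 mixer."""
    def __init__(self, c_in: int, c_out: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```
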
48

Yu, Yuanlong, George K. I. Mann, and Raymond G. Gosine. "A Single-Object Tracking Method for Robots Using Object-Based Visual Attention." International Journal of Humanoid Robotics 9, no. 4 (December 2012): 1250030. http://dx.doi.org/10.1142/s0219843612500302.

Abstract:
It is a challenging problem for robots to track a target in a complex environment, due to appearance changes of the target and background, large variations in motion, partial and full occlusion, motion of the camera, and so on. Humans, however, are able to cope with these difficulties by using their cognitive capabilities, mainly the visual attention and learning mechanisms. This paper therefore presents a single-object tracking method for robots based on the object-based attention mechanism. The tracking method consists of four modules: pre-attentive segmentation, top-down attentional selection, post-attentive processing and online learning of the target model. The pre-attentive segmentation module first divides the scene into uniform proto-objects. Then the top-down attention module selects one proto-object over the predicted region by using a discriminative feature of the target. The post-attentive processing module then validates the attended proto-object. If it is confirmed to be the target, it is used to obtain the complete target region. Otherwise, the recovery mechanism is automatically triggered to globally search for the target. Given the complete target region, the online learning algorithm autonomously updates the target model, which consists of appearance and saliency components. The saliency component is used to automatically select a discriminative feature for top-down attention, while the appearance component is used for bias estimation in the top-down attention module and validation in the post-attentive processing module. Experiments have shown that the proposed method outperforms attention-free algorithms when tracking a single target in cluttered and dynamically changing environments.
49

Yuan, Chun-Miao, Xue-Mei Sun, and Hu Zhao. "Speech Separation Using Convolutional Neural Network and Attention Mechanism." Discrete Dynamics in Nature and Society 2020 (July 25, 2020): 1–10. http://dx.doi.org/10.1155/2020/2196893.

Abstract:
Speech is the most important means of human communication, and it is crucial to separate the target voice from mixed sound signals. This paper proposes a speech separation model based on convolutional neural networks and an attention mechanism. The magnitude spectrum of the mixed speech signals, used as the input, is high-dimensional. Analysis of the two components shows that the convolutional neural network can effectively extract low-dimensional features and mine the spatiotemporal structure in the speech signals, while the attention mechanism reduces the loss of sequence information. The accuracy of speech separation can be improved effectively by combining the two mechanisms. Compared to the typical speech separation model DRNN-2 + discrim, this method achieves a 0.27 dB GNSDR gain and a 0.51 dB GSIR gain, which illustrates that the proposed speech separation model achieves an ideal separation effect.
50

Xue, Lanqing, Xiaopeng Li, and Nevin L. Zhang. "Not All Attention Is Needed: Gated Attention Network for Sequence Data." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6550–57. http://dx.doi.org/10.1609/aaai.v34i04.6129.

Abstract:
Although deep neural networks generally have fixed network structures, the concept of dynamic mechanism has drawn more and more attention in recent years. Attention mechanisms compute input-dependent dynamic attention weights for aggregating a sequence of hidden states. Dynamic network configuration in convolutional neural networks (CNNs) selectively activates only part of the network at a time for different inputs. In this paper, we combine the two dynamic mechanisms for text classification tasks. Traditional attention mechanisms attend to the whole sequence of hidden states for an input sentence, while in most cases not all attention is needed especially for long sequences. We propose a novel method called Gated Attention Network (GA-Net) to dynamically select a subset of elements to attend to using an auxiliary network, and compute attention weights to aggregate the selected elements. It avoids a significant amount of unnecessary computation on unattended elements, and allows the model to pay attention to important parts of the sequence. Experiments in various datasets show that the proposed method achieves better performance compared with all baseline models with global or local attention while requiring less computation and achieving better interpretability. It is also promising to extend the idea to more complex attention-based models, such as transformers and seq-to-seq models.
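
A soft-gated caricature of the GA-Net idea, assuming PyTorch: an auxiliary network scores each element and down-weights the attention logits of elements it wants to drop. Note that the actual GA-Net learns hard binary gates; the smooth gate here is a substitution to keep the sketch differentiable and short.

```python
import torch
import torch.nn as nn

class GatedAttentionPooling(nn.Module):
    """Sketch of gated attention: an auxiliary network gates each element, and
    attention is then computed (mostly) over the elements that survive."""
    def __init__(self, d: int):
        super().__init__()
        self.gate_net = nn.Sequential(
            nn.Linear(d, d // 2), nn.ReLU(), nn.Linear(d // 2, 1))
        self.attn = nn.Linear(d, 1)

    def forward(self, h):                           # h: (B, L, d) hidden states
        gate = torch.sigmoid(self.gate_net(h))      # (B, L, 1), ~0 means "drop"
        scores = self.attn(h) + torch.log(gate + 1e-9)  # gated attention logits
        w = torch.softmax(scores, dim=1)            # weights over kept elements
        return (w * h).sum(dim=1)                   # (B, d) pooled representation
```
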