Journal articles on the topic "Mechanism of attention"

For other types of publications on this topic, follow the link: Mechanism of attention.

Create a correct reference in APA, MLA, Chicago, Harvard, and other styles.

Choose a source:

Consult the top 50 journal articles for your research on the topic "Mechanism of attention."

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1. Zang, Yubin, Zhenming Yu, Kun Xu, Minghua Chen, Sigang Yang, and Hongwei Chen. "Fiber communication receiver models based on the multi-head attention mechanism." Chinese Optics Letters 21, no. 3 (2023): 030602. http://dx.doi.org/10.3788/col202321.030602.

2. Yoo, Sungwook, Hanjun Goo, and Kyuseok Shim. "Improving Review-based Attention Mechanism." KIISE Transactions on Computing Practices 27, no. 10 (October 31, 2021): 486–91. http://dx.doi.org/10.5626/ktcp.2021.27.10.486.

3. Jia, Yuening. "Attention Mechanism in Machine Translation." Journal of Physics: Conference Series 1314 (October 2019): 012186. http://dx.doi.org/10.1088/1742-6596/1314/1/012186.

4. Sieb, R. A. "A brain mechanism for attention." Medical Hypotheses 33, no. 3 (November 1990): 145–53. http://dx.doi.org/10.1016/0306-9877(90)90164-a.

5. Park, Da-Sol, and Jeong-Won Cha. "Image Caption Generation using Object Attention Mechanism." Journal of KIISE 46, no. 4 (April 30, 2019): 369–75. http://dx.doi.org/10.5626/jok.2019.46.4.369.

6. Spironelli, Chiara, Mariaelena Tagliabue, and Carlo Umiltà. "Response Selection and Attention Orienting." Experimental Psychology 56, no. 4 (January 2009): 274–82. http://dx.doi.org/10.1027/1618-3169.56.4.274.

Abstract: Recently, there has been a redirection of research efforts toward exploring the role of hemispheric lateralization in determining Simon effect asymmetries. The present study aimed at implementing a connectionist model that simulates the cognitive mechanisms implied by such asymmetries, focusing on the underlying neural structure. A left-lateralized response-selection mechanism was implemented alone (Experiment 1) or along with a right-lateralized automatic attention-orienting mechanism (Experiment 2). Both models yielded Simon effect asymmetries. However, whereas the first model showed a reversed pattern of asymmetry compared with real human data, the second model's performance strongly resembled human Simon effect asymmetries, with a significantly greater right than left Simon effect. Thus, a left-side bias in the response-selection mechanism produced a left-side-biased Simon effect, whereas a right-side bias in the attention system produced a right-side-biased Simon effect. In conclusion, the results showed that the bias of the attention system had a larger impact than the bias of the response-selection mechanism in producing Simon effect asymmetries.

7. Yin, Songlin, and Fei Tan. "YOLOv4-A: Research on Traffic Sign Detection Based on Hybrid Attention Mechanism." 電腦學刊 33, no. 6 (December 2022): 181–92. http://dx.doi.org/10.53106/199115992022123306015.

Abstract: Aiming at the problem of false detections and missed detections in traffic sign detection, an improved YOLOv4 detection algorithm is proposed. Based on the YOLOv4 algorithm, the Efficient Channel Attention (ECA) module and the Convolutional Block Attention Module (CBAM) are added to form the YOLOv4-A algorithm. At the same time, the global k-means clustering algorithm is used to regenerate smaller anchors, which makes the network converge faster and reduces the error rate. The YOLOv4-A algorithm re-calibrates the detection branch features in the channel and spatial dimensions, so that the network can focus on and enhance the effective features while suppressing interfering features, which improves the detection ability of the algorithm. Experiments on the TT100K traffic sign dataset show that the proposed algorithm yields a particularly significant improvement for small-target detection. Compared with the YOLOv4 algorithm, the precision and mAP@0.5 of the proposed algorithm are increased by 5.38% and 5.75%, respectively.
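The Efficient Channel Attention (ECA) module cited in entry 7 is compact enough to sketch in full. Below is a minimal PyTorch version of the standard ECA design (global average pooling, a 1-D convolution across channels, then a sigmoid gate), not the authors' YOLOv4-A code; the kernel size k=3 is an assumption.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: GAP + 1-D conv across channels + sigmoid gate."""
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                 # squeeze each channel: (B, C)
        w = self.conv(w.unsqueeze(1))          # local cross-channel interaction: (B, 1, C)
        w = torch.sigmoid(w).squeeze(1)        # per-channel gate in (0, 1)
        return x * w[:, :, None, None]         # re-weight the feature map

x = torch.randn(2, 64, 32, 32)
print(ECA()(x).shape)  # torch.Size([2, 64, 32, 32])
```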
8. Mao, Guojun, Guanyi Liao, Hengliang Zhu, and Bo Sun. "Multibranch Attention Mechanism Based on Channel and Spatial Attention Fusion." Mathematics 10, no. 21 (November 6, 2022): 4150. http://dx.doi.org/10.3390/math10214150.

Abstract: Recently, it has been demonstrated that the performance of an object detection network can be improved by embedding an attention module into it. In this work, we propose a lightweight and effective attention mechanism named multibranch attention (M3Att). For the input feature map, M3Att first uses a grouped convolutional layer with a pyramid structure for feature extraction, then calculates channel attention and spatial attention simultaneously and fuses them to obtain more complementary features. It is a "plug and play" module that can easily be added to an object detection network and significantly improves its performance with a small increase in parameters. We demonstrate the effectiveness of M3Att on various challenging object detection tasks, including PASCAL VOC2007, PASCAL VOC2012, KITTI, and the Zhanjiang Underwater Robot Competition. The experimental results show that this method dramatically improves object detection, especially on PASCAL VOC2007, where the mAP of the original network increased by 4.93% when the module was embedded in the YOLOv4 (You Only Look Once v4) network.
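Entry 8 fuses channel attention (which feature maps matter) with spatial attention (where they matter). A generic fusion of the two, in the spirit of M3Att but not the paper's exact module, can be sketched as follows; the reduction ratio and the 7x7 spatial kernel are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialFusion(nn.Module):
    """Generic fusion of channel attention and spatial attention over one feature map."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                            # x: (B, C, H, W)
        ca = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3))))     # (B, C) channel gates
        avg = x.mean(dim=1, keepdim=True)                            # (B, 1, H, W)
        mx = x.amax(dim=1, keepdim=True)                             # (B, 1, H, W)
        sa = torch.sigmoid(self.spatial_conv(torch.cat([avg, mx], 1)))  # (B, 1, H, W)
        # fuse: broadcast both gates over the input (complementary re-weighting)
        return x * ca[:, :, None, None] * sa

x = torch.randn(2, 32, 16, 16)
print(ChannelSpatialFusion(32)(x).shape)  # torch.Size([2, 32, 16, 16])
```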
9. V, Malge Shraddha. "Generating Image Descriptions using Attention Mechanism." International Journal for Research in Applied Science and Engineering Technology 9, no. 3 (March 31, 2021): 1047–56. http://dx.doi.org/10.22214/ijraset.2021.33397.

10. Yakura, Hiromu, Shinnosuke Shinozaki, Reon Nishimura, Yoshihiro Oyama, and Jun Sakuma. "Neural malware analysis with attention mechanism." Computers & Security 87 (November 2019): 101592. http://dx.doi.org/10.1016/j.cose.2019.101592.

11. Wang, Longjuan, Chunjie Cao, Binghui Zou, Jun Ye, and Jin Zhang. "License Plate Recognition via Attention Mechanism." Computers, Materials & Continua 75, no. 1 (2023): 1801–14. http://dx.doi.org/10.32604/cmc.2023.032785.

12. Tao, Xueying, Huaizong Shao, Qiang Li, Ye Pan, and Zhongqi Fu. "External Attention Mechanism-Based Modulation Classification." Journal of Physics: Conference Series 2425, no. 1 (February 1, 2023): 012051. http://dx.doi.org/10.1088/1742-6596/2425/1/012051.

Abstract: This paper considers the modulation classification of radio-frequency (RF) signals. An external attention mechanism-based convolutional neural network (EACNN) is proposed. Thanks to the external attention layers, the EACNN can capture the potential correlations of different modulation data, which helps reduce computational consumption and memory costs during training. Moreover, to account for the variation of the signals induced by channel fading, we further propose a customized batch normalization (BN) layer in the EACNN to improve classification accuracy with less training time. Numerical experiments on the RML2016.a dataset show that the proposed method outperforms the baseline method CNN2 by 7% in terms of classification accuracy.
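The external attention layer used in entry 12 replaces the quadratic token-to-token attention map with attention over a small learned memory shared across inputs. A minimal sketch of the standard external-attention design, with model and memory sizes chosen arbitrarily:

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """External attention: tokens attend over a small learned external memory
    (two linear layers) instead of over the sequence itself -> linear complexity."""
    def __init__(self, d_model: int = 64, n_mem: int = 32):
        super().__init__()
        self.mk = nn.Linear(d_model, n_mem, bias=False)   # memory of keys
        self.mv = nn.Linear(n_mem, d_model, bias=False)   # memory of values

    def forward(self, x):                       # x: (B, N, d_model)
        attn = self.mk(x)                       # (B, N, n_mem) affinities to memory slots
        attn = attn.softmax(dim=-1)             # normalize over memory slots
        attn = attn / (attn.sum(dim=1, keepdim=True) + 1e-9)  # double normalization
        return self.mv(attn)                    # (B, N, d_model)

x = torch.randn(4, 128, 64)   # e.g., 128 I/Q time steps embedded in 64 dims
print(ExternalAttention()(x).shape)  # torch.Size([4, 128, 64])
```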
13. Yeom, Chanho, Jieun Lee, and Sanghyun Park. "OANet: Ortho-Attention Net Based on Attention Mechanism for Database Performance Prediction." Journal of KIISE 49, no. 11 (November 30, 2022): 1026–31. http://dx.doi.org/10.5626/jok.2022.49.11.1026.

14. Himabindu, Dakshayani D., and Praveen S. Kumar. "A Streamlined Attention Mechanism for Image Classification and Fine-Grained Visual Recognition." MENDEL 27, no. 2 (December 21, 2021): 59–67. http://dx.doi.org/10.13164/mendel.2021.2.059.

Abstract: In recent advancements, attention mechanisms in deep learning have played a vital role in producing better results for computer vision tasks. Attention-based work spans image classification, fine-grained visual recognition, image captioning, video captioning, and object detection and recognition. Global and local attention are the two attention-based mechanisms that determine which parts of the input are attended to. Within this scheme, channel attention selects the most informative channels among a produced block of channels, and spatial attention selects the regions in space to focus on. We propose a streamlined attention block module that enhances feature-based learning with a small number of additional layers: a GAP layer followed by a linear layer, with global second-order pooling (GSoP) incorporated after every layer of the encoder. In our experiments, this mechanism captured longer-range dependencies. We evaluated the model on the CIFAR-10, CIFAR-100, and FGVC-Aircraft datasets for fine-grained visual recognition and achieved a state-of-the-art result on FGVC-Aircraft with an accuracy of 97%.
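Entry 14 describes its streamlined block as "a GAP layer followed by a linear layer". A minimal sketch of such a squeeze-style channel gate (without the second-order GSoP pooling the paper adds) could look like this:

```python
import torch
import torch.nn as nn

class GapLinearGate(nn.Module):
    """Minimal 'GAP + linear' attention block: global average pooling summarizes
    each channel, a linear layer scores it, and a sigmoid gate re-weights the map."""
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, x):                        # x: (B, C, H, W)
        s = x.mean(dim=(2, 3))                   # GAP squeeze: (B, C)
        g = torch.sigmoid(self.fc(s))            # per-channel gate
        return x * g[:, :, None, None]

x = torch.randn(2, 64, 8, 8)
print(GapLinearGate(64)(x).shape)  # torch.Size([2, 64, 8, 8])
```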
15. Yin, Wenpeng, and Hinrich Schütze. "Attentive Convolution: Equipping CNNs with RNN-style Attention Mechanisms." Transactions of the Association for Computational Linguistics 6 (December 2018): 687–702. http://dx.doi.org/10.1162/tacl_a_00249.

Abstract: In NLP, convolutional neural networks (CNNs) have benefited less than recurrent neural networks (RNNs) from attention mechanisms. We hypothesize that this is because attention in CNNs has mainly been implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution). Convolution is the differentiator of CNNs in that it can powerfully model the higher-level representation of a word by taking into account its local fixed-size context in the input text t_x. In this work, we propose an attentive convolution network, ATTCONV. It extends the context scope of the convolution operation, deriving higher-level features for a word not only from local context but also from information extracted from nonlocal context by the attention mechanism commonly used in RNNs. This nonlocal context can come (i) from parts of the input text t_x that are distant or (ii) from extra (i.e., external) contexts t_y. Experiments on sentence modeling with zero context (sentiment analysis), single context (textual entailment), and multiple contexts (claim verification) demonstrate the effectiveness of ATTCONV in sentence representation learning with the incorporation of context. In particular, attentive convolution outperforms attentive pooling and is a strong competitor to popular attentive RNNs.
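The core idea of entry 15, deriving per-word features from an attention-weighted nonlocal context before convolving, can be sketched as below. This is a simplified illustration of attentive convolution, not the published ATTCONV implementation; all dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveConv1d(nn.Module):
    """Sketch of attentive convolution: each position also receives an
    attention-weighted summary of a (possibly external) context sequence,
    concatenated with the local features before the convolution."""
    def __init__(self, d: int, kernel: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(2 * d, d, kernel, padding=kernel // 2)

    def forward(self, tx, ty):                    # tx: (B, N, d), ty: (B, M, d)
        scores = tx @ ty.transpose(1, 2)          # (B, N, M) dot-product energies
        ctx = F.softmax(scores, dim=-1) @ ty      # (B, N, d) nonlocal context
        h = torch.cat([tx, ctx], dim=-1)          # (B, N, 2d) local + nonlocal
        return self.conv(h.transpose(1, 2)).transpose(1, 2)  # (B, N, d)

tx, ty = torch.randn(2, 10, 16), torch.randn(2, 7, 16)
print(AttentiveConv1d(16)(tx, ty).shape)  # torch.Size([2, 10, 16])
```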
16. Mathôt, Sebastiaan, and Jan Theeuwes. "Visual attention and stability." Philosophical Transactions of the Royal Society B: Biological Sciences 366, no. 1564 (February 27, 2011): 516–27. http://dx.doi.org/10.1098/rstb.2010.0187.

Abstract: In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world.

17. Zheng, Menghua, Jiayu Xu, Yinjie Shen, Chunwei Tian, Jian Li, Lunke Fei, Ming Zong, and Xiaoyang Liu. "Attention-based CNNs for Image Classification: A Survey." Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012068. http://dx.doi.org/10.1088/1742-6596/2171/1/012068.

Abstract: Deep learning techniques such as CNNs can learn powerful context information and have been widely applied in image recognition. However, deep CNNs may rely on large width and depth, which increases computational costs. Attention mechanisms fused into CNNs can address this problem. In this paper, we summarize how attention mechanisms act in CNNs for image classification. First, the survey reviews the development of CNNs for image classification. Then, we illustrate the basics of CNNs and of attention mechanisms for image classification. Next, we present the main architectures of CNNs with attention, public and collected datasets, and experimental results in image classification. Finally, we point out potential research directions and challenges for attention-based image classification and summarize the paper.

18. Chou, Kenny F., and Kamal Sen. "AIM: A network model of attention in auditory cortex." PLOS Computational Biology 17, no. 8 (August 27, 2021): e1009356. http://dx.doi.org/10.1371/journal.pcbi.1009356.

Abstract: Attentional modulation of cortical networks is critical for the cognitive flexibility required to process complex scenes. Current theoretical frameworks for attention are based almost exclusively on studies in visual cortex, where attentional effects are typically modest and excitatory. In contrast, attentional effects in auditory cortex can be large and suppressive. A theoretical framework for explaining attentional effects in auditory cortex is lacking, preventing a broader understanding of cortical mechanisms underlying attention. Here, we present a cortical network model of attention in primary auditory cortex (A1). A key mechanism in our network is attentional inhibitory modulation (AIM) of cortical inhibitory neurons. In this mechanism, top-down inhibitory neurons disinhibit bottom-up cortical circuits, a prominent circuit motif observed in sensory cortex. Our results reveal that the same underlying mechanisms in the AIM network can explain diverse attentional effects on both spatial and frequency tuning in A1. We find that a dominant effect of disinhibition on cortical tuning is suppressive, consistent with experimental observations. Functionally, the AIM network may play a key role in solving the cocktail party problem. We demonstrate how attention can guide the AIM network to monitor an acoustic scene, select a specific target, or switch to a different target, providing flexible outputs for solving the cocktail party problem.
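The disinhibition motif at the heart of entry 18's AIM model (top-down inhibitory units suppress local inhibitory units, releasing the excitatory circuit they gate) can be illustrated with a toy rate model. This is purely illustrative, not the published network; all weights and drives here are made up.

```python
import numpy as np

# Toy rate model of the disinhibition motif described in the AIM abstract:
# a top-down inhibitory unit suppresses a local inhibitory population, which in
# turn releases (disinhibits) the bottom-up excitatory units it normally gates.
relu = lambda v: np.maximum(v, 0.0)

stimulus = np.array([1.0, 1.0])           # bottom-up drive to two excitatory units
w_ie = 0.8                                # local inhibition onto excitatory units
for top_down in (0.0, 1.5):               # attention off vs. on
    inh = relu(1.0 - top_down)            # local inhibitory rate, suppressed by attention
    exc = relu(stimulus - w_ie * inh)     # excitatory rates after local inhibition
    print(f"top-down={top_down:.1f} -> inhibitory={inh:.2f}, excitatory={exc}")
```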
19. Xuezhi, Xiang, Syed Masroor Ali, and Ghulam Farid. "Optical Flow Estimation Using Channel Attention Mechanism." Journal of Flow Visualization and Image Processing 26, no. 4 (2019): 371–93. http://dx.doi.org/10.1615/jflowvisimageproc.2019031771.

20. Liu, Hankun, Daojing He, and Sammy Chan. "Fraudulent News Headline Detection with Attention Mechanism." Computational Intelligence and Neuroscience 2021 (March 15, 2021): 1–7. http://dx.doi.org/10.1155/2021/6679661.

Abstract: E-mail systems and online social media platforms are ideal places for news dissemination, but a serious problem is the spread of fraudulent news headlines. Previously, detecting fraudulent news headlines relied mainly on laborious manual review. With the total number of news headlines as high as 1.48 million, manual review becomes practically infeasible. For news headline text data, the attention mechanism has powerful processing capability. In this paper, we propose models based on LSTM and an attention layer, which fit the context of news headlines efficiently and can detect fraudulent news headlines quickly and accurately. Based on the multi-head attention mechanism, eschewing recurrent units and reducing sequential computation, we build a Mini-Transformer deep learning model to further improve the classification performance.
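A generic version of the LSTM-plus-attention classifier family that entry 20 builds on is sketched below; the vocabulary, layer sizes, and two-class output are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AttentiveLSTMClassifier(nn.Module):
    """LSTM encoder + additive attention pooling for binary headline classification."""
    def __init__(self, vocab: int = 10000, emb: int = 128, hid: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hid, 1)        # attention energy per time step
        self.out = nn.Linear(2 * hid, 2)          # genuine vs. fraudulent

    def forward(self, tokens):                    # tokens: (B, T) integer ids
        h, _ = self.lstm(self.embed(tokens))      # (B, T, 2*hid) hidden states
        a = torch.softmax(self.score(h), dim=1)   # (B, T, 1) attention weights
        pooled = (a * h).sum(dim=1)               # attention-weighted sentence vector
        return self.out(pooled)                   # (B, 2) class logits

logits = AttentiveLSTMClassifier()(torch.randint(0, 10000, (4, 20)))
print(logits.shape)  # torch.Size([4, 2])
```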
21. Yang, Kehua, Yaodong Wang, Wei Zhang, Jiqing Yao, and Yuquan Le. "Keyphrase Generation Based on Self-Attention Mechanism." Computers, Materials & Continua 61, no. 2 (2019): 569–81. http://dx.doi.org/10.32604/cmc.2019.05952.

22. Wang, Dongli, Shengliang Xiang, Yan Zhou, Jinzhen Mu, Haibin Zhou, and Richard Irampaye. "Multiple-Attention Mechanism Network for Semantic Segmentation." Sensors 22, no. 12 (June 13, 2022): 4477. http://dx.doi.org/10.3390/s22124477.

Abstract: Contextual information and the dependencies between dimensions are vital in image semantic segmentation. In this paper, we propose a multiple-attention mechanism network (MANet) for semantic segmentation in a very effective and efficient way. Concretely, the contributions are as follows: (1) a novel dual-attention mechanism for capturing feature dependencies in the spatial and channel dimensions, where the adjacent-position attention captures the dependencies between pixels well; (2) a new cross-dimensional interactive attention feature-fusion module, which strengthens the fusion of fine location structure information in low-level features and category semantic information in high-level features. We conduct extensive experiments on semantic segmentation benchmarks, including the PASCAL VOC 2012 and Cityscapes datasets. Our MANet achieves mIoU scores of 75.5% and 72.8% on the PASCAL VOC 2012 and Cityscapes datasets, respectively. The effectiveness of the network is higher than that of previous popular semantic segmentation networks under the same conditions.
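The position-attention branch of a dual-attention design like entry 22's MANet lets every pixel aggregate features from all other pixels, weighted by similarity. A minimal sketch of such a branch (in the style of non-local/dual-attention modules, not MANet's exact code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAttention(nn.Module):
    """Sketch of a position (spatial self-) attention branch: each pixel attends
    to all pixels, and the result is added back through a learned residual weight."""
    def __init__(self, c: int):
        super().__init__()
        self.q = nn.Conv2d(c, c // 8, 1)
        self.k = nn.Conv2d(c, c // 8, 1)
        self.v = nn.Conv2d(c, c, 1)
        self.gamma = nn.Parameter(torch.zeros(1))    # learned residual weight

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)     # (B, HW, C//8) queries
        k = self.k(x).flatten(2)                     # (B, C//8, HW) keys
        attn = F.softmax(q @ k, dim=-1)              # (B, HW, HW) pixel affinities
        v = self.v(x).flatten(2)                     # (B, C, HW) values
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                  # residual connection

x = torch.randn(2, 32, 16, 16)
print(PositionAttention(32)(x).shape)  # torch.Size([2, 32, 16, 16])
```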
23. Ren, Feng-lei, Hai-bo Zhou, Lu Yang, and Xin He. "Lane detection based on dual attention mechanism." Chinese Optics 15 (2022): 1–9. http://dx.doi.org/10.37188/co.2022-0033.

24. An, XuDong, Lei Zhao, Han Wu, and QinJuan Zhang. "Channel estimation algorithm based on attention mechanism." Journal of Physics: Conference Series 2290, no. 1 (June 1, 2022): 012112. http://dx.doi.org/10.1088/1742-6596/2290/1/012112.

Abstract: As the key to wireless communication, channel estimation has become a hot research topic in recent years. In this paper, we propose a deep learning method for channel estimation based on a deconvolutional network and a dilated convolutional network, addressing the problems that the performance of traditional channel estimation algorithms in orthogonal frequency-division multiplexing (OFDM) systems can hardly meet the communication requirements of complex scenarios and is greatly affected by noise. The method constructs a lightweight deconvolutional network using the correlation of channels and achieves channel interpolation and estimation step by step with a few layers of deconvolutional operations, yielding channel estimation with low complexity. To improve the estimation performance, a dilated convolutional network is further constructed to suppress channel noise and improve the accuracy of channel estimation. The simulation results show that the proposed method has lower estimation error and lower complexity than traditional methods under different signal-to-noise ratios (SNRs).
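Entry 24's two building blocks, deconvolution for pilot-grid interpolation and dilated convolution for denoising, can be sketched as follows; the OFDM grid sizes and layer widths are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the two building blocks the abstract describes (shapes are assumptions):
# a transposed convolution upsamples a coarse pilot-based channel estimate to the
# full resource grid, and dilated convolutions widen the receptive field to
# suppress noise without adding depth.
upsample = nn.ConvTranspose2d(2, 2, kernel_size=4, stride=4)   # 2 = real/imag parts
denoise = nn.Sequential(
    nn.Conv2d(2, 16, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
    nn.Conv2d(16, 2, kernel_size=3, padding=2, dilation=2))

pilots = torch.randn(1, 2, 18, 2)        # coarse LS estimate at pilot positions
grid = upsample(pilots)                  # interpolate to the full grid (1, 2, 72, 8)
estimate = denoise(grid)                 # refined channel estimate, same shape
print(grid.shape, estimate.shape)
```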
25. Wang, Chiyu, Hong Li, Xinrong Li, Feifei Hou, and Xun Hu. "Guided attention mechanism: Training network more efficiently." Journal of Intelligent & Fuzzy Systems 38, no. 2 (February 6, 2020): 2323–35. http://dx.doi.org/10.3233/jifs-191257.

26. 余, 晨. "Image Tampering Detection Based on Attention Mechanism." Computer Science and Application 12, no. 3 (2022): 729–38. http://dx.doi.org/10.12677/csa.2022.123074.

27. Yu, Mingfei, Yukio Miyasaka, and Masahiro Fujita. "Parallel Scheduling Attention Mechanism: Generalization and Optimization." IPSJ Transactions on System LSI Design Methodology 15 (2022): 2–15. http://dx.doi.org/10.2197/ipsjtsldm.15.2.

28. Ahmad Khan, Wasim, Hafiz Usman Akmal, Ahmad Ullah, Aqdas Malik, Sagheer Abbas, Abdullah Ahmad, and Abdullah Farooq. "Intelligent Virtual Security System using Attention Mechanism." ICST Transactions on Scalable Information Systems 5, no. 16 (April 13, 2018): 154473. http://dx.doi.org/10.4108/eai.13-4-2018.154473.

29. Daihong, Jiang, Hu Yuanzheng, Dai Lei, and Peng Jin. "Facial Expression Recognition Based on Attention Mechanism." Scientific Programming 2021 (March 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/6624251.

Abstract: At present, traditional facial expression recognition methods based on convolutional neural networks use local ideas for feature expression, which makes the model inefficient at capturing dependencies between long-range pixels and leads to poor performance in facial expression recognition. To solve these problems, this paper combines a self-attention mechanism with a residual network and proposes a new facial expression recognition model based on a global operation idea. The paper first introduces the self-attention mechanism on top of the residual network, finding the relative importance of a location by computing a weighted average over all location pixels; it then introduces channel attention to learn different features in the channel domain and to focus on the interactive features in different channels, improving robustness; finally, it merges the self-attention and channel attention mechanisms to increase the model's ability to extract globally important features. The accuracy on the CK+ and FER2013 datasets is 97.89% and 74.15%, respectively, which confirms the effectiveness and superiority of the model in extracting global features.

30. Gul, M. Shahzeb Khan, M. Umair Mukati, Michel Bätz, Søren Forchhammer, and Joachim Keinert. "Attention Mechanism-Based Light-Field View Synthesis." IEEE Access 10 (2022): 7895–913. http://dx.doi.org/10.1109/access.2022.3142949.

31. Li, Weiqian, and Bugao Xu. "Aspect-Based Fashion Recommendation With Attention Mechanism." IEEE Access 8 (2020): 141814–23. http://dx.doi.org/10.1109/access.2020.3013639.

32. Zhu, Yaling, Jungang Yang, Xinpu Deng, Chao Xiao, and Wei An. "Infrared Pedestrian Detection Based on Attention Mechanism." Journal of Physics: Conference Series 1634 (September 2020): 012032. http://dx.doi.org/10.1088/1742-6596/1634/1/012032.

33. Wang, Yuehuan. "Small-target predetection with an attention mechanism." Optical Engineering 41, no. 4 (April 1, 2002): 872. http://dx.doi.org/10.1117/1.1459054.

34. Bandera, Juan Pedro, R. Marfil, Antonio Jesús Palomino, Ricardo Vázquez-Martín, and Antonio Bandera. "Visual Attention Mechanism for a Social Robot." Applied Bionics and Biomechanics 9, no. 4 (2012): 409–25. http://dx.doi.org/10.1155/2012/320850.

Abstract: This paper describes a visual perception system for a social robot. The central part of this system is an artificial attention mechanism that discriminates the most relevant information from all the visual information perceived by the robot. It is composed of three stages. At the preattentive stage, the concept of saliency is implemented based on 'proto-objects' [37]. From these objects, different saliency maps are generated. Then, the semiattentive stage identifies and tracks significant items according to the tasks to be accomplished. This tracking process makes it possible to implement 'inhibition of return'. Finally, the attentive stage fixes the field of attention on the most relevant object depending on the behaviours to be carried out. Three behaviours have been implemented and tested which allow the robot to detect visual landmarks in an initially unknown environment, and to recognize and capture the upper-body motion of people interested in interacting with it.

35. Eisenberg, Nancy. "An explanatory mechanism that merits more attention." Behavioral and Brain Sciences 14, no. 4 (December 1991): 749. http://dx.doi.org/10.1017/s0140525x00072319.

36. Sang, Hai-Feng, Zi-Zhen Chen, and Da-Kuo He. "Human motion prediction based on attention mechanism." Multimedia Tools and Applications 79, no. 9–10 (December 6, 2019): 5529–44. http://dx.doi.org/10.1007/s11042-019-08269-7.

37. Neumann, Odmar, and Ingrid Scharlau. "Visual attention and the mechanism of metacontrast." Psychological Research 71, no. 6 (June 8, 2006): 626–33. http://dx.doi.org/10.1007/s00426-006-0061-7.

38. Liu, Maofu, Lingjun Li, Huijun Hu, Weili Guan, and Jing Tian. "Image caption generation with dual attention mechanism." Information Processing & Management 57, no. 2 (March 2020): 102178. http://dx.doi.org/10.1016/j.ipm.2019.102178.

39. Wang, Yu, and Ming Zhu. "Saliency Prediction Based on Lightweight Attention Mechanism." Journal of Physics: Conference Series 1486 (April 2020): 072066. http://dx.doi.org/10.1088/1742-6596/1486/7/072066.

40. Jiao, Shanshan, Jiabao Wang, Guyu Hu, Zhisong Pan, Lin Du, and Jin Zhang. "Joint Attention Mechanism for Person Re-Identification." IEEE Access 7 (2019): 90497–506. http://dx.doi.org/10.1109/access.2019.2927170.

41. Yang, Qimeng, Long Yu, Shengwei Tian, and Jinmiao Song. "Attention Mechanism for Uyghur Personal Pronouns Resolution." ACM Transactions on Asian and Low-Resource Language Information Processing 19, no. 6 (November 25, 2020): 1–13. http://dx.doi.org/10.1145/3412323.

42. Gun, Li. "Advances and Application of Visual Attention Mechanism." International Journal of Data Science and Analysis 3, no. 4 (2017): 24. http://dx.doi.org/10.11648/j.ijdsa.20170304.11.

43. Krimpas, Panagiotis, and Christina Valavani. "Attention mechanism and skip-gram embedded phrases." Comparative Legilinguistics 52 (January 9, 2023): 318–50. http://dx.doi.org/10.14746/cl.52.2022.14.

Abstract: This article examines common translation errors that occur in the translation of legal texts. In particular, it focuses on how German texts containing legal terminology are rendered into Modern Greek by Google's machine translation system. Our case study is the Google-assisted translation of the original (German) version of the Constitution of the Federal Republic of Germany into Modern Greek. A training method is proposed for phrase extraction based on occurrence frequency, which goes through the Skip-gram algorithm and is then integrated into the self-attention mechanism proposed by Vaswani et al. (2017), in order to minimise human effort and contribute to the development of a robust machine translation system for multi-word legal terms and special phrases. This neural machine translation approach aims at developing vectorised phrases from large corpora and processing them for translation. The research direction is to increase the in-domain training data set and enrich the vector dimension with more information for legal concepts (domain-specific features).
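Entry 43 plugs phrase embeddings into the self-attention mechanism of Vaswani et al. (2017). The scaled dot-product core of that mechanism, shown single-head for brevity and with toy sizes (all assumptions), is:

```python
import math
import torch
import torch.nn.functional as F

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention as in Vaswani et al. (2017), single head."""
    q, k, v = x @ wq, x @ wk, x @ wv                 # project tokens to (N, d_k)
    scores = q @ k.T / math.sqrt(k.shape[-1])        # (N, N) similarities, scaled
    return F.softmax(scores, dim=-1) @ v             # weighted sum of values

d_model, d_k, n_tokens = 32, 16, 6                   # toy sizes (assumptions)
x = torch.randn(n_tokens, d_model)                   # e.g., embedded phrase vectors
wq, wk, wv = (torch.randn(d_model, d_k) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)           # torch.Size([6, 16])
```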
44. Ashtari, Amirsaman, Chang Wook Seo, Cholmin Kang, Sihun Cha, and Junyong Noh. "Reference Based Sketch Extraction via Attention Mechanism." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–16. http://dx.doi.org/10.1145/3550454.3555504.

Abstract: We propose a model that extracts a sketch from a colorized image in such a way that the extracted sketch has a line style similar to a given reference sketch while preserving the visual content identically to the colorized image. Authentic sketches drawn by artists have various sketch styles that add visual interest and contribute feeling to the sketch. However, existing sketch-extraction methods generate sketches with only one style. Moreover, existing style transfer models fail to transfer sketch styles because they are mostly designed to transfer textures of a source style image instead of transferring the sparse line styles from a reference sketch. Because the volumes of data needed for standard training of translation systems are lacking, the core of our GAN-based solution is a self-reference sketch style generator that produces various reference sketches with a similar style but different spatial layouts. We use independent attention modules to detect the edges of a colorized image and reference sketch as well as the visual correspondences between them. We apply several loss terms to imitate the style and enforce sparsity in the extracted sketches. Our sketch-extraction method results in a close imitation of a reference sketch style drawn by an artist and outperforms all baseline methods. Using our method, we produce a synthetic dataset representing various sketch styles and improve the performance of auto-colorization models, in high demand in comics. The validity of our approach is confirmed via qualitative and quantitative evaluations.

45. Ren, Junhua, Guowu Zhao, Yadong Ma, De Zhao, Tao Liu, and Jun Yan. "Automatic Pavement Crack Detection Fusing Attention Mechanism." Electronics 11, no. 21 (November 6, 2022): 3622. http://dx.doi.org/10.3390/electronics11213622.

Abstract: Pavement cracks degrade pavement performance. Without timely inspection and repair, as the cracks develop, the safety and service life of the pavement decrease. To curb the development of pavement cracks, detecting them accurately plays an important role. In this paper, an automatic pavement crack detection method is proposed. To achieve real-time inspection, YOLOv5 was selected as the base model. Because pavement cracks are small, most deep learning-based crack detection methods struggle to reach high accuracy; to improve it, attention modules were employed. Based on self-built datasets collected in Linyi city, the performance of various crack detection models was evaluated. The results showed that adding attention modules can effectively enhance crack detection: the precision of YOLOv5-CoordAtt reaches 95.27%, higher than that of other conventional and deep learning methods, and the resulting images show that the proposed method detects cracks accurately under various conditions.
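The coordinate attention (CoordAtt) module named in entry 45 factorizes pooling along the two spatial axes so that channel gates retain positional information, which suits elongated objects such as cracks. A simplified sketch follows (the published module additionally uses batch normalization and a hard-swish activation):

```python
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    """Sketch of coordinate attention: pool along each spatial axis separately so
    the channel gates keep positional information."""
    def __init__(self, c: int, reduction: int = 8):
        super().__init__()
        mid = max(c // reduction, 4)
        self.shared = nn.Sequential(nn.Conv2d(c, mid, 1), nn.ReLU())
        self.to_h = nn.Conv2d(mid, c, 1)
        self.to_w = nn.Conv2d(mid, c, 1)

    def forward(self, x):                               # x: (B, C, H, W)
        b, c, h, w = x.shape
        ph = x.mean(dim=3, keepdim=True)                # (B, C, H, 1) pool over width
        pw = x.mean(dim=2, keepdim=True)                # (B, C, 1, W) pool over height
        y = self.shared(torch.cat([ph, pw.transpose(2, 3)], dim=2))  # (B, mid, H+W, 1)
        yh, yw = y.split([h, w], dim=2)
        ah = torch.sigmoid(self.to_h(yh))                  # (B, C, H, 1) row gates
        aw = torch.sigmoid(self.to_w(yw.transpose(2, 3)))  # (B, C, 1, W) column gates
        return x * ah * aw

x = torch.randn(2, 32, 20, 12)
print(CoordAtt(32)(x).shape)  # torch.Size([2, 32, 20, 12])
```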
46. Zhao, Bowen, Huanlai Xing, Xinhan Wang, Fuhong Song, and Zhiwen Xiao. "Rethinking attention mechanism in time series classification." Information Sciences 627 (May 2023): 97–114. http://dx.doi.org/10.1016/j.ins.2023.01.093.

47. Zhang, Qianqian, Hongyang Wei, Jiaying Chen, Xusheng Du, and Jiong Yu. "Video Anomaly Detection Based on Attention Mechanism." Symmetry 15, no. 2 (February 16, 2023): 528. http://dx.doi.org/10.3390/sym15020528.

Abstract: Camera surveillance is widely used in residential areas, on highways, in schools, and in other public places, but monitoring and scanning for sudden abnormal events depend on humans. Manual anomaly monitoring not only consumes a lot of manpower and time but is also error-prone. Video anomaly detection based on the auto-encoder (AE) is currently the dominant research approach; the model has a highly symmetrical network structure in the encoding and decoding stages, is trained on normal video sequences, and anomalous events are later identified in terms of reconstruction error and prediction error. However, where computing power is limited, a complex model greatly reduces detection efficiency, and unnecessary background information seriously affects detection accuracy. This paper uses an AE equipped with dynamic prototype units as the basic model. We introduce an attention mechanism to improve the feature representation ability of the model, and depthwise separable convolution effectively reduces the number of model parameters and the complexity. Finally, we conducted experiments on three publicly available real-scene datasets (UCSD Ped1, UCSD Ped2, and CUHK Avenue). The experimental results show that, compared with the baseline model, the accuracy of our model improved by 1.9%, 1.4%, and 6.6%, respectively, across the three datasets. Compared with many popular models, the validity of our model for anomaly detection is verified.
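Entry 47 cuts parameters with depthwise separable convolution. The decomposition and its parameter saving can be shown in a few lines; the channel counts here are arbitrary.

```python
import torch
import torch.nn as nn

# Depthwise separable convolution, as used in the abstract to cut parameters:
# a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel mix.
c_in, c_out = 64, 128
standard = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
separable = nn.Sequential(
    nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in),  # depthwise
    nn.Conv2d(c_in, c_out, kernel_size=1))                         # pointwise

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))   # ~73k vs. ~9k parameters
x = torch.randn(1, c_in, 32, 32)
print(separable(x).shape)                  # torch.Size([1, 128, 32, 32])
```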
48. Yu, Yuanlong, George K. I. Mann, and Raymond G. Gosine. "A Single-Object Tracking Method for Robots Using Object-Based Visual Attention." International Journal of Humanoid Robotics 9, no. 4 (December 2012): 1250030. http://dx.doi.org/10.1142/s0219843612500302.

Abstract: Tracking a target in a complex environment is quite challenging for robots because of appearance changes of the target and background, large variations in motion, partial and full occlusion, motion of the camera, and so on. Humans, however, are able to cope with these difficulties by using their cognitive capabilities, mainly visual attention and learning mechanisms. This paper therefore presents a single-object tracking method for robots based on the object-based attention mechanism. The tracking method consists of four modules: pre-attentive segmentation, top-down attentional selection, post-attentive processing, and online learning of the target model. The pre-attentive segmentation module first divides the scene into uniform proto-objects. Then the top-down attention module selects one proto-object over the predicted region by using a discriminative feature of the target. The post-attentive processing module then validates the attended proto-object. If it is confirmed to be the target, it is used to obtain the complete target region; otherwise, a recovery mechanism is automatically triggered to search globally for the target. Given the complete target region, the online learning algorithm autonomously updates the target model, which consists of appearance and saliency components. The saliency component is used to automatically select a discriminative feature for top-down attention, while the appearance component is used for bias estimation in the top-down attention module and for validation in the post-attentive processing module. Experiments have shown that the proposed method outperforms algorithms that do not use attention when tracking a single target in cluttered and dynamically changing environments.

49. Yuan, Chun-Miao, Xue-Mei Sun, and Hu Zhao. "Speech Separation Using Convolutional Neural Network and Attention Mechanism." Discrete Dynamics in Nature and Society 2020 (July 25, 2020): 1–10. http://dx.doi.org/10.1155/2020/2196893.

Abstract: Speech is the most important means of human communication, and it is crucial to separate the target voice from mixed sound signals. This paper proposes a speech separation model based on convolutional neural networks and an attention mechanism. The magnitude spectrum of the mixed speech signals, used as the input, is high-dimensional. Analysis of the two mechanisms shows that the convolutional neural network can effectively extract low-dimensional features and mine the spatiotemporal structure information in the speech signals, while the attention mechanism reduces the loss of sequence information. The accuracy of speech separation can be improved effectively by combining the two. Compared with the typical speech separation model DRNN-2 + discrim, this method achieves a 0.27 dB GNSDR gain and a 0.51 dB GSIR gain, which shows that the proposed speech separation model achieves an ideal separation effect.

50. Xue, Lanqing, Xiaopeng Li, and Nevin L. Zhang. "Not All Attention Is Needed: Gated Attention Network for Sequence Data." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6550–57. http://dx.doi.org/10.1609/aaai.v34i04.6129.

Abstract: Although deep neural networks generally have fixed network structures, the concept of dynamic mechanisms has drawn more and more attention in recent years. Attention mechanisms compute input-dependent dynamic attention weights for aggregating a sequence of hidden states. Dynamic network configuration in convolutional neural networks (CNNs) selectively activates only part of the network at a time for different inputs. In this paper, we combine the two dynamic mechanisms for text classification tasks. Traditional attention mechanisms attend to the whole sequence of hidden states for an input sentence, while in most cases not all attention is needed, especially for long sequences. We propose a novel method called Gated Attention Network (GA-Net) that dynamically selects a subset of elements to attend to using an auxiliary network and computes attention weights to aggregate the selected elements. It avoids a significant amount of unnecessary computation on unattended elements and allows the model to pay attention to important parts of the sequence. Experiments on various datasets show that the proposed method achieves better performance than all baseline models with global or local attention while requiring less computation and offering better interpretability. It is also promising to extend the idea to more complex attention-based models, such as transformers and seq-to-seq models.
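The GA-Net idea in entry 50, attending only to a learned subset of positions, can be sketched with a soft auxiliary gate followed by hard masking. The paper trains discrete gates end-to-end with an auxiliary network; this simplified version merely thresholds the auxiliary network's output, and all sizes are assumptions.

```python
import torch
import torch.nn.functional as F

def gated_attention(query, states, gate_mlp, keep_threshold=0.5):
    """Sketch of GA-Net-style gated attention: an auxiliary network decides which
    states deserve attention; masked states get zero weight in the softmax."""
    gates = torch.sigmoid(gate_mlp(states)).squeeze(-1)      # (B, T) keep-probabilities
    keep = gates > keep_threshold                            # hard subset selection
    scores = (states @ query.unsqueeze(-1)).squeeze(-1)      # (B, T) dot-product scores
    scores = scores.masked_fill(~keep, float("-inf"))        # drop unattended states
    weights = F.softmax(scores, dim=-1).nan_to_num()         # all-masked rows -> zeros
    return (weights.unsqueeze(-1) * states).sum(dim=1)       # (B, d) context vector

B, T, d = 2, 8, 16
states, query = torch.randn(B, T, d), torch.randn(B, d)
gate_mlp = torch.nn.Linear(d, 1)                             # auxiliary gating network
print(gated_attention(query, states, gate_mlp).shape)        # torch.Size([2, 16])
```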