Follow this link to see other types of publications on the topic: Mechanism of attention.

Journal articles on the topic "Mechanism of attention"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Mechanism of attention".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1. Zang, Yubin, Zhenming Yu, Kun Xu, Minghua Chen, Sigang Yang, and Hongwei Chen. "Fiber communication receiver models based on the multi-head attention mechanism". Chinese Optics Letters 21, no. 3 (2023): 030602. http://dx.doi.org/10.3788/col202321.030602.
2. Yoo, Sungwook, Hanjun Goo, and Kyuseok Shim. "Improving Review-based Attention Mechanism". KIISE Transactions on Computing Practices 27, no. 10 (October 31, 2021): 486–91. http://dx.doi.org/10.5626/ktcp.2021.27.10.486.
3. Jia, Yuening. "Attention Mechanism in Machine Translation". Journal of Physics: Conference Series 1314 (October 2019): 012186. http://dx.doi.org/10.1088/1742-6596/1314/1/012186.
4. Sieb, R. A. "A brain mechanism for attention". Medical Hypotheses 33, no. 3 (November 1990): 145–53. http://dx.doi.org/10.1016/0306-9877(90)90164-a.
5. Park, Da-Sol, and Jeong-Won Cha. "Image Caption Generation using Object Attention Mechanism". Journal of KIISE 46, no. 4 (April 30, 2019): 369–75. http://dx.doi.org/10.5626/jok.2019.46.4.369.
6. Spironelli, Chiara, Mariaelena Tagliabue, and Carlo Umiltà. "Response Selection and Attention Orienting". Experimental Psychology 56, no. 4 (January 2009): 274–82. http://dx.doi.org/10.1027/1618-3169.56.4.274.
Abstract
Recently, there has been a redirection of research efforts toward the exploration of the role of hemispheric lateralization in determining Simon effect asymmetries. The present study aimed at implementing a connectionist model that simulates the cognitive mechanisms implied by such asymmetries, focusing on the underlying neural structure. A left-lateralized response-selection mechanism was implemented alone (Experiment 1) or along with a right-lateralized automatic attention-orienting mechanism (Experiment 2). It was found that both models yielded Simon effect asymmetries. However, whereas the first model showed a reversed pattern of asymmetry compared with real human data, the second model’s performance strongly resembled human Simon effect asymmetries, with a significantly greater right than left Simon effect. Thus, a left-side bias in the response-selection mechanism produced a left-side biased Simon effect, whereas a right-side bias in the attention system produced a right-side biased Simon effect. In conclusion, results showed that the bias of the attention system had a larger impact than the bias of the response-selection mechanism in producing Simon effect asymmetries.
7. Yin, Songlin, and Fei Tan. "YOLOv4-A: Research on Traffic Sign Detection Based on Hybrid Attention Mechanism". 電腦學刊 33, no. 6 (December 2022): 181–92. http://dx.doi.org/10.53106/199115992022123306015.
Abstract
Aiming at the problem of false detection and missed detection in the traffic sign detection task, an improved YOLOv4 detection algorithm is proposed. Based on the YOLOv4 algorithm, the Efficient Channel Attention (ECA) module and the Convolutional Block Attention Module (CBAM) are added to form the YOLOv4-A algorithm. At the same time, the global K-means clustering algorithm is used to regenerate smaller anchors, which makes the network converge faster and reduces the error rate. The YOLOv4-A algorithm re-calibrates the detection branch features in the two dimensions of channel and space, so that the network can focus on and enhance the effective features and suppress the interference features, which improves the detection ability of the algorithm. Experiments on the TT100K traffic sign dataset show that the proposed algorithm has a particularly significant improvement in the performance of small target detection. Compared with the YOLOv4 algorithm, the precision and mAP@0.5 of the proposed algorithm are increased by 5.38% and 5.75%.
8. Mao, Guojun, Guanyi Liao, Hengliang Zhu, and Bo Sun. "Multibranch Attention Mechanism Based on Channel and Spatial Attention Fusion". Mathematics 10, no. 21 (November 6, 2022): 4150. http://dx.doi.org/10.3390/math10214150.
Abstract
Recently, it has been demonstrated that the performance of an object detection network can be improved by embedding an attention module into it. In this work, we propose a lightweight and effective attention mechanism named multibranch attention (M3Att). For the input feature map, our M3Att first uses the grouped convolutional layer with a pyramid structure for feature extraction, and then calculates channel attention and spatial attention simultaneously and fuses them to obtain more complementary features. It is a “plug and play” module that can be easily added to the object detection network and significantly improves the performance of the object detection network with a small increase in parameters. We demonstrate the effectiveness of M3Att on various challenging object detection tasks, including PASCAL VOC2007, PASCAL VOC2012, KITTI, and Zhanjiang Underwater Robot Competition. The experimental results show that this method dramatically improves the object detection effect, especially for the PASCAL VOC2007, and the mapping index of the original network increased by 4.93% when embedded in the YOLOV4 (You Only Look Once v4) network.
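Entry 8 above describes a now-common pattern: compute channel attention and spatial attention on a feature map and fuse them. As a rough illustration of that general pattern only (not the authors' M3Att; the plain average pooling and sigmoid gating below are simplified assumptions), here is a minimal NumPy sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(x, w_c):
    """Toy channel + spatial attention fusion on a (C, H, W) feature map.

    w_c is a learnable (C, C) weight matrix for the channel gate;
    real modules learn it by backpropagation, here it is a stand-in.
    """
    # Channel attention: squeeze spatial dims, gate each channel.
    squeezed = x.mean(axis=(1, 2))            # (C,)
    channel_gate = sigmoid(w_c @ squeezed)    # (C,)
    x = x * channel_gate[:, None, None]
    # Spatial attention: pool across channels, gate each location.
    spatial_gate = sigmoid(x.mean(axis=0))    # (H, W)
    return x * spatial_gate[None, :, :]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))         # C=8, H=W=4
out = channel_spatial_attention(feat, rng.standard_normal((8, 8)))
print(out.shape)                              # (8, 4, 4)
```

Modules such as M3Att or CBAM replace the plain means with learned pooling and convolution stages and fuse the two branches more carefully, but the gate-then-rescale structure is the same.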
9. V, Ms Malge Shraddha. "Generating Image Descriptions using Attention Mechanism". International Journal for Research in Applied Science and Engineering Technology 9, no. 3 (March 31, 2021): 1047–56. http://dx.doi.org/10.22214/ijraset.2021.33397.
10. Yakura, Hiromu, Shinnosuke Shinozaki, Reon Nishimura, Yoshihiro Oyama, and Jun Sakuma. "Neural malware analysis with attention mechanism". Computers & Security 87 (November 2019): 101592. http://dx.doi.org/10.1016/j.cose.2019.101592.
11. Wang, Longjuan, Chunjie Cao, Binghui Zou, Jun Ye, and Jin Zhang. "License Plate Recognition via Attention Mechanism". Computers, Materials & Continua 75, no. 1 (2023): 1801–14. http://dx.doi.org/10.32604/cmc.2023.032785.
12. Tao, Xueying, Huaizong Shao, Qiang Li, Ye Pan, and Zhongqi Fu. "External Attention Mechanism-Based Modulation Classification". Journal of Physics: Conference Series 2425, no. 1 (February 1, 2023): 012051. http://dx.doi.org/10.1088/1742-6596/2425/1/012051.
Abstract
This paper considers the modulation classification of radio frequency (RF) signals. An external attention mechanism-based convolutional neural network (EACNN) is proposed. Thanks to the external attention layers, the EACNN network can capture the potential correlations of different modulation data, which helps reduce computational consumption and memory costs efficiently during training. Moreover, to account for the variation of the signals induced by channel fading, we further propose a customized batch normalization (BN) layer in EACNN to improve the classification accuracy with less training time. Numerical experiments on the RML2016.a dataset show that the proposed method outperforms the baseline method CNN2 by 7% in terms of classification accuracy.
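For orientation, external attention (as popularized by Guo et al., 2021) differs from self-attention in that the keys and values are small learnable memories shared across all samples, which is what cuts computation and memory to linear in the sequence length. A minimal NumPy sketch of the generic mechanism (illustrative only; EACNN's exact layers may differ):

```python
import numpy as np

def external_attention(x, m_k, m_v):
    """Generic external attention.

    x   : (n, d) input features (n tokens/frames, d channels)
    m_k : (s, d) learnable external key memory (random stand-in here)
    m_v : (s, d) learnable external value memory
    """
    attn = x @ m_k.T                                  # (n, s)
    # Double normalization: softmax over tokens, then l1 over memory slots.
    attn = np.exp(attn - attn.max(axis=0, keepdims=True))
    attn = attn / attn.sum(axis=0, keepdims=True)
    attn = attn / (attn.sum(axis=1, keepdims=True) + 1e-9)
    return attn @ m_v                                 # (n, d)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 32))                     # e.g. 16 signal frames
out = external_attention(x, rng.standard_normal((8, 32)),
                         rng.standard_normal((8, 32)))
print(out.shape)                                      # (16, 32)
```

Because the memories have a fixed small size s, the cost is O(n·s) rather than the O(n²) of self-attention.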
13. Yeom, Chanho, Jieun Lee, and Sanghyun Park. "OANet: Ortho-Attention Net Based on Attention Mechanism for Database Performance Prediction". Journal of KIISE 49, no. 11 (November 30, 2022): 1026–31. http://dx.doi.org/10.5626/jok.2022.49.11.1026.
14. Himabindu, Dakshayani D., and Praveen S. Kumar. "A Streamlined Attention Mechanism for Image Classification and Fine-Grained Visual Recognition". MENDEL 27, no. 2 (December 21, 2021): 59–67. http://dx.doi.org/10.13164/mendel.2021.2.059.
Abstract
In recent advancements, attention mechanisms in deep learning have played a vital role in delivering better results on computer vision tasks. Attention-based work spans image classification, fine-grained visual recognition, image captioning, video captioning, and object detection and recognition. Global and local attention are the two attention-based mechanisms that help interpret the attended region. Within this scheme there exist channel and spatial attention, where channel attention selects the most attentive channel among the produced block of channels and spatial attention selects which region in space needs to be focused on. We propose a streamlined attention block module that enhances feature-based learning with a small number of additional layers, i.e., a GAP layer followed by a linear layer, with an incorporation of global second-order pooling (GSoP) after every layer in the utilized encoder. This mechanism produced better range dependencies in the conducted experimentation. We experimented with our model on the CIFAR-10, CIFAR-100 and FGVC-Aircraft datasets for fine-grained visual recognition, and achieved state-of-the-art results for FGVC-Aircraft with an accuracy of 97%.
15. Yin, Wenpeng, and Hinrich Schütze. "Attentive Convolution: Equipping CNNs with RNN-style Attention Mechanisms". Transactions of the Association for Computational Linguistics 6 (December 2018): 687–702. http://dx.doi.org/10.1162/tacl_a_00249.
Abstract
In NLP, convolutional neural networks (CNNs) have benefited less than recurrent neural networks (RNNs) from attention mechanisms. We hypothesize that this is because the attention in CNNs has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution). Convolution is the differentiator of CNNs in that it can powerfully model the higher-level representation of a word by taking into account its local fixed-size context in the input text t_x. In this work, we propose an attentive convolution network, ATTCONV. It extends the context scope of the convolution operation, deriving higher-level features for a word not only from local context, but also from information extracted from nonlocal context by the attention mechanism commonly used in RNNs. This nonlocal context can come (i) from parts of the input text t_x that are distant or (ii) from extra (i.e., external) contexts t_y. Experiments on sentence modeling with zero-context (sentiment analysis), single-context (textual entailment) and multiple-context (claim verification) demonstrate the effectiveness of ATTCONV in sentence representation learning with the incorporation of context. In particular, attentive convolution outperforms attentive pooling and is a strong competitor to popular attentive RNNs.
16. Mathôt, Sebastiaan, and Jan Theeuwes. "Visual attention and stability". Philosophical Transactions of the Royal Society B: Biological Sciences 366, no. 1564 (February 27, 2011): 516–27. http://dx.doi.org/10.1098/rstb.2010.0187.
Abstract
In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world.
17. Zheng, Menghua, Jiayu Xu, Yinjie Shen, Chunwei Tian, Jian Li, Lunke Fei, Ming Zong, and Xiaoyang Liu. "Attention-based CNNs for Image Classification: A Survey". Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012068. http://dx.doi.org/10.1088/1742-6596/2171/1/012068.
Abstract
Deep learning techniques, in particular CNNs, can learn powerful context information and have been widely applied in image recognition. However, deep CNNs may rely on large width and large depth, which can increase computational costs. Attention mechanisms fused into CNNs can address this problem. In this paper, we summarize how attention mechanisms act in CNNs for image classification. First, the survey traces the development of CNNs for image classification. Then, we illustrate the basics of CNNs and attention mechanisms for image classification. Next, we present the main architectures of CNNs with attention, public and self-collected datasets, and experimental results in image classification. Finally, we point out potential research directions and challenges for attention-based image classification and summarize the whole paper.
18. Chou, Kenny F., and Kamal Sen. "AIM: A network model of attention in auditory cortex". PLOS Computational Biology 17, no. 8 (August 27, 2021): e1009356. http://dx.doi.org/10.1371/journal.pcbi.1009356.
Abstract
Attentional modulation of cortical networks is critical for the cognitive flexibility required to process complex scenes. Current theoretical frameworks for attention are based almost exclusively on studies in visual cortex, where attentional effects are typically modest and excitatory. In contrast, attentional effects in auditory cortex can be large and suppressive. A theoretical framework for explaining attentional effects in auditory cortex is lacking, preventing a broader understanding of cortical mechanisms underlying attention. Here, we present a cortical network model of attention in primary auditory cortex (A1). A key mechanism in our network is attentional inhibitory modulation (AIM) of cortical inhibitory neurons. In this mechanism, top-down inhibitory neurons disinhibit bottom-up cortical circuits, a prominent circuit motif observed in sensory cortex. Our results reveal that the same underlying mechanisms in the AIM network can explain diverse attentional effects on both spatial and frequency tuning in A1. We find that a dominant effect of disinhibition on cortical tuning is suppressive, consistent with experimental observations. Functionally, the AIM network may play a key role in solving the cocktail party problem. We demonstrate how attention can guide the AIM network to monitor an acoustic scene, select a specific target, or switch to a different target, providing flexible outputs for solving the cocktail party problem.
19. Xuezhi, Xiang, Syed Masroor Ali, and Ghulam Farid. "Optical Flow Estimation Using Channel Attention Mechanism". Journal of Flow Visualization and Image Processing 26, no. 4 (2019): 371–93. http://dx.doi.org/10.1615/jflowvisimageproc.2019031771.
20. Liu, Hankun, Daojing He, and Sammy Chan. "Fraudulent News Headline Detection with Attention Mechanism". Computational Intelligence and Neuroscience 2021 (March 15, 2021): 1–7. http://dx.doi.org/10.1155/2021/6679661.
Abstract
E-mail systems and online social media platforms are ideal places for news dissemination, but a serious problem is the spread of fraudulent news headlines. Previously, detecting fraudulent news headlines relied mainly on laborious manual review. With the total number of news headlines going as high as 1.48 million, manual review becomes practically infeasible. For news headline text data, the attention mechanism has powerful processing capability. In this paper, we propose models based on LSTM and an attention layer, which fit the context of news headlines efficiently and can detect fraudulent news headlines quickly and accurately. Based on the multi-head attention mechanism, which eschews recurrent units and reduces sequential computation, we build a Mini-Transformer deep learning model to further improve the classification performance.
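The Mini-Transformer mentioned above builds on standard multi-head scaled dot-product attention (Vaswani et al., 2017). As a reference point only (not the paper's model; the projection matrices are random stand-ins and the usual output projection is omitted), a minimal NumPy sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, n_heads):
    """Multi-head scaled dot-product self-attention over (n, d) tokens."""
    n, d = x.shape
    d_h = d // n_heads
    # Project, then split the channel dim into heads: (h, n, d_h).
    q = (x @ w_q).reshape(n, n_heads, d_h).transpose(1, 0, 2)
    k = (x @ w_k).reshape(n, n_heads, d_h).transpose(1, 0, 2)
    v = (x @ w_v).reshape(n, n_heads, d_h).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_h)   # (h, n, n)
    out = softmax(scores) @ v                          # (h, n, d_h)
    return out.transpose(1, 0, 2).reshape(n, d)        # concat heads

rng = np.random.default_rng(0)
tokens = rng.standard_normal((10, 64))   # e.g. a 10-token headline, d=64
w = [rng.standard_normal((64, 64)) for _ in range(3)]
print(multi_head_attention(tokens, *w, n_heads=8).shape)   # (10, 64)
```

All positions attend to each other in parallel, which is the "reducing sequential computation" property the abstract refers to.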
21. Yang, Kehua, Yaodong Wang, Wei Zhang, Jiqing Yao, and Yuquan Le. "Keyphrase Generation Based on Self-Attention Mechanism". Computers, Materials & Continua 61, no. 2 (2019): 569–81. http://dx.doi.org/10.32604/cmc.2019.05952.
22. Wang, Dongli, Shengliang Xiang, Yan Zhou, Jinzhen Mu, Haibin Zhou, and Richard Irampaye. "Multiple-Attention Mechanism Network for Semantic Segmentation". Sensors 22, no. 12 (June 13, 2022): 4477. http://dx.doi.org/10.3390/s22124477.
Abstract
Contextual information and the dependencies between dimensions are vital in image semantic segmentation. In this paper, we propose a multiple-attention mechanism network (MANet) for semantic segmentation in a very effective and efficient way. Concretely, the contributions are as follows: (1) a novel dual-attention mechanism for capturing feature dependencies in the spatial and channel dimensions, where the adjacent position attention captures the dependencies between pixels well; (2) a new cross-dimensional interactive attention feature fusion module, which strengthens the fusion of fine location structure information in low-level features and category semantic information in high-level features. We conduct extensive experiments on semantic segmentation benchmarks including the PASCAL VOC 2012 and Cityscapes datasets. Our MANet achieves mIoU scores of 75.5% and 72.8% on the PASCAL VOC 2012 and Cityscapes datasets, respectively. The effectiveness of the network is higher than that of previous popular semantic segmentation networks under the same conditions.
23. Ren, Feng-lei, Hai-bo Zhou, Lu Yang, and Xin He. "Lane detection based on dual attention mechanism". Chinese Optics 15 (2022): 1–9. http://dx.doi.org/10.37188/co.2022-0033.
24. An, XuDong, Lei Zhao, Han Wu, and QinJuan Zhang. "Channel estimation algorithm based on attention mechanism". Journal of Physics: Conference Series 2290, no. 1 (June 1, 2022): 012112. http://dx.doi.org/10.1088/1742-6596/2290/1/012112.
Abstract
As the key to wireless communication, channel estimation has become a hot research topic in recent years. In this paper, we propose a deep learning method for channel estimation based on a deconvolutional network and a dilated convolutional network, to address the problems that the performance of traditional channel estimation algorithms in orthogonal frequency division multiplexing (OFDM) systems can hardly meet the communication requirements of complex scenarios and is greatly affected by noise. The method constructs a lightweight deconvolutional network using the correlation of channels, and achieves channel interpolation and estimation step by step with a few layers of deconvolutional operations, which achieves channel estimation with low complexity. To improve the estimation performance, a dilated convolutional network is further constructed to suppress the channel noise and improve the accuracy of channel estimation. The simulation results show that the proposed deep learning method based on deconvolution and dilated convolution has lower estimation error and lower complexity than the traditional methods under different signal-to-noise ratios (SNR).
25. Wang, Chiyu, Hong Li, Xinrong Li, Feifei Hou, and Xun Hu. "Guided attention mechanism: Training network more efficiently". Journal of Intelligent & Fuzzy Systems 38, no. 2 (February 6, 2020): 2323–35. http://dx.doi.org/10.3233/jifs-191257.
26. 余, 晨. "Image Tampering Detection Based on Attention Mechanism". Computer Science and Application 12, no. 03 (2022): 729–38. http://dx.doi.org/10.12677/csa.2022.123074.
27. Yu, Mingfei, Yukio Miyasaka, and Masahiro Fujita. "Parallel Scheduling Attention Mechanism: Generalization and Optimization". IPSJ Transactions on System LSI Design Methodology 15 (2022): 2–15. http://dx.doi.org/10.2197/ipsjtsldm.15.2.
28. Ahmad Khan, Wasim, Hafiz Usman Akmal, Ahmad Ullah, Aqdas Malik, Sagheer Abbas, Abdullah Ahmad, and Abdullah Farooq. "Intelligent Virtual Security System using Attention Mechanism". ICST Transactions on Scalable Information Systems 5, no. 16 (April 13, 2018): 154473. http://dx.doi.org/10.4108/eai.13-4-2018.154473.
29. Daihong, Jiang, Hu Yuanzheng, Dai Lei, and Peng Jin. "Facial Expression Recognition Based on Attention Mechanism". Scientific Programming 2021 (March 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/6624251.
Abstract
At present, traditional facial expression recognition methods based on convolutional neural networks rely on local ideas for feature expression, which results in the model’s low efficiency in capturing dependence between long-range pixels and leads to poor performance in facial expression recognition. In order to solve the above problems, this paper combines a self-attention mechanism with a residual network and proposes a new facial expression recognition model based on the global operation idea. This paper first introduces the self-attention mechanism on the basis of the residual network and finds the relative importance of a location by calculating the weighted average of all location pixels; it then introduces channel attention to learn different features in the channel domain, focusing on the interactive features in different channels so that robustness is improved; finally, it merges the self-attention mechanism and the channel attention mechanism to increase the model’s ability to extract globally important features. The accuracies on the CK+ and FER2013 datasets are 97.89% and 74.15%, respectively, which fully confirms the effectiveness and superiority of the model in extracting global features.
30. Gul, M. Shahzeb Khan, M. Umair Mukati, Michel Batz, Soren Forchhammer, and Joachim Keinert. "Attention Mechanism-Based Light-Field View Synthesis". IEEE Access 10 (2022): 7895–913. http://dx.doi.org/10.1109/access.2022.3142949.
31. Li, Weiqian, and Bugao Xu. "Aspect-Based Fashion Recommendation With Attention Mechanism". IEEE Access 8 (2020): 141814–23. http://dx.doi.org/10.1109/access.2020.3013639.
32. Zhu, Yaling, Jungang Yang, Xinpu Deng, Chao Xiao, and Wei An. "Infrared Pedestrian Detection Based on Attention Mechanism". Journal of Physics: Conference Series 1634 (September 2020): 012032. http://dx.doi.org/10.1088/1742-6596/1634/1/012032.
33. Wang, Yuehuan. "Small-target predetection with an attention mechanism". Optical Engineering 41, no. 4 (April 1, 2002): 872. http://dx.doi.org/10.1117/1.1459054.
34. Bandera, Juan Pedro, R. Marfil, Antonio Jesús Palomino, Ricardo Vázquez-Martín, and Antonio Bandera. "Visual Attention Mechanism for a Social Robot". Applied Bionics and Biomechanics 9, no. 4 (2012): 409–25. http://dx.doi.org/10.1155/2012/320850.
Abstract
This paper describes a visual perception system for a social robot. The central part of this system is an artificial attention mechanism that discriminates the most relevant information from all the visual information perceived by the robot. It is composed of three stages. At the preattentive stage, the concept of saliency is implemented based on ‘proto-objects’ [37]. From these objects, different saliency maps are generated. Then, the semiattentive stage identifies and tracks significant items according to the tasks to accomplish. This tracking process allows the implementation of ‘inhibition of return’. Finally, the attentive stage fixes the field of attention on the most relevant object depending on the behaviours to carry out. Three behaviours have been implemented and tested, which allow the robot to detect visual landmarks in an initially unknown environment, and to recognize and capture the upper-body motion of people interested in interacting with it.
35. Eisenberg, Nancy. "An explanatory mechanism that merits more attention". Behavioral and Brain Sciences 14, no. 4 (December 1991): 749. http://dx.doi.org/10.1017/s0140525x00072319.
36. Sang, Hai-Feng, Zi-Zhen Chen, and Da-Kuo He. "Human Motion prediction based on attention mechanism". Multimedia Tools and Applications 79, no. 9-10 (December 6, 2019): 5529–44. http://dx.doi.org/10.1007/s11042-019-08269-7.
37. Neumann, Odmar, and Ingrid Scharlau. "Visual attention and the mechanism of metacontrast". Psychological Research 71, no. 6 (June 8, 2006): 626–33. http://dx.doi.org/10.1007/s00426-006-0061-7.
38. Liu, Maofu, Lingjun Li, Huijun Hu, Weili Guan, and Jing Tian. "Image caption generation with dual attention mechanism". Information Processing & Management 57, no. 2 (March 2020): 102178. http://dx.doi.org/10.1016/j.ipm.2019.102178.
39. Wang, Yu, and Ming Zhu. "Saliency Prediction Based on Lightweight Attention Mechanism". Journal of Physics: Conference Series 1486 (April 2020): 072066. http://dx.doi.org/10.1088/1742-6596/1486/7/072066.
40. Jiao, Shanshan, Jiabao Wang, Guyu Hu, Zhisong Pan, Lin Du, and Jin Zhang. "Joint Attention Mechanism for Person Re-Identification". IEEE Access 7 (2019): 90497–506. http://dx.doi.org/10.1109/access.2019.2927170.
41. Yang, Qimeng, Long Yu, Shengwei Tian, and Jinmiao Song. "Attention Mechanism for Uyghur Personal Pronouns Resolution". ACM Transactions on Asian and Low-Resource Language Information Processing 19, no. 6 (November 25, 2020): 1–13. http://dx.doi.org/10.1145/3412323.
42. Gun, Li. "Advances and Application of Visual Attention Mechanism". International Journal of Data Science and Analysis 3, no. 4 (2017): 24. http://dx.doi.org/10.11648/j.ijdsa.20170304.11.
43. Krimpas, Panagiotis, and Christina Valavani. "Attention mechanism and skip-gram embedded phrases". Comparative Legilinguistics 52 (January 9, 2023): 318–50. http://dx.doi.org/10.14746/cl.52.2022.14.
Abstract
This article examines common translation errors that occur in the translation of legal texts. In particular, it focuses on how German texts containing legal terminology are rendered into Modern Greek by the Google translation machine. Our case study is the Google-assisted translation of the original (German) version of the Constitution of the Federal Republic of Germany into Modern Greek. A training method is proposed for phrase extraction based on occurrence frequency, which goes through the Skip-gram algorithm and is then integrated into the self-attention mechanism proposed by Vaswani et al. (2017), in order to minimise human effort and contribute to the development of a robust machine translation system for multi-word legal terms and special phrases. This neural machine translation approach aims at developing vectorised phrases from large corpora and processing them for translation. The research direction is to increase the in-domain training data set and enrich the vector dimension with more information for legal concepts (domain-specific features).
44. Ashtari, Amirsaman, Chang Wook Seo, Cholmin Kang, Sihun Cha, and Junyong Noh. "Reference Based Sketch Extraction via Attention Mechanism". ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–16. http://dx.doi.org/10.1145/3550454.3555504.
Abstract
We propose a model that extracts a sketch from a colorized image in such a way that the extracted sketch has a line style similar to a given reference sketch while preserving the visual content identically to the colorized image. Authentic sketches drawn by artists have various sketch styles to add visual interest and contribute feeling to the sketch. However, existing sketch-extraction methods generate sketches with only one style. Moreover, existing style transfer models fail to transfer sketch styles because they are mostly designed to transfer textures of a source style image instead of transferring the sparse line styles from a reference sketch. Lacking the necessary volumes of data for standard training of translation systems, at the core of our GAN-based solution is a self-reference sketch style generator that produces various reference sketches with a similar style but different spatial layouts. We use independent attention modules to detect the edges of a colorized image and reference sketch as well as the visual correspondences between them. We apply several loss terms to imitate the style and enforce sparsity in the extracted sketches. Our sketch-extraction method results in a close imitation of a reference sketch style drawn by an artist and outperforms all baseline methods. Using our method, we produce a synthetic dataset representing various sketch styles and improve the performance of auto-colorization models, in high demand in comics. The validity of our approach is confirmed via qualitative and quantitative evaluations.
45. Ren, Junhua, Guowu Zhao, Yadong Ma, De Zhao, Tao Liu, and Jun Yan. "Automatic Pavement Crack Detection Fusing Attention Mechanism". Electronics 11, no. 21 (November 6, 2022): 3622. http://dx.doi.org/10.3390/electronics11213622.
Abstract
Pavement cracks can result in the degradation of pavement performance. Without timely inspection and repair, cracks develop further and the safety and service life of the pavement decrease. To curb the development of pavement cracks, detecting these cracks accurately plays an important role. In this paper, an automatic pavement crack detection method is proposed. To achieve real-time inspection, YOLOv5 was selected as the base model. Due to the small size of pavement cracks, the accuracy of most deep learning-based pavement crack detection methods cannot reach a high degree. To further improve the accuracy of such methods, attention modules were employed. Based on self-built datasets collected in Linyi city, the performance of various crack detection models was evaluated. The results showed that adding attention modules can effectively enhance the ability of crack detection. The precision of YOLOv5-CoordAtt reaches 95.27%, higher than that of other conventional and deep learning methods. According to the resulting images, the proposed method can detect accurately under various situations.
46. Zhao, Bowen, Huanlai Xing, Xinhan Wang, Fuhong Song, and Zhiwen Xiao. "Rethinking attention mechanism in time series classification". Information Sciences 627 (May 2023): 97–114. http://dx.doi.org/10.1016/j.ins.2023.01.093.
47. Zhang, Qianqian, Hongyang Wei, Jiaying Chen, Xusheng Du, and Jiong Yu. "Video Anomaly Detection Based on Attention Mechanism". Symmetry 15, no. 2 (February 16, 2023): 528. http://dx.doi.org/10.3390/sym15020528.
Abstract
Camera surveillance is widely used in residential areas, highways, schools and other public places. The monitoring and scanning of sudden abnormal events depend on humans. Human anomaly monitoring not only consumes a lot of manpower and time but also makes large errors in anomaly detection. Video anomaly detection based on the AE (auto-encoder) is currently the dominant research approach. The model has a highly symmetrical network structure in the encoding and decoding stages. The model is trained by learning standard video sequences, and anomalous events are later determined in terms of reconstruction error and prediction error. However, in the case of limited computing power, a complex model will greatly reduce detection efficiency, and unnecessary background information will seriously affect the detection accuracy of the model. This paper uses an AE loaded with dynamic prototype units as the basic model. We introduce an attention mechanism to improve the feature representation ability of the model. Depthwise separable convolution operations can effectively reduce the number of model parameters and the model complexity. Finally, we conducted experiments on three publicly available datasets of real scenarios (UCSD Ped1, UCSD Ped2 and CUHK Avenue). The experimental results show that compared with the baseline model, the accuracy of our model improved by 1.9%, 1.4% and 6.6%, respectively, across the three datasets. Compared with many popular models, the validity of our model in anomaly detection is verified.
48. Yu, Yuanlong, George K. I. Mann, and Raymond G. Gosine. "A Single-Object Tracking Method for Robots Using Object-Based Visual Attention". International Journal of Humanoid Robotics 9, no. 4 (December 2012): 1250030. http://dx.doi.org/10.1142/s0219843612500302.
Abstract
It is a quite challenging problem for robots to track a target in complex environments due to appearance changes of the target and background, large variations of motion, partial and full occlusion, motion of the camera, and so on. However, humans are capable of coping with these difficulties by using their cognitive capabilities, mainly the visual attention and learning mechanisms. This paper therefore presents a single-object tracking method for robots based on the object-based attention mechanism. This tracking method consists of four modules: pre-attentive segmentation, top-down attentional selection, post-attentive processing and online learning of the target model. The pre-attentive segmentation module first divides the scene into uniform proto-objects. Then the top-down attention module selects one proto-object over the predicted region by using a discriminative feature of the target. The post-attentive processing module then validates the attended proto-object. If it is confirmed to be the target, it is used to obtain the complete target region. Otherwise, the recovery mechanism is automatically triggered to globally search for the target. Given the complete target region, the online learning algorithm autonomously updates the target model, which consists of appearance and saliency components. The saliency component is used to automatically select a discriminative feature for top-down attention, while the appearance component is used for bias estimation in the top-down attention module and validation in the post-attentive processing module. Experiments have shown that this proposed method outperforms other algorithms without attention for tracking a single target in cluttered and dynamically changing environments.
49. Yuan, Chun-Miao, Xue-Mei Sun, and Hu Zhao. "Speech Separation Using Convolutional Neural Network and Attention Mechanism". Discrete Dynamics in Nature and Society 2020 (July 25, 2020): 1–10. http://dx.doi.org/10.1155/2020/2196893.
Abstract
Speech is the most important means of human communication, and it is crucial to separate the target voice from mixed sound signals. This paper proposes a speech separation model based on convolutional neural networks and an attention mechanism. The magnitude spectrum of the mixed speech signals, used as the input, is high-dimensional. By analyzing the characteristics of the convolutional neural network and the attention mechanism, it can be found that the convolutional neural network can effectively extract low-dimensional features and mine the spatiotemporal structure information in the speech signals, and the attention mechanism can reduce the loss of sequence information. The accuracy of speech separation can be improved effectively by combining the two mechanisms. Compared to the typical speech separation model DRNN-2 + discrim, this method achieves a 0.27 dB GNSDR gain and a 0.51 dB GSIR gain, which illustrates that the speech separation model proposed in this paper achieves an ideal separation effect.
50. Xue, Lanqing, Xiaopeng Li, and Nevin L. Zhang. "Not All Attention Is Needed: Gated Attention Network for Sequence Data". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6550–57. http://dx.doi.org/10.1609/aaai.v34i04.6129.
Abstract
Although deep neural networks generally have fixed network structures, the concept of dynamic mechanism has drawn more and more attention in recent years. Attention mechanisms compute input-dependent dynamic attention weights for aggregating a sequence of hidden states. Dynamic network configuration in convolutional neural networks (CNNs) selectively activates only part of the network at a time for different inputs. In this paper, we combine the two dynamic mechanisms for text classification tasks. Traditional attention mechanisms attend to the whole sequence of hidden states for an input sentence, while in most cases not all attention is needed especially for long sequences. We propose a novel method called Gated Attention Network (GA-Net) to dynamically select a subset of elements to attend to using an auxiliary network, and compute attention weights to aggregate the selected elements. It avoids a significant amount of unnecessary computation on unattended elements, and allows the model to pay attention to important parts of the sequence. Experiments in various datasets show that the proposed method achieves better performance compared with all baseline models with global or local attention while requiring less computation and achieving better interpretability. It is also promising to extend the idea to more complex attention-based models, such as transformers and seq-to-seq models.