
Journal articles on the topic 'SELF-ATTENTION MECHANISM'

Consult the top 50 journal articles for your research on the topic 'SELF-ATTENTION MECHANISM.'

The full text of each publication can be downloaded as a PDF, and its abstract is reproduced below whenever it is available in the metadata.

1

Yang, Kehua, Yaodong Wang, Wei Zhang, Jiqing Yao, and Yuquan Le. "Keyphrase Generation Based on Self-Attention Mechanism." Computers, Materials & Continua 61, no. 2 (2019): 569–81. http://dx.doi.org/10.32604/cmc.2019.05952.

2

Liu, Siqi, Jiangshu Wei, Gang Liu, and Bei Zhou. "Image classification model based on large kernel attention mechanism and relative position self-attention mechanism." PeerJ Computer Science 9 (April 21, 2023): e1344. http://dx.doi.org/10.7717/peerj-cs.1344.

Abstract:
The Transformer has achieved great success in many computer vision tasks. With the in-depth exploration of it, researchers have found that Transformers can better obtain long-range features than convolutional neural networks (CNN). However, there will be a deterioration of local feature details when the Transformer extracts local features. Although CNN is adept at capturing the local feature details, it cannot easily obtain the global representation of features. In order to solve the above problems effectively, this paper proposes a hybrid model consisting of CNN and Transformer inspired by Visual Attention Net (VAN) and CoAtNet. This model optimizes its shortcomings in the difficulty of capturing the global representation of features by introducing Large Kernel Attention (LKA) in CNN while using the Transformer blocks with relative position self-attention variant to alleviate the problem of detail deterioration in local features of the Transformer. Our model effectively combines the advantages of the above two structures to obtain the details of local features more accurately and capture the relationship between features far apart more efficiently on a large receptive field. Our experiments show that in the image classification task without additional training data, the proposed model in this paper can achieve excellent results on the cifar10 dataset, the cifar100 dataset, and the birds400 dataset (a public dataset on the Kaggle platform) with fewer model parameters. Among them, SE_LKACAT achieved a Top-1 accuracy of 98.01% on the cifar10 dataset with only 7.5M parameters.
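
The Large Kernel Attention (LKA) block referenced in this abstract follows the VAN design it cites: a large convolution is decomposed into a depthwise convolution, a dilated depthwise convolution, and a pointwise convolution, and the result is multiplied with the input as an attention map. The sketch below illustrates that decomposition only; the kernel sizes follow the published VAN description, while the class name, channel count, and usage are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """VAN-style LKA: approximate a large-kernel convolution with three cheap
    convolutions, then use the result as a per-pixel attention map."""
    def __init__(self, channels: int):
        super().__init__()
        # 5x5 depthwise convolution captures local context.
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        # 7x7 depthwise convolution with dilation 3 enlarges the receptive field.
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    dilation=3, groups=channels)
        # 1x1 convolution mixes information across channels.
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn  # reweight the input with the learned attention map

if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)
    print(LargeKernelAttention(64)(feats).shape)  # torch.Size([1, 64, 32, 32])
```
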
3

Zhu, Hu, Ze Wang, Yu Shi, Yingying Hua, Guoxia Xu, and Lizhen Deng. "Multimodal Fusion Method Based on Self-Attention Mechanism." Wireless Communications and Mobile Computing 2020 (September 23, 2020): 1–8. http://dx.doi.org/10.1155/2020/8843186.

Abstract:
Multimodal fusion is one of the popular research directions of multimodal research, and it is also an emerging research field of artificial intelligence. Multimodal fusion is aimed at taking advantage of the complementarity of heterogeneous data and providing reliable classification for the model. Multimodal data fusion is to transform data from multiple single-mode representations to a compact multimodal representation. In previous multimodal data fusion studies, most of the research in this field used multimodal representations of tensors. As the input is converted into a tensor, the dimensions and computational complexity increase exponentially. In this paper, we propose a low-rank tensor multimodal fusion method with an attention mechanism, which improves efficiency and reduces computational complexity. We evaluate our model through three multimodal fusion tasks, which are based on a public data set: CMU-MOSI, IEMOCAP, and POM. Our model achieves a good performance while flexibly capturing the global and local connections. Compared with other multimodal fusions represented by tensors, experiments show that our model can achieve better results steadily under a series of attention mechanisms.
4

Cao, Fude, Chunguang Zheng, Limin Huang, Aihua Wang, Jiong Zhang, Feng Zhou, Haoxue Ju, Haitao Guo, and Yuxia Du. "Research of Self-Attention in Image Segmentation." Journal of Information Technology Research 15, no. 1 (January 2022): 1–12. http://dx.doi.org/10.4018/jitr.298619.

Abstract:
Although the traditional convolutional neural network has been applied to image segmentation successfully, it has a notable limitation: long-range context information in the image is not well captured. Following the success of self-attention mechanisms in natural language processing (NLP), researchers have tried to introduce the attention mechanism into computer vision, and self-attention does indeed address this long-range dependency problem. This paper summarizes the application of self-attention to image segmentation over the past two years and considers whether the self-attention module can replace the convolution operation in this field in the future.
5

Wu, Hongqiu, Ruixue Ding, Hai Zhao, Pengjun Xie, Fei Huang, and Min Zhang. "Adversarial Self-Attention for Language Understanding." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13727–35. http://dx.doi.org/10.1609/aaai.v37i11.26608.

Abstract:
Deep neural models (e.g., the Transformer) naturally learn spurious features, which create a "shortcut" between the labels and inputs, thus impairing generalization and robustness. This paper advances the self-attention mechanism to a robust variant for Transformer-based pre-trained language models (e.g., BERT). We propose the Adversarial Self-Attention mechanism (ASA), which adversarially biases the attentions to effectively suppress the model's reliance on individual features (e.g., specific keywords) and encourage its exploration of broader semantics. We conduct a comprehensive evaluation across a wide range of tasks for both the pre-training and fine-tuning stages. For pre-training, ASA yields a remarkable performance gain compared to naive training for longer steps. For fine-tuning, ASA-empowered models outperform naive models by a large margin in terms of both generalization and robustness.
6

Xie, Fei, Dalong Zhang, and Chengming Liu. "Global–Local Self-Attention Based Transformer for Speaker Verification." Applied Sciences 12, no. 19 (October 10, 2022): 10154. http://dx.doi.org/10.3390/app121910154.

Abstract:
Transformer models are now widely used for speech processing tasks due to their powerful sequence modeling capabilities. Previous work determined an efficient way to model speaker embeddings using the Transformer model by combining transformers with convolutional networks. However, traditional global self-attention mechanisms lack the ability to capture local information. To alleviate these problems, we proposed a novel global–local self-attention mechanism. Instead of using local or global multi-head attention alone, this method performs local and global attention in parallel in two parallel groups to enhance local modeling and reduce computational cost. To better handle local location information, we introduced locally enhanced location encoding in the speaker verification task. The experimental results of the VoxCeleb1 test set and the VoxCeleb2 dev set demonstrated the improved effect of our proposed global–local self-attention mechanism. Compared with the Transformer-based Robust Embedding Extractor Baseline System, the proposed speaker Transformer network exhibited better performance in the speaker verification task.
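
The abstract describes performing local and global attention in two parallel groups of heads. A minimal sketch of that idea is shown below, assuming half of the heads attend globally while the other half are restricted to a fixed window; the window size, head split, and module name are illustrative assumptions, not the paper's exact configuration (which also adds locally enhanced positional encoding).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalLocalSelfAttention(nn.Module):
    """Half of the heads run ordinary global self-attention; the other half run
    windowed local self-attention; the two groups are computed in parallel."""
    def __init__(self, dim: int, num_heads: int = 8, window: int = 16):
        super().__init__()
        assert dim % num_heads == 0 and num_heads % 2 == 0
        self.h, self.d, self.window = num_heads, dim // num_heads, window
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (batch, time, dim)
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, time, head_dim)
        q, k, v = (z.view(b, t, self.h, self.d).transpose(1, 2) for z in (q, k, v))
        half = self.h // 2

        # Global group: every frame attends to every frame.
        glob = F.scaled_dot_product_attention(q[:, :half], k[:, :half], v[:, :half])

        # Local group: mask out pairs farther apart than the window size.
        idx = torch.arange(t, device=x.device)
        local_mask = (idx[None, :] - idx[:, None]).abs() < self.window
        loc = F.scaled_dot_product_attention(q[:, half:], k[:, half:], v[:, half:],
                                             attn_mask=local_mask)

        merged = torch.cat([glob, loc], dim=1).transpose(1, 2).reshape(b, t, -1)
        return self.out(merged)

if __name__ == "__main__":
    y = GlobalLocalSelfAttention(dim=256)(torch.randn(2, 100, 256))
    print(y.shape)  # torch.Size([2, 100, 256])
```
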
7

Wang, Duofeng, Haifeng Hu, and Dihu Chen. "Transformer with sparse self‐attention mechanism for image captioning." Electronics Letters 56, no. 15 (July 2020): 764–66. http://dx.doi.org/10.1049/el.2020.0635.

8

Li, Yujie, and Jintong Cai. "Point cloud classification network based on self-attention mechanism." Computers and Electrical Engineering 104 (December 2022): 108451. http://dx.doi.org/10.1016/j.compeleceng.2022.108451.

9

Brotchie, James, Wei Shao, Wenchao Li, and Allison Kealy. "Leveraging Self-Attention Mechanism for Attitude Estimation in Smartphones." Sensors 22, no. 22 (November 21, 2022): 9011. http://dx.doi.org/10.3390/s22229011.

Abstract:
Inertial attitude estimation is a crucial component of many modern systems and applications. Attitude estimation from commercial-grade inertial sensors has been the subject of an abundance of research in recent years due to the proliferation of Inertial Measurement Units (IMUs) in mobile devices, such as the smartphone. Traditional methodologies involve probabilistic, iterative-state estimation; however, these approaches do not generalise well over changing motion dynamics and environmental conditions, as they require context-specific parameter tuning. In this work, we explore novel methods for attitude estimation from low-cost inertial sensors using a self-attention-based neural network, the Attformer. This paper proposes to part ways from the traditional cycle of continuous integration algorithms and formulate it as an optimisation problem. This approach separates itself by leveraging attention operations to learn the complex patterns and dynamics associated with inertial data, allowing for the linear complexity in the dimension of the feature vector to account for these patterns. Additionally, we look at combining traditional state-of-the-art approaches with our self-attention method. These models were evaluated on entirely unseen sequences, over a range of different activities, users and devices, and compared with a recent alternate deep learning approach, the unscented Kalman filter and the iOS CoreMotion API. The built-in iOS API had a mean angular distance from the true attitude of 117.31°, the GRU 21.90°, the UKF 16.38°, the Attformer 16.28° and, finally, the UKF–Attformer had a mean angular distance of 10.86°. We show that this plug-and-play solution outperforms previous approaches and generalises well across different users, devices and activities.
10

Fan, Zhongkui, and Ye-Peng Guan. "Pedestrian attribute recognition based on dual self-attention mechanism." Computer Science and Information Systems, no. 00 (2023): 16. http://dx.doi.org/10.2298/csis220815016f.

Abstract:
Recognizing pedestrian attributes has recently obtained increasing attention due to its great potential in person re-identification, recommendation system, and other applications. Existing methods have achieved good results, but these methods do not fully utilize region information and the correlation between attributes. This paper aims at proposing a robust pedestrian attribute recognition framework. Specifically, we first propose an end-to-end framework for attribute recognition. Secondly, spatial and semantic self-attention mechanism is used for key points localization and bounding boxes generation. Finally, a hierarchical recognition strategy is proposed, the whole region is used for the global attribute recognition, and the relevant regions are used for the local attribute recognition. Experimental results on two pedestrian attribute datasets PETA and RAP show that the mean recognition accuracy reaches 84.63% and 82.70%. The heatmap analysis shows that our method can effectively improve the spatial and the semantic correlation between attributes. Compared with existing methods, it can achieve better recognition effect.
11

Luo, Youtao, and Xiaoming Gao. "Lightweight Human Pose Estimation Based on Self-Attention Mechanism." Advances in Engineering Technology Research 4, no. 1 (March 21, 2023): 253. http://dx.doi.org/10.56028/aetr.4.1.253.2023.

Abstract:
To tackle the issues of numerous parameters, high computational complexity, and extended detection time prevalent in current human pose estimation network models, we have incorporated an hourglass structure to create a lightweight single-path network model, which has fewer parameters and a shorter computation time. To ensure model accuracy, we have implemented a window self-attention mechanism with a reduced parameter count. Additionally, we have redesigned this self-attention module to effectively extract local and global information, thereby enriching the feature information learned by the model. This module merges with the inverted residual network architecture, creating a separate module of WGNet. Finally, WGNet can be flexibly embedded into different stages of the model. Training and validation on COCO and MPII datasets demonstrate that this model reduces the number of parameters by 25%, computational complexity by 41%, and inference time by nearly two times, compared to Hrformer, which also utilizes the windowed self-attention mechanism, at the cost of only 3.5% accuracy.
12

Rendón-Segador, Fernando J., Juan A. Álvarez-García, and Angel Jesús Varela-Vaca. "Paying attention to cyber-attacks: A multi-layer perceptron with self-attention mechanism." Computers & Security 132 (September 2023): 103318. http://dx.doi.org/10.1016/j.cose.2023.103318.

13

Dai, Biyun, Jinlong Li, and Ruoyi Xu. "Multiple Positional Self-Attention Network for Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7610–17. http://dx.doi.org/10.1609/aaai.v34i05.6261.

Abstract:
Self-attention mechanisms have recently attracted considerable attention in Natural Language Processing (NLP) tasks. Relative positional information is important to self-attention mechanisms. We propose a Faraway Mask, which focuses on the (2m + 1)-gram words, and a Scaled-Distance Mask, which applies a logarithmic distance penalty, to avoid and weaken the self-attention of distant words, respectively. To exploit the different masks, we present a Positional Self-Attention Layer for generating different masked self-attentions and a following Position-Fusion Layer in which fused positional information multiplies the masked self-attentions to generate sentence embeddings. To evaluate our sentence-embedding approach, the Multiple Positional Self-Attention Network (MPSAN), we perform comparison experiments on sentiment analysis, semantic relatedness, and sentence classification tasks. The results show that our MPSAN outperforms state-of-the-art methods on five datasets, and the test accuracy is improved by 0.81% and 0.6% on the SST and CR datasets, respectively. In addition, we reduce the number of training parameters and improve the time efficiency of MPSAN by lowering the dimension of the self-attention and simplifying the fusion mechanism.
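
Both masks described in this abstract act on the attention logits before the softmax: the Faraway Mask removes attention outside a (2m + 1)-word window, while the Scaled-Distance Mask only weakens distant words with a logarithmic penalty. The toy sketch below illustrates that distinction; the exact penalty scaling and the Position-Fusion Layer of MPSAN are not reproduced, and the choice of `log1p` is an assumption for illustration.

```python
import torch

def faraway_mask(seq_len: int, m: int) -> torch.Tensor:
    """Boolean mask that keeps attention within a (2m + 1)-word window."""
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() <= m

def scaled_distance_penalty(seq_len: int) -> torch.Tensor:
    """Additive penalty growing logarithmically with word distance, weakening
    (rather than removing) attention to distant words."""
    idx = torch.arange(seq_len)
    dist = (idx[None, :] - idx[:, None]).abs()
    return -torch.log1p(dist.float())

# Toy usage: bias raw query-by-key attention scores before the softmax.
scores = torch.randn(6, 6)
masked = scores.masked_fill(~faraway_mask(6, m=2), float("-inf"))
penalized = scores + scaled_distance_penalty(6)
print(torch.softmax(masked, dim=-1).shape, torch.softmax(penalized, dim=-1).shape)
```
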
14

Lin, Zhihui, Maomao Li, Zhuobin Zheng, Yangyang Cheng, and Chun Yuan. "Self-Attention ConvLSTM for Spatiotemporal Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11531–38. http://dx.doi.org/10.1609/aaai.v34i07.6819.

Abstract:
Spatiotemporal prediction is challenging due to the complex dynamic motion and appearance changes. Existing work concentrates on embedding additional cells into the standard ConvLSTM to memorize spatial appearances during the prediction. These models always rely on the convolution layers to capture the spatial dependence, which are local and inefficient. However, long-range spatial dependencies are significant for spatial applications. To extract spatial features with both global and local dependencies, we introduce the self-attention mechanism into ConvLSTM. Specifically, a novel self-attention memory (SAM) is proposed to memorize features with long-range dependencies in terms of spatial and temporal domains. Based on the self-attention, SAM can produce features by aggregating features across all positions of both the input itself and memory features with pair-wise similarity scores. Moreover, the additional memory is updated by a gating mechanism on aggregated features and an established highway with the memory of the previous time step. Therefore, through SAM, we can extract features with long-range spatiotemporal dependencies. Furthermore, we embed the SAM into a standard ConvLSTM to construct a self-attention ConvLSTM (SA-ConvLSTM) for the spatiotemporal prediction. In experiments, we apply the SA-ConvLSTM to perform frame prediction on the MovingMNIST and KTH datasets and traffic flow prediction on the TexiBJ dataset. Our SA-ConvLSTM achieves state-of-the-art results on both datasets with fewer parameters and higher time efficiency than previous state-of-the-art method.
15

Nakata, Haruki, Kanji Tanaka, and Koji Takeda. "Exploring Self-Attention for Visual Intersection Classification." Journal of Advanced Computational Intelligence and Intelligent Informatics 27, no. 3 (May 20, 2023): 386–93. http://dx.doi.org/10.20965/jaciii.2023.p0386.

Abstract:
Self-attention has recently emerged as a technique for capturing non-local contexts in robot vision. This study introduced a self-attention mechanism into an intersection recognition system to capture non-local contexts behind the scenes. This mechanism is effective in intersection classification because most parts of the local pattern (e.g., road edges, buildings, and sky) are similar; thus, the use of a non-local context (e.g., the angle between two diagonal corners around an intersection) would be effective. This study makes three major contributions to existing literature. First, we proposed a self-attention-based approach for intersection classification. Second, we integrated the self-attention-based classifier into a unified intersection classification framework to improve the overall recognition performance. Finally, experiments using the public KITTI dataset showed that the proposed self-attention-based system outperforms conventional recognition based on local patterns and recognition based on convolution operations.
16

Bae, Ara, and Wooil Kim. "Speaker Verification Employing Combinations of Self-Attention Mechanisms." Electronics 9, no. 12 (December 21, 2020): 2201. http://dx.doi.org/10.3390/electronics9122201.

Abstract:
One of the most recent speaker recognition methods that demonstrates outstanding performance in noisy environments involves extracting the speaker embedding using attention mechanism instead of average or statistics pooling. In the attention method, the speaker recognition performance is improved by employing multiple heads rather than a single head. In this paper, we propose advanced methods to extract a new embedding by compensating for the disadvantages of the single-head and multi-head attention methods. The combination method comprising single-head and split-based multi-head attentions shows a 5.39% Equal Error Rate (EER). When the single-head and projection-based multi-head attention methods are combined, the speaker recognition performance improves by 4.45%, which is the best performance in this work. Our experimental results demonstrate that the attention mechanism reflects the speaker’s properties more effectively than average or statistics pooling, and the speaker verification system could be further improved by employing combinations of different attention techniques.
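
The single-head attention pooling this abstract builds on replaces average or statistics pooling with a learned weight per frame. A minimal sketch of attentive statistics pooling follows; the hidden size and the weighted-standard-deviation term are common choices in the speaker-verification literature and are assumptions here, as are all names — the paper's split-based and projection-based multi-head variants are not shown.

```python
import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    """Pools frame-level features into one utterance-level speaker embedding by
    learning a weight per frame, instead of plain averaging."""
    def __init__(self, feat_dim: int, hidden: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, frames):                        # (batch, time, feat_dim)
        w = torch.softmax(self.score(frames), dim=1)  # one weight per frame
        mean = (w * frames).sum(dim=1)                # attention-weighted mean
        var = (w * (frames - mean.unsqueeze(1)) ** 2).sum(dim=1)
        std = var.clamp_min(1e-8).sqrt()              # attention-weighted std
        return torch.cat([mean, std], dim=-1)         # attentive statistics

if __name__ == "__main__":
    emb = AttentivePooling(256)(torch.randn(4, 200, 256))
    print(emb.shape)  # torch.Size([4, 512])
```
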
17

Wang, Yu, Liang Hu, Yang Wu, and Wanfu Gao. "Graph Multihead Attention Pooling with Self-Supervised Learning." Entropy 24, no. 12 (November 29, 2022): 1745. http://dx.doi.org/10.3390/e24121745.

Abstract:
Graph neural networks (GNNs), which work with graph-structured data, have attracted considerable attention and achieved promising performance on graph-related tasks. While the majority of existing GNN methods focus on the convolutional operation for encoding the node representations, the graph pooling operation, which maps the set of nodes into a coarsened graph, is crucial for graph-level tasks. We argue that a well-defined graph pooling operation should avoid the information loss of the local node features and global graph structure. In this paper, we propose a hierarchical graph pooling method based on the multihead attention mechanism, namely GMAPS, which compresses both node features and graph structure into the coarsened graph. Specifically, a multihead attention mechanism is adopted to arrange nodes into a coarsened graph based on their features and structural dependencies between nodes. In addition, to enhance the expressiveness of the cluster representations, a self-supervised mechanism is introduced to maximize the mutual information between the cluster representations and the global representation of the hierarchical graph. Our experimental results show that the proposed GMAPS obtains significant and consistent performance improvements compared with state-of-the-art baselines on six benchmarks from the biological and social domains of graph classification and reconstruction tasks.
18

Zheng, Jianming, Fei Cai, Taihua Shao, and Honghui Chen. "Self-Interaction Attention Mechanism-Based Text Representation for Document Classification." Applied Sciences 8, no. 4 (April 12, 2018): 613. http://dx.doi.org/10.3390/app8040613.

19

Wang, Yue, Guanci Yang, Shaobo Li, Yang Li, Ling He, and Dan Liu. "Arrhythmia classification algorithm based on multi-head self-attention mechanism." Biomedical Signal Processing and Control 79 (January 2023): 104206. http://dx.doi.org/10.1016/j.bspc.2022.104206.

20

Chun, Yutong, Chuansheng Wang, and Mingke He. "A Novel Clothing Attribute Representation Network-Based Self-Attention Mechanism." IEEE Access 8 (2020): 201762–69. http://dx.doi.org/10.1109/access.2020.3035781.

21

Hu, Wanting, Lu Cao, Qunsheng Ruan, and Qingfeng Wu. "Research on Anomaly Network Detection Based on Self-Attention Mechanism." Sensors 23, no. 11 (May 25, 2023): 5059. http://dx.doi.org/10.3390/s23115059.

Abstract:
Network traffic anomaly detection is a key step in identifying and preventing network security threats. This study aims to construct a new deep-learning-based traffic anomaly detection model through in-depth research on new feature-engineering methods, significantly improving the efficiency and accuracy of network traffic anomaly detection. The specific research work mainly includes the following two aspects: 1. In order to construct a more comprehensive dataset, this article first starts from the raw data of the classic traffic anomaly detection dataset UNSW-NB15 and combines the feature extraction standards and feature calculation methods of other classic detection datasets to re-extract and design a feature description set for the original traffic data in order to accurately and completely describe the network traffic status. We reconstructed the dataset DNTAD using the feature-processing method designed in this article and conducted evaluation experiments on it. Experiments have shown that by verifying classic machine learning algorithms, such as XGBoost, this method not only does not reduce the training performance of the algorithm but also improves its operational efficiency. 2. This article proposes a detection algorithm model based on LSTM and the recurrent neural network self-attention mechanism for important time-series information contained in the abnormal traffic datasets. With this model, through the memory mechanism of the LSTM, the time dependence of traffic features can be learned. On the basis of LSTM, a self-attention mechanism is introduced, which can weight the features at different positions in the sequence, enabling the model to better learn the direct relationship between traffic features. A series of ablation experiments were also used to demonstrate the effectiveness of each component of the model. The experimental results show that, compared to other comparative models, the model proposed in this article achieves better experimental results on the constructed dataset.
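
The detection model described combines an LSTM, which learns the time dependence of traffic features, with a self-attention layer that weights features at different positions in the sequence. The sketch below shows one straightforward way to stack the two; the layer sizes, pooling, and classification head are illustrative assumptions, not the authors' DNTAD configuration.

```python
import torch
import torch.nn as nn

class LSTMSelfAttention(nn.Module):
    """LSTM captures the time dependence of traffic features; a self-attention
    layer then reweights the hidden states before classification."""
    def __init__(self, n_features: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, n_features)
        h, _ = self.lstm(x)                # (batch, time, hidden)
        ctx, _ = self.attn(h, h, h)        # self-attention over the time axis
        return self.head(ctx.mean(dim=1))  # pool over time, then classify

if __name__ == "__main__":
    logits = LSTMSelfAttention(n_features=40)(torch.randn(8, 30, 40))
    print(logits.shape)  # torch.Size([8, 2])
```
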
22

Chen, Ziye, Mingming Gong, Yanwu Xu, Chaohui Wang, Kun Zhang, and Bo Du. "Compressed Self-Attention for Deep Metric Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3561–68. http://dx.doi.org/10.1609/aaai.v34i04.5762.

Abstract:
In this paper, we aim to enhance self-attention (SA) mechanism for deep metric learning in visual perception, by capturing richer contextual dependencies in visual data. To this end, we propose a novel module, named compressed self-attention (CSA), which significantly reduces the computation and memory cost with a neglectable decrease in accuracy with respect to the original SA mechanism, thanks to the following two characteristics: i) it only needs to compute a small number of base attention maps for a small number of base feature vectors; and ii) the output at each spatial location can be simply obtained by an adaptive weighted average of the outputs calculated from the base attention maps. The high computational efficiency of CSA enables the application to high-resolution shallow layers in convolutional neural networks with little additional cost. In addition, CSA makes it practical to further partition the feature maps into groups along the channel dimension and compute attention maps for features in each group separately, thus increasing the diversity of long-range dependencies and accordingly boosting the accuracy. We evaluate the performance of CSA via extensive experiments on two metric learning tasks: person re-identification and local descriptor learning. Qualitative and quantitative comparisons with latest methods demonstrate the significance of CSA in this topic.
23

Zhang, Zhiqin, Bo Zhang, Fen Li, and Dehua Kong. "Multihead Self Attention Hand Pose Estimation." E3S Web of Conferences 218 (2020): 03023. http://dx.doi.org/10.1051/e3sconf/202021803023.

Abstract:
In this paper, we propose a hand pose estimation neural network architecture named MSAHP, which can greatly improve PCK (percentage of correct keypoints) by fusing a self-attention module into a CNN (convolutional neural network). The proposed network is based on a ResNet (residual neural network) backbone and concatenates discriminative features from multiple feature maps at different scales; a multi-head self-attention module is then used to focus on the salient feature map areas. In recent years, the self-attention mechanism has been widely applied in NLP and speech recognition, where it greatly improves key metrics, but in computer vision, and especially hand pose estimation, it has seen little application. Experiments on a hand pose estimation dataset demonstrate that our MSAHP achieves a higher PCK than existing state-of-the-art hand pose estimation methods. Specifically, the proposed method achieves a 93.68% PCK score on our mixed test dataset.
24

Ji, Mingi, Weonyoung Joo, Kyungwoo Song, Yoon-Yeong Kim, and Il-Chul Moon. "Sequential Recommendation with Relation-Aware Kernelized Self-Attention." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4304–11. http://dx.doi.org/10.1609/aaai.v34i04.5854.

Abstract:
Recent studies have identified that sequential recommendation is improved by the attention mechanism. Following this development, we propose Relation-Aware Kernelized Self-Attention (RKSA), which adopts the self-attention mechanism of the Transformer augmented with a probabilistic model. The original self-attention of the Transformer is a deterministic measure without relation awareness. Therefore, we introduce a latent space to the self-attention, and the latent space models the recommendation context from relations as a multivariate skew-normal distribution with a kernelized covariance matrix built from co-occurrences, item characteristics, and user information. This work merges the self-attention of the Transformer and sequential recommendation by adding a probabilistic model of the recommendation task specifics. We evaluated RKSA on benchmark datasets, and RKSA shows significant improvements over recent baseline models. RKSA was also able to produce a latent space model that explains the reasons for a recommendation.
25

Zhao, Ning, and Libo Liu. "融合自注意力机制的人物姿态迁移生成模型 [Person Pose Transfer Generation Model Incorporating a Self-Attention Mechanism]." Laser & Optoelectronics Progress 59, no. 4 (2022): 0410014. http://dx.doi.org/10.3788/lop202259.0410014.

26

Daihong, Jiang, Hu Yuanzheng, Dai Lei, and Peng Jin. "Facial Expression Recognition Based on Attention Mechanism." Scientific Programming 2021 (March 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/6624251.

Abstract:
At present, traditional facial expression recognition methods of convolutional neural networks are based on local ideas for feature expression, which results in the model’s low efficiency in capturing the dependence between long-range pixels, leading to poor performance for facial expression recognition. In order to solve the above problems, this paper combines a self-attention mechanism with a residual network and proposes a new facial expression recognition model based on the global operation idea. This paper first introduces the self-attention mechanism on the basis of the residual network and finds the relative importance of a location by calculating the weighted average of all location pixels, then introduces channel attention to learn different features on the channel domain, and generates channel attention to focus on the interactive features in different channels so that the robustness can be improved; finally, it merges the self-attention mechanism and the channel attention mechanism to increase the model’s ability to extract globally important features. The accuracy of this paper on the CK+ and FER2013 datasets is 97.89% and 74.15%, respectively, which fully confirmed the effectiveness and superiority of the model in extracting global features.
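
The model in this abstract merges a spatial self-attention branch (a weighted average over all pixel locations) with a channel attention branch on top of a residual network. The sketch below shows common realizations of the two branches; the reduction ratio, the 1x1 projections, and how the branch outputs would be fused into the residual backbone are assumptions for illustration, not the paper's exact layout.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average pooling
    followed by a small bottleneck MLP yields one weight per channel."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                         # x: (batch, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))           # (batch, C)
        return x * w[:, :, None, None]

class SpatialSelfAttention(nn.Module):
    """Non-local style spatial self-attention: every pixel is re-expressed as a
    weighted average of all pixels, weighted by pairwise similarity."""
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, HW, C/8)
        k = self.k(x).flatten(2)                   # (b, C/8, HW)
        v = self.v(x).flatten(2).transpose(1, 2)   # (b, HW, C)
        attn = torch.softmax(q @ k, dim=-1)        # pairwise similarity weights
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                             # residual connection

if __name__ == "__main__":
    feats = torch.randn(2, 64, 28, 28)
    fused = SpatialSelfAttention(64)(ChannelAttention(64)(feats))
    print(fused.shape)  # torch.Size([2, 64, 28, 28])
```
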
27

Tiwari, Prayag, Amit Kumar Jaiswal, Sahil Garg, and Ilsun You. "SANTM: Efficient Self-attention-driven Network for Text Matching." ACM Transactions on Internet Technology 22, no. 3 (August 31, 2022): 1–21. http://dx.doi.org/10.1145/3426971.

Abstract:
Self-attention mechanisms have recently been embraced for a broad range of text-matching applications. A self-attention model takes only one sentence as input with no extra information, i.e., one can utilize the final hidden state or pooling. However, text-matching problems can be interpreted in either symmetrical or asymmetrical scopes. For instance, paraphrase detection is a symmetrical task, while textual entailment classification and question-answer matching are considered asymmetrical tasks. In this article, we leverage attractive properties of the self-attention mechanism and propose an attention-based network that incorporates three key components for inter-sequence attention: global pointwise features, preceding attentive features, and contextual features, while updating the rest of the components. Our model is evaluated on two benchmark datasets covering the tasks of textual entailment and question-answer matching. The proposed efficient Self-attention-driven Network for Text Matching outperforms the state of the art on the Stanford Natural Language Inference and WikiQA datasets with far fewer parameters.
28

Yang, Zuoxi, and Shoubin Dong. "HSRec: Hierarchical self-attention incorporating knowledge graph for sequential recommendation." Journal of Intelligent & Fuzzy Systems 42, no. 4 (March 4, 2022): 3749–60. http://dx.doi.org/10.3233/jifs-211953.

Abstract:
Modeling user’s fine-grained preferences and dynamic preference evolution from their chronological behaviors are challenging and crucial for sequential recommendation. In this paper, we develop a Hierarchical Self-Attention Incorporating Knowledge Graph for Sequential Recommendation (HSRec). HSRec models not only the user’s intrinsic preferences but also the user’s external potential interests to capture the user’s fine-grained preferences. Specifically, the intrinsic interest module and potential interest module are designed to capture these two preferences respectively. In the intrinsic interest module, user’s sequential patterns are characterized from their behaviors via the self-attention mechanism. As for the potential interest module, high-order paths can be generated with the help of the knowledge graph. Therefore, a hierarchical self-attention mechanism is designed to aggregate the semantic information of user interaction from these paths. Specifically, an entity-level self-attention mechanism is applied to capture the sequential patterns contained in the high-order paths while an interaction-level self-attention mechanism is designed to further capture the semantic information from user interactions. Moreover, according to the high-order semantic relevance, HSRec can explore the user’s dynamic preferences at each time, thus describing the user’s dynamic preference evolution. Finally, experiments conducted on three real world datasets demonstrate the state-of-the-art performance of the HSRec.
29

Ma, Xin, Zhanzhan Liu, Mingxing Zheng, and Youqing Wang. "Application and exploration of self-attention mechanism in dynamic process monitoring." IFAC-PapersOnLine 55, no. 6 (2022): 139–44. http://dx.doi.org/10.1016/j.ifacol.2022.07.119.

30

Liang, Hong, Hui Zhou, Qian Zhang, and Ting Wu. "Object Detection Algorithm Based on Context Information and Self-Attention Mechanism." Symmetry 14, no. 5 (April 28, 2022): 904. http://dx.doi.org/10.3390/sym14050904.

Abstract:
Pursuing an object detector with good detection accuracy while ensuring detection speed has always been a challenging problem in object detection. This paper proposes a multi-scale context information fusion model combined with a self-attention block (CSA-Net). First, an improved backbone network ResNet-SA is designed with self-attention to reduce the interference of the image background area and focus on the object region. Second, this work introduces a receptive field feature enhancement module (RFFE) to combine local and global features while increasing the receptive field. Then this work adopts a spatial feature fusion pyramid with a symmetrical structure, which fuses and transfers semantic information and feature information. Finally, a sibling detection head using an anchor-free detection mechanism is applied to increase the accuracy and speed of detection at the end of the model. A large number of experiments support the above analysis and conclusions. Our model achieves an average accuracy of 46.8% on the COCO 2017 test set.
31

Zhang, Ru, Xinjian Zhao, Jiaqi Li, Song Zhang, and Zhijie Shang. "A malicious code family classification method based on self-attention mechanism." Journal of Physics: Conference Series 2010, no. 1 (September 1, 2021): 012066. http://dx.doi.org/10.1088/1742-6596/2010/1/012066.

32

YE, Rui-da, Wei-jie WANG, Liang HE, Xiao-cen CHEN, and Yue XUE. "RUL prediction of aero-engine based on residual self-attention mechanism." Optics and Precision Engineering 29, no. 6 (2021): 1482–90. http://dx.doi.org/10.37188/ope.20212906.1482.

33

Li, Jinsong, Jianhua Peng, Shuxin Liu, Lintianran Weng, and Cong Li. "Temporal link prediction in directed networks based on self-attention mechanism." Intelligent Data Analysis 26, no. 1 (January 13, 2022): 173–88. http://dx.doi.org/10.3233/ida-205524.

Abstract:
The development of graph neural networks (GCN) makes it possible to learn structural features from evolving complex networks. Even though a wide range of realistic networks are directed ones, few existing works investigated the properties of directed and temporal networks. In this paper, we address the problem of temporal link prediction in directed networks and propose a deep learning model based on GCN and self-attention mechanism, namely TSAM. The proposed model adopts an autoencoder architecture, which utilizes graph attentional layers to capture the structural feature of neighborhood nodes, as well as a set of graph convolutional layers to capture motif features. A graph recurrent unit layer with self-attention is utilized to learn temporal variations in the snapshot sequence. We run comparative experiments on four realistic networks to validate the effectiveness of TSAM. Experimental results show that TSAM outperforms most benchmarks under two evaluation metrics.
34

Cheng, Kefei, Yanan Yue, and Zhiwen Song. "Sentiment Classification Based on Part-of-Speech and Self-Attention Mechanism." IEEE Access 8 (2020): 16387–96. http://dx.doi.org/10.1109/access.2020.2967103.

35

Liao, Fei, Liangli Ma, Jingjing Pei, and Linshan Tan. "Combined Self-Attention Mechanism for Chinese Named Entity Recognition in Military." Future Internet 11, no. 8 (August 18, 2019): 180. http://dx.doi.org/10.3390/fi11080180.

Abstract:
Military named entity recognition (MNER) is one of the key technologies in military information extraction. Traditional methods for the MNER task rely on cumbersome feature engineering and specialized domain knowledge. In order to solve this problem, we propose a method employing a bidirectional long short-term memory (BiLSTM) neural network with a self-attention mechanism to identify the military entities automatically. We obtain distributed vector representations of the military corpus by unsupervised learning and the BiLSTM model combined with the self-attention mechanism is adopted to capture contextual information fully carried by the character vector sequence. The experimental results show that the self-attention mechanism can improve effectively the performance of MNER task. The F-score of the military documents and network military texts identification was 90.15% and 89.34%, respectively, which was better than other models.
36

Peng, Dunlu, Weiwei Yuan, and Cong Liu. "HARSAM: A Hybrid Model for Recommendation Supported by Self-Attention Mechanism." IEEE Access 7 (2019): 12620–29. http://dx.doi.org/10.1109/access.2019.2892565.

37

Zhou, Yuhang, Xiaoli Huo, Zhiqun Gu, Jiawei Zhang, Yi Ding, Rentao Gu, and Yuefeng Ji. "Self-Attention Mechanism-Based Multi-Channel QoT Estimation in Optical Networks." Photonics 10, no. 1 (January 6, 2023): 63. http://dx.doi.org/10.3390/photonics10010063.

Abstract:
It is essential to estimate the quality of transmission (QoT) of lightpaths before their establishment for efficient planning and operation of optical networks. Due to the nonlinear effect of fibers, the deployed lightpaths influence the QoT of each other; thus, multi-channel QoT estimation is necessary, which provides complete QoT information for network optimization. Moreover, the different interfering channels have different effects on the channel under test. However, the existing artificial-neural-network-based multi-channel QoT estimators (ANN-QoT-E) neglect the different effects of the interfering channels in their input layer, which affects their estimation accuracy severely. In this paper, we propose a self-attention mechanism-based multi-channel QoT estimator (SA-QoT-E) to improve the estimation accuracy of the ANN-QoT-E. In the SA-QoT-E, the input features are designed as a sequence of feature vectors of channels that route the same path, and the self-attention mechanism dynamically assigns weights to the feature vectors of interfering channels according to their effects on the channel under test. Moreover, a hyperparameter search method is used to optimize the SA-QoT-E. The simulation results show that, compared with the ANN-QoT-E, our proposed SA-QoT-E achieves higher estimation accuracy, and can be directly applied to the network wavelength expansion scenarios without retraining.
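
The estimator described treats the channels routed over the same path as a sequence of feature vectors and lets self-attention assign weights to the interfering channels according to their effect on the channel under test. A minimal sketch of that input layout is given below; the feature dimension, model width, and regression head are assumptions, not the paper's tuned hyperparameters.

```python
import torch
import torch.nn as nn

class SelfAttentionQoTEstimator(nn.Module):
    """Self-attention over the per-channel feature vectors of one path, followed
    by a regression head that predicts one QoT value per channel."""
    def __init__(self, n_features: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, channels):                 # (batch, n_channels, n_features)
        h = self.embed(channels)
        h, weights = self.attn(h, h, h)           # weights: learned interference
        return self.head(h).squeeze(-1), weights  # per-channel QoT estimate

if __name__ == "__main__":
    est, w = SelfAttentionQoTEstimator(n_features=6)(torch.randn(2, 80, 6))
    print(est.shape, w.shape)  # torch.Size([2, 80]) torch.Size([2, 80, 80])
```
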
38

Wu, Peiyang, Zongxu Pan, Hairong Tang, and Yuxin Hu. "Cloudformer: A Cloud-Removal Network Combining Self-Attention Mechanism and Convolution." Remote Sensing 14, no. 23 (December 3, 2022): 6132. http://dx.doi.org/10.3390/rs14236132.

Abstract:
Optical remote-sensing images have a wide range of applications, but they are often obscured by clouds, which affects subsequent analysis. Therefore, cloud removal becomes a necessary preprocessing step. In this paper, a novel and superior transformer-based network is proposed, named Cloudformer. The proposed method novelly combines the advantages of convolution and a self-attention mechanism: it uses convolution layers to extract simple features over a small range in the shallow layer, and exerts the advantage of a self-attention mechanism in extracting correlation in a large range in the deep layer. This method also introduces Locally-enhanced Positional Encoding (LePE) to flexibly generate suitable positional encodings for different inputs and to utilize local information to enhance encoding capabilities. Exhaustive experiments on public datasets demonstrate the superior ability of the method to remove both thin and thick clouds, and the effectiveness of the proposed modules is validated by ablation studies.
39

Wang, Mei, Yu Yao, Hongbin Qiu, and Xiyu Song. "Adaptive Memory-Controlled Self-Attention for Polyphonic Sound Event Detection." Symmetry 14, no. 2 (February 12, 2022): 366. http://dx.doi.org/10.3390/sym14020366.

Abstract:
Polyphonic sound event detection (SED) is the task of detecting the time stamps and the class of sound event that occurred during a recording. Real life sound events overlap in recordings, and their durations vary dramatically, making them even harder to recognize. In this paper, we propose Convolutional Recurrent Neural Networks (CRNNs) to extract hidden state feature representations; then, a self-attention mechanism using a symmetric score function is introduced to memorize long-range dependencies of features that the CRNNs extract. Furthermore, we propose to use memory-controlled self-attention to explicitly compute the relations between time steps in audio representation embedding. Then, we propose a strategy for adaptive memory-controlled self-attention mechanisms. Moreover, we applied semi-supervised learning, namely, mean teacher–student methods, to exploit unlabeled audio data. The proposed methods all performed well in the Detection and Classification of Acoustic Scenes and Events (DCASE) 2017 Sound Event Detection in Real Life Audio (task3) test and the DCASE 2021 Sound Event Detection and Separation in Domestic Environments (task4) test. In DCASE 2017 task3, our model surpassed the challenge’s winning system’s F1-score by 6.8%. We show that the proposed adaptive memory-controlled model reached the same performance level as a fixed attention width model. Experimental results indicate that the proposed attention mechanism is able to improve sound event detection. In DCASE 2021 task4, we investigated various pooling strategies in two scenarios. In addition, we found that in weakly labeled semi-supervised sound event detection, building an attention layer on top of the CRNN is needless repetition. This conclusion could be applied to other multi-instance learning problems.
40

Liu, Guangjie, Xin Ma, Jinlong Zhu, Yu Zhang, Danyang Yang, Jianfeng Wang, and Yi Wang. "Individualized tourism recommendation based on self-attention." PLOS ONE 17, no. 8 (August 25, 2022): e0272319. http://dx.doi.org/10.1371/journal.pone.0272319.

Abstract:
Although the era of big data has brought convenience to daily life, it has also caused many problems. In the field of scenic tourism, it is increasingly difficult for people to choose the scenic spot that meets their needs from mass information. To provide high-quality services to users, a recommended tourism model is introduced in this paper. On the one hand, the tourism system utilises the users’ historical interactions with different scenic spots to infer their short- and long-term favorites. Among them, the users’ short-term demands are modelled through self-attention mechanism, and the proportion of short- and long-term favorites is calculated using the Euclidean distance. On the other hand, the system models the relationship between multiple scenic spots to strengthen the item relationship and further form the most relevant tourist recommendations.
41

Wei, Yupeng, Dazhong Wu, and Janis Terpenny. "Bearing remaining useful life prediction using self-adaptive graph convolutional networks with self-attention mechanism." Mechanical Systems and Signal Processing 188 (April 2023): 110010. http://dx.doi.org/10.1016/j.ymssp.2022.110010.

42

Ma, Suling. "A Study of Two-Way Short- and Long-Term Memory Network Intelligent Computing IoT Model-Assisted Home Education Attention Mechanism." Computational Intelligence and Neuroscience 2021 (December 21, 2021): 1–11. http://dx.doi.org/10.1155/2021/3587884.

Abstract:
This paper analyzes and collates the research on traditional homeschooling attention mechanism and homeschooling attention mechanism based on two-way short- and long-term memory network intelligent computing IoT model and finds the superiority of two-way short- and long-term memory network intelligent computing IoT model. The two-way short- and long-term memory network intelligent computing IoT model is improved and an improved deep neural network intelligent computing IoT is proposed, and the improved method is verified based on discrete signal homeschooling classification experiments, followed by focusing on the application research of the two-way short- and long-term memory network intelligent computing IoT model-assisted homeschooling attention mechanism. Learning based on neural network, human behavior recognition method combining spatiotemporal networks, a homeschooling method integrating bidirectional short- and long-term memory networks and attention mechanisms is designed. The visual attention mechanism is used to add weight information to the deep visual features extracted by the convolutional neural network, and a new feature sequence incorporating salient attention weights is output. This feature sequence is then decoded using an IndRNN independent recurrent neural network to finally classify and decide on the homeschooling category. Experiments on the UCF101 dataset demonstrate that the incorporation of the attention mechanism can improve the ability of the network to classify. The attention mechanism can help the intelligent computing IoT model discover key features, and the self-attention mechanism can effectively capture the internal features of homeschooling and optimize the feature vector. We propose the strategy of combining the self-attention mechanism with a bidirectional short- and long-term memory network to solve the family education classification problem and experimentally verify that the intelligent computing IoT model combined with the self-attention mechanism can more easily capture the interdependent features in family education, which can effectively solve the family education problem and further improve the family education classification accuracy.
43

Jiang, Cheng, Yuanxi Peng, Xuebin Tang, Chunchao Li, and Teng Li. "PointSwin: Modeling Self-Attention with Shifted Window on Point Cloud." Applied Sciences 12, no. 24 (December 9, 2022): 12616. http://dx.doi.org/10.3390/app122412616.

Abstract:
As a pioneering work that directly applies deep learning methods to raw point cloud data, PointNet has the advantages of fast convergence speed and high computational efficiency. However, its feature learning in local areas has a certain defect, which limits the expressive ability of the model. In order to enhance the feature representation in the local area, this paper proposes a new point cloud processing model, which is called PointSwin. By applying the Self-Attention with Shifted-Window mechanism to learn the correlation between mixed features and points, PointSwin encourages features to enhance their interactions with each other to achieve the effect of feature enhancement. At the same time, PointSwin also achieves a better balance between higher accuracy results and less time overhead by adopting the Mask mechanism to reduce redundant computations. In addition, this paper also proposes an efficient model called PointSwin-E. It can maintain good performance while greatly reducing the computational overhead. The results of the comparative experiments on ModelNet40 dataset show that PointSwin and PointSwin-E are better than PointNet and PointNet++ in terms of accuracy, and the effectiveness verification experiments on the Self-Attention with Shifted-Window mechanism also prove the superiority of this model.
44

Pan, Wenxia. "English Machine Translation Model Based on an Improved Self-Attention Technology." Scientific Programming 2021 (December 23, 2021): 1–11. http://dx.doi.org/10.1155/2021/2601480.

Abstract:
English machine translation is a natural language processing research direction that has important scientific research value and practical value in the current artificial intelligence boom. The variability of language, the limited ability to express semantic information, and the lack of parallel corpus resources all limit the usefulness and popularity of English machine translation in practical applications. The self-attention mechanism has received a lot of attention in English machine translation tasks because of its highly parallelizable computing ability, which reduces the model’s training time and allows it to capture the semantic relevance of all words in the context. The efficiency of the self-attention mechanism, however, differs from that of recurrent neural networks because it ignores the position and structure information between context words. The English machine translation model based on the self-attention mechanism uses sine and cosine position coding to represent the absolute position information of words in order to enable the model to use position information between words. This method, on the other hand, can reflect relative distance but does not provide directionality. As a result, a new model of English machine translation is proposed, which is based on the logarithmic position representation method and the self-attention mechanism. This model retains the distance and directional information between words, as well as the efficiency of the self-attention mechanism. Experiments show that the nonstrict phrase extraction method can effectively extract phrase translation pairs from the n-best word alignment results and that the extraction constraint strategy can improve translation quality even further. Nonstrict phrase extraction methods and n-best alignment results can significantly improve the quality of translation translations when compared to traditional phrase extraction methods based on single alignment.
45

Ishizuka, Ryoto, Ryo Nishikimi, and Kazuyoshi Yoshii. "Global Structure-Aware Drum Transcription Based on Self-Attention Mechanisms." Signals 2, no. 3 (August 13, 2021): 508–26. http://dx.doi.org/10.3390/signals2030031.

Abstract:
This paper describes an automatic drum transcription (ADT) method that directly estimates a tatum-level drum score from a music signal in contrast to most conventional ADT methods that estimate the frame-level onset probabilities of drums. To estimate a tatum-level score, we propose a deep transcription model that consists of a frame-level encoder for extracting the latent features from a music signal and a tatum-level decoder for estimating a drum score from the latent features pooled at the tatum level. To capture the global repetitive structure of drum scores, which is difficult to learn with a recurrent neural network (RNN), we introduce a self-attention mechanism with tatum-synchronous positional encoding into the decoder. To mitigate the difficulty of training the self-attention-based model from an insufficient amount of paired data and to improve the musical naturalness of the estimated scores, we propose a regularized training method that uses a global structure-aware masked language (score) model with a self-attention mechanism pretrained from an extensive collection of drum scores. The experimental results showed that the proposed regularized model outperformed the conventional RNN-based model in terms of the tatum-level error rate and the frame-level F-measure, even when only a limited amount of paired data was available so that the non-regularized model underperformed the RNN-based model.
46

Hu, Gensheng, Lidong Qian, Dong Liang, and Mingzhu Wan. "Self-adversarial Training and Attention for Multi-task Wheat Phenotyping." Applied Engineering in Agriculture 35, no. 6 (2019): 1009–14. http://dx.doi.org/10.13031/aea.13406.

Abstract:
Phenotypic monitoring provides important data support for precision agriculture management. This study proposes a deep learning-based method to gain an accurate count of wheat ears and spikelets. The deep learning networks incorporate self-adversarial training and an attention mechanism with stacked hourglass networks. Four stacked hourglass networks follow a holistic attention map to construct the generator of the self-adversarial networks. The holistic attention maps enable the networks to focus on the overall consistency of the whole wheat. The discriminator of the self-adversarial networks has the same structure as the generator and supplies an adversarial loss to the generator. This process improves the generator’s learning ability and prediction accuracy for occluded wheat ears. This method yields a higher wheat ear counting accuracy on the Annotated Crop Image Database (ACID) data set than the previous state-of-the-art algorithm. Keywords: Attention mechanism, Plant phenotype, Self-adversarial networks, Stacked hourglass.
47

Fang, Yong, Shaoshuai Yang, Bin Zhao, and Cheng Huang. "Cyberbullying Detection in Social Networks Using Bi-GRU with Self-Attention Mechanism." Information 12, no. 4 (April 16, 2021): 171. http://dx.doi.org/10.3390/info12040171.

Abstract:
With the propagation of cyberbullying in social networks as a trending subject, cyberbullying detection has become a social problem that researchers are concerned about. Developing intelligent models and systems helps detect cyberbullying automatically. This work focuses on text-based cyberbullying detection because it is the commonly used information carrier in social networks and is the widely used feature in this regard studies. Motivated by the documented success of neural networks, we propose a complete model combining the bidirectional gated recurrent unit (Bi-GRU) and the self-attention mechanism. In detail, we introduce the design of a GRU cell and Bi-GRU’s advantage for learning the underlying relationships between words from both directions. Besides, we present the design of the self-attention mechanism and the benefit of this joining for achieving a greater performance of cyberbullying classification tasks. The proposed model could address the limitation of the vanishing and exploding gradient problems. We avoid using oversampling or downsampling on experimental data which could result in the overestimation of evaluation. We conduct a comparative assessment on two commonly used datasets, and the results show that our proposed method outperformed baselines in all evaluation metrics.
48

Fernández-Llaneza, Daniel, Silas Ulander, Dea Gogishvili, Eva Nittinger, Hongtao Zhao, and Christian Tyrchan. "Siamese Recurrent Neural Network with a Self-Attention Mechanism for Bioactivity Prediction." ACS Omega 6, no. 16 (April 15, 2021): 11086–94. http://dx.doi.org/10.1021/acsomega.1c01266.

49

Chen, Shuai, Lin Luo, Qilei Xia, and Lunjie Wang. "Self-attention Mechanism based Dynamic Fault Diagnosis and Classification for Chemical Processes." Journal of Physics: Conference Series 1914, no. 1 (May 1, 2021): 012046. http://dx.doi.org/10.1088/1742-6596/1914/1/012046.

50

Nkabiti, Kabo Poloko, and Yueyun Chen. "Application of solely self-attention mechanism in CSI-fingerprinting-based indoor localization." Neural Computing and Applications 33, no. 15 (January 18, 2021): 9185–98. http://dx.doi.org/10.1007/s00521-020-05681-1.
