
Journal articles on the topic 'Self-attention mechanisms'

Consult the top 50 journal articles for your research on the topic 'Self-attention mechanisms.'


1

Makarov, Ilya, Maria Bakhanova, Sergey Nikolenko, and Olga Gerasimova. "Self-supervised recurrent depth estimation with attention mechanisms." PeerJ Computer Science 8 (January 31, 2022): e865. http://dx.doi.org/10.7717/peerj-cs.865.

Abstract:
Depth estimation has been an essential task for many computer vision applications, especially in autonomous driving, where safety is paramount. Depth can be estimated not only with traditional supervised learning but also via a self-supervised approach that relies on camera motion and does not require ground truth depth maps. Recently, major improvements have been introduced to make self-supervised depth prediction more precise. However, most existing approaches still focus on single-frame depth estimation, even in the self-supervised setting. Since most methods can operate with frame sequences, we believe that the quality of current models can be significantly improved with the help of information about previous frames. In this work, we study different ways of integrating recurrent blocks and attention mechanisms into a common self-supervised depth estimation pipeline. We propose a set of modifications that utilize temporal information from previous frames and provide new neural network architectures for monocular depth estimation in a self-supervised manner. Our experiments on the KITTI dataset show that the proposed modifications can be an effective tool for exploiting temporal information in a depth prediction pipeline.
2

Bae, Ara, and Wooil Kim. "Speaker Verification Employing Combinations of Self-Attention Mechanisms." Electronics 9, no. 12 (December 21, 2020): 2201. http://dx.doi.org/10.3390/electronics9122201.

Abstract:
One of the most recent speaker recognition methods that demonstrates outstanding performance in noisy environments involves extracting the speaker embedding using an attention mechanism instead of average or statistics pooling. In the attention method, the speaker recognition performance is improved by employing multiple heads rather than a single head. In this paper, we propose advanced methods to extract a new embedding by compensating for the disadvantages of the single-head and multi-head attention methods. The combination method comprising single-head and split-based multi-head attentions shows a 5.39% Equal Error Rate (EER). When the single-head and projection-based multi-head attention methods are combined, the speaker recognition performance improves by 4.45%, which is the best performance in this work. Our experimental results demonstrate that the attention mechanism reflects the speaker’s properties more effectively than average or statistics pooling, and the speaker verification system could be further improved by employing combinations of different attention techniques.
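To make the pooling contrast in this abstract concrete, the following minimal NumPy sketch compares average pooling of frame-level features with single-head attention pooling; the feature dimensions and the learnable scoring vector w are illustrative assumptions, not the authors' configuration.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def average_pooling(frames):
    # frames: (T, d) frame-level features -> (d,) utterance-level embedding
    return frames.mean(axis=0)

def attention_pooling(frames, w):
    # w: (d,) learnable scoring vector (illustrative single-head case)
    scores = frames @ w           # (T,) one relevance score per frame
    weights = softmax(scores)     # normalise the scores over time
    return weights @ frames       # (d,) attention-weighted sum of frames

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 64))   # 200 frames of 64-dim features
w = rng.normal(size=64)
print(average_pooling(frames).shape, attention_pooling(frames, w).shape)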
3

Cao, Fude, Chunguang Zheng, Limin Huang, Aihua Wang, Jiong Zhang, Feng Zhou, Haoxue Ju, Haitao Guo, and Yuxia Du. "Research of Self-Attention in Image Segmentation." Journal of Information Technology Research 15, no. 1 (January 2022): 1–12. http://dx.doi.org/10.4018/jitr.298619.

Abstract:
Although traditional convolutional neural networks have been applied to image segmentation successfully, they have some limitations: long-range contextual information in the image is not well captured. Following the success of self-attention mechanisms in natural language processing (NLP), researchers have tried to introduce the attention mechanism into computer vision, and self-attention has indeed proved able to address this long-range dependency problem. This paper summarizes the application of self-attention to image segmentation over the past two years and considers whether self-attention modules in this field can replace the convolution operation in the future.
4

Dai, Biyun, Jinlong Li, and Ruoyi Xu. "Multiple Positional Self-Attention Network for Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7610–17. http://dx.doi.org/10.1609/aaai.v34i05.6261.

Abstract:
Self-attention mechanisms have recently attracted considerable interest in Natural Language Processing (NLP) tasks, and relative positional information is important to them. We propose a Faraway Mask, which focuses on the (2m + 1)-gram words, and a Scaled-Distance Mask, which applies a logarithmic distance penalty, to avoid and to weaken the self-attention of distant words, respectively. To exploit the different masks, we present a Positional Self-Attention Layer that generates the different masked self-attentions and a following Position-Fusion Layer in which the fused positional information multiplies the masked self-attentions to generate sentence embeddings. To evaluate our sentence-embedding approach, the Multiple Positional Self-Attention Network (MPSAN), we perform comparison experiments on sentiment analysis, semantic relatedness and sentence classification tasks. The results show that MPSAN outperforms state-of-the-art methods on five datasets, with test accuracy improved by 0.81% and 0.6% on the SST and CR datasets, respectively. In addition, we reduce the number of training parameters and improve the time efficiency of MPSAN by lowering the dimensionality of the self-attention and simplifying the fusion mechanism.
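As a rough illustration of the masking idea described above (not the MPSAN implementation itself), the sketch below adds a hard (2m + 1)-gram window mask and a soft logarithmic distance penalty to ordinary scaled dot-product self-attention; the exact penalty form and the choice m = 2 are assumptions.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def distance_masked_self_attention(X, m=2):
    # X: (n, d) token embeddings used as queries, keys and values
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)                 # plain scaled dot-product scores
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])    # token distance |i - j|
    hard_mask = np.where(dist <= m, 0.0, -np.inf) # keep only a (2m + 1)-gram window
    soft_mask = -np.log1p(dist)                   # logarithmic distance penalty
    local_out = softmax(scores + hard_mask) @ X
    scaled_out = softmax(scores + soft_mask) @ X
    return local_out, scaled_out

X = np.random.default_rng(1).normal(size=(6, 8))
local_out, scaled_out = distance_masked_self_attention(X)
print(local_out.shape, scaled_out.shape)   # (6, 8) (6, 8)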
5

Xie, Fei, Dalong Zhang, and Chengming Liu. "Global–Local Self-Attention Based Transformer for Speaker Verification." Applied Sciences 12, no. 19 (October 10, 2022): 10154. http://dx.doi.org/10.3390/app121910154.

Abstract:
Transformer models are now widely used for speech processing tasks due to their powerful sequence modeling capabilities. Previous work determined an efficient way to model speaker embeddings with the Transformer by combining transformers with convolutional networks. However, traditional global self-attention mechanisms lack the ability to capture local information. To alleviate these problems, we propose a novel global–local self-attention mechanism. Instead of using local or global multi-head attention alone, this method performs local and global attention in two parallel groups to enhance local modeling and reduce computational cost. To better handle local location information, we introduce locally enhanced location encoding in the speaker verification task. Experimental results on the VoxCeleb1 test set and the VoxCeleb2 dev set demonstrate the effectiveness of our proposed global–local self-attention mechanism. Compared with the Transformer-based Robust Embedding Extractor Baseline System, the proposed speaker Transformer network exhibits better performance in the speaker verification task.
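The following hedged NumPy sketch illustrates the parallel global/local idea: one unrestricted branch and one windowed branch are run side by side and their outputs concatenated, standing in for the split of attention heads into global and local groups; the window size and the single-head simplification are assumptions rather than the paper's settings.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(X, mask=None):
    # single-head scaled dot-product self-attention over X: (n, d)
    scores = X @ X.T / np.sqrt(X.shape[-1])
    if mask is not None:
        scores = scores + mask
    return softmax(scores) @ X

def global_local_attention(X, window=3):
    # Global branch attends everywhere; local branch is restricted to a
    # +/- window neighbourhood; outputs are concatenated feature-wise.
    n = X.shape[0]
    idx = np.arange(n)
    local_mask = np.where(np.abs(idx[:, None] - idx[None, :]) <= window, 0.0, -np.inf)
    return np.concatenate([attend(X), attend(X, local_mask)], axis=-1)

frames = np.random.default_rng(2).normal(size=(50, 32))
print(global_local_attention(frames).shape)   # (50, 64)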
6

Ishizuka, Ryoto, Ryo Nishikimi, and Kazuyoshi Yoshii. "Global Structure-Aware Drum Transcription Based on Self-Attention Mechanisms." Signals 2, no. 3 (August 13, 2021): 508–26. http://dx.doi.org/10.3390/signals2030031.

Abstract:
This paper describes an automatic drum transcription (ADT) method that directly estimates a tatum-level drum score from a music signal in contrast to most conventional ADT methods that estimate the frame-level onset probabilities of drums. To estimate a tatum-level score, we propose a deep transcription model that consists of a frame-level encoder for extracting the latent features from a music signal and a tatum-level decoder for estimating a drum score from the latent features pooled at the tatum level. To capture the global repetitive structure of drum scores, which is difficult to learn with a recurrent neural network (RNN), we introduce a self-attention mechanism with tatum-synchronous positional encoding into the decoder. To mitigate the difficulty of training the self-attention-based model from an insufficient amount of paired data and to improve the musical naturalness of the estimated scores, we propose a regularized training method that uses a global structure-aware masked language (score) model with a self-attention mechanism pretrained from an extensive collection of drum scores. The experimental results showed that the proposed regularized model outperformed the conventional RNN-based model in terms of the tatum-level error rate and the frame-level F-measure, even when only a limited amount of paired data was available so that the non-regularized model underperformed the RNN-based model.
7

Zhu, Hu, Ze Wang, Yu Shi, Yingying Hua, Guoxia Xu, and Lizhen Deng. "Multimodal Fusion Method Based on Self-Attention Mechanism." Wireless Communications and Mobile Computing 2020 (September 23, 2020): 1–8. http://dx.doi.org/10.1155/2020/8843186.

Abstract:
Multimodal fusion is one of the popular directions of multimodal research and an emerging research field of artificial intelligence. It aims to take advantage of the complementarity of heterogeneous data and provide reliable classification for the model; multimodal data fusion transforms data from multiple single-modality representations into a compact multimodal representation. Most previous studies in this field used tensor-based multimodal representations, but as the input is converted into a tensor, the dimensionality and computational complexity increase exponentially. In this paper, we propose a low-rank tensor multimodal fusion method with an attention mechanism, which improves efficiency and reduces computational complexity. We evaluate our model on three multimodal fusion tasks based on the public datasets CMU-MOSI, IEMOCAP, and POM. Our model achieves good performance while flexibly capturing the global and local connections. Compared with other tensor-based multimodal fusion methods, experiments show that our model steadily achieves better results under a series of attention mechanisms.
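The low-rank tensor formulation is specific to the cited work; the sketch below only illustrates, under simplified assumptions, how an attention mechanism can weight and fuse per-modality embeddings into a single representation (the scoring weights W and the 32-dimensional embeddings are placeholders, not the paper's architecture).

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(modalities, W):
    # modalities: list of (d,) vectors, e.g. text, audio and video embeddings
    # W: (d, 1) learnable scoring weights (illustrative)
    M = np.stack(modalities)        # (num_modalities, d)
    scores = (M @ W).squeeze(-1)    # (num_modalities,) relevance of each modality
    alpha = softmax(scores)         # attention weights over the modalities
    return alpha @ M                # (d,) fused multimodal representation

rng = np.random.default_rng(3)
text, audio, video = (rng.normal(size=32) for _ in range(3))
W = rng.normal(size=(32, 1))
print(attention_fusion([text, audio, video], W).shape)   # (32,)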
8

POSNER, MICHAEL I., and MARY K. ROTHBART. "Developing mechanisms of self-regulation." Development and Psychopathology 12, no. 3 (September 2000): 427–41. http://dx.doi.org/10.1017/s0954579400003096.

Abstract:
Child development involves both reactive and self-regulatory mechanisms that children develop in conjunction with social norms. A half-century of research has uncovered aspects of the physical basis of attentional networks that produce regulation, and has given us some knowledge of how the social environment may alter them. In this paper, we discuss six forms of developmental plasticity related to aspects of attention. We then focus on effortful or executive aspects of attention, reviewing research on temperamental individual differences and important pathways to normal and pathological development. Pathologies of development may arise when regulatory and reactive systems fail to reach the balance that allows for both self-expression and socially acceptable behavior. It remains a challenge for our society during the next millennium to obtain the information necessary to design systems that allow a successful balance to be realized by the largest possible number of children.
9

Tiwari, Prayag, Amit Kumar Jaiswal, Sahil Garg, and Ilsun You. "SANTM: Efficient Self-attention-driven Network for Text Matching." ACM Transactions on Internet Technology 22, no. 3 (August 31, 2022): 1–21. http://dx.doi.org/10.1145/3426971.

Abstract:
Self-attention mechanisms have recently been embraced for a broad range of text-matching applications. A self-attention model takes only one sentence as input with no extra information, i.e., one can utilize the final hidden state or pooling. However, text-matching problems can be interpreted in either symmetrical or asymmetrical scopes. For instance, paraphrase detection is a symmetrical task, while textual entailment classification and question-answer matching are considered asymmetrical tasks. In this article, we leverage attractive properties of the self-attention mechanism and propose an attention-based network that incorporates three key components for inter-sequence attention: global pointwise features, preceding attentive features, and contextual features, while updating the rest of the components. Our model is evaluated on two benchmark datasets covering the tasks of textual entailment and question-answer matching. The proposed efficient Self-attention-driven Network for Text Matching outperforms the state of the art on the Stanford Natural Language Inference and WikiQA datasets with far fewer parameters.
10

Ng, Hu, Glenn Jun Weng Chia, Timothy Tzen Vun Yap, and Vik Tor Goh. "Modelling sentiments based on objectivity and subjectivity with self-attention mechanisms." F1000Research 10 (May 17, 2022): 1001. http://dx.doi.org/10.12688/f1000research.73131.2.

Abstract:
Background: The proliferation of digital commerce has allowed merchants to reach out to a wider customer base, prompting a study of customer reviews to gauge service and product quality through sentiment analysis. Sentiment analysis can be enhanced through subjectivity and objectivity classification with attention mechanisms. Methods: This research includes input corpora of contrasting levels of subjectivity and objectivity from different databases to perform sentiment analysis on user reviews, incorporating attention mechanisms at the aspect level. Three large corpora are chosen as the subjectivity and objectivity datasets: the Shopee user review dataset (ShopeeRD) for subjectivity, together with the Wikipedia English dataset (Wiki-en) and Internet Movie Database (IMDb) for objectivity. Word embeddings are created using Word2Vec with Skip-Gram. Then, a bidirectional LSTM with an attention layer (LSTM-ATT) is imposed on the word vectors. The performance of the model is evaluated and benchmarked against the classification models Logistic Regression (LR) and Linear SVC (L-SVC). Three models are trained with the subjectivity (70% of ShopeeRD) and objectivity (Wiki-en) embeddings, with ten-fold cross-validation. Next, the three models are evaluated against two datasets (IMDb and 20% of ShopeeRD). The experiments are based on benchmark comparisons, embedding comparison and model comparison with 70-10-20 train-validation-test splits. Data augmentation using AUG-BERT is performed, and selected models incorporating AUG-BERT are compared. Results: L-SVC scored the highest accuracy with 56.9% for objective embeddings (Wiki-en) while the LSTM-ATT scored 69.0% on subjective embeddings (ShopeeRD). Improved performances were observed with data augmentation using AUG-BERT, where the LSTM-ATT+AUG-BERT model scored the highest accuracy at 60.0% for objective embeddings and 70.0% for subjective embeddings, compared to 57% (objective) and 69% (subjective) for L-SVC+AUG-BERT, and 56% (objective) and 68% (subjective) for L-SVC. Conclusions: Utilizing attention layers with subjectivity and objectivity notions has been shown to improve the accuracy of sentiment analysis models.
11

Lin, Hung-Hsiang, Jiun-Da Lin, Jose Jaena Mari Ople, Jun-Cheng Chen, and Kai-Lung Hua. "Social Media Popularity Prediction Based on Multi-Modal Self-Attention Mechanisms." IEEE Access 10 (2022): 4448–55. http://dx.doi.org/10.1109/access.2021.3136552.

12

Ng, Hu, Glenn Jun Weng Chia, Timothy Tzen Vun Yap, and Vik Tor Goh. "Modelling sentiments based on objectivity and subjectivity with self-attention mechanisms." F1000Research 10 (October 4, 2021): 1001. http://dx.doi.org/10.12688/f1000research.73131.1.

Abstract:
Background: The proliferation of digital commerce has allowed merchants to reach out to a wider customer base, prompting a study of customer reviews to gauge service and product quality through sentiment analysis. Sentiment analysis can be enhanced through subjectivity and objectivity classification with attention mechanisms. Methods: This research includes input corpora of contrasting levels of subjectivity and objectivity from different databases to perform sentiment analysis on user reviews, incorporating attention mechanisms at the aspect level. Three large corpora are chosen as the subjectivity and objectivity datasets: the Shopee user review dataset (ShopeeRD) for subjectivity, together with the Wikipedia English dataset (Wiki-en) and Internet Movie Database (IMDb) for objectivity. Word embeddings are created using Word2Vec with Skip-Gram. Then, a bidirectional LSTM with an attention layer (LSTM-ATT) is imposed on the word vectors. The performance of the model is evaluated and benchmarked against the classification models Logistic Regression (LR) and Linear SVC (L-SVC). Three models are trained with the subjectivity (70% of ShopeeRD) and objectivity (Wiki-en) embeddings, with ten-fold cross-validation. Next, the three models are evaluated against two datasets (IMDb and 20% of ShopeeRD). The experiments are based on benchmark comparisons, embedding comparison and model comparison with 70-10-20 train-validation-test splits. Data augmentation using AUG-BERT is performed, and selected models incorporating AUG-BERT are compared. Results: L-SVC scored the highest accuracy with 56.9% for objective embeddings (Wiki-en) while the LSTM-ATT scored 69.0% on subjective embeddings (ShopeeRD). Improved performances were observed with data augmentation using AUG-BERT, where the LSTM-ATT+AUG-BERT model scored the highest accuracy at 60.0% for objective embeddings and 70.0% for subjective embeddings, compared to 57% (objective) and 69% (subjective) for L-SVC+AUG-BERT, and 56% (objective) and 68% (subjective) for L-SVC. Conclusions: Utilizing attention layers with subjectivity and objectivity notions has been shown to improve the accuracy of sentiment analysis models.
13

Baer, Ruth A. "Self-Focused Attention and Mechanisms of Change in Mindfulness-Based Treatment." Cognitive Behaviour Therapy 38, sup1 (January 2009): 15–20. http://dx.doi.org/10.1080/16506070902980703.

14

Lo, Ronda F., Andy H. Ng, Adam S. Cohen, and Joni Y. Sasaki. "Does self-construal shape automatic social attention?" PLOS ONE 16, no. 2 (February 10, 2021): e0246577. http://dx.doi.org/10.1371/journal.pone.0246577.

Abstract:
We examined whether activating independent or interdependent self-construal modulates attention shifting in response to group gaze cues. European Canadians (Study 1) and East Asian Canadians (Study 2) primed with independence vs. interdependence completed a multi-gaze cueing task with a central face gazing left or right, flanked by multiple background faces that either matched or mismatched the direction of the foreground gaze. Results showed that European Canadians (Study 1) mostly ignored background gaze cues and were uninfluenced by the self-construal primes. However, East Asian Canadians (Study 2), who have cultural backgrounds relevant to both independence and interdependence, showed different attention patterns by prime: those primed with interdependence were more distracted by mismatched (vs. matched) background gaze cues, whereas there was no change for those primed with independence. These findings suggest activating an interdependent self-construal modulates social attention mechanisms to attend broadly, but only for those who may find these representations meaningful.
15

Kaiser, Roselinde H., Hannah R. Snyder, Franziska Goer, Rachel Clegg, Manon Ironside, and Diego A. Pizzagalli. "Attention Bias in Rumination and Depression: Cognitive Mechanisms and Brain Networks." Clinical Psychological Science 6, no. 6 (September 21, 2018): 765–82. http://dx.doi.org/10.1177/2167702618797935.

Abstract:
Depressed individuals exhibit biased attention to negative emotional information. However, much remains unknown about (a) the neurocognitive mechanisms of attention bias (e.g., qualities of negative information that evoke attention bias or functional brain network dynamics that may reflect a propensity for biased attention) and (b) distinctions in the types of attention bias related to different dimensions of depression (e.g., ruminative depression). Here, in 50 women, clinical depression was associated with facilitated processing of negative information only when such information was self-descriptive and task-relevant. However, among depressed individuals, trait rumination was associated with biases toward negative self-descriptive information regardless of task goals, especially when negative self-descriptive material was paired with self-referential images that should be ignored. Attention biases in ruminative depression were mediated by dynamic variability in frontoinsular resting-state functional connectivity. These findings highlight potential cognitive and functional network mechanisms of attention bias specifically related to the ruminative dimension of depression.
16

Chen, Shouyan, Mingyan Zhang, Xiaofen Yang, Zhijia Zhao, Tao Zou, and Xinqi Sun. "The Impact of Attention Mechanisms on Speech Emotion Recognition." Sensors 21, no. 22 (November 12, 2021): 7530. http://dx.doi.org/10.3390/s21227530.

Abstract:
Speech emotion recognition (SER) plays an important role in real-time applications of human-machine interaction. The attention mechanism is widely used to improve the performance of SER; however, the rules for applying it are not deeply discussed. This paper discusses the difference between Global-Attention and Self-Attention and explores the rules for applying them to the construction of SER classifiers. The experimental results show that Global-Attention can improve the accuracy of a sequential model, while Self-Attention can improve the accuracy of a parallel model, when the models are built with a CNN and an LSTM. With this knowledge, a classifier (CNN-LSTM×2+Global-Attention model) for SER is proposed. The experimental results show that it achieves an accuracy of 85.427% on the EMO-DB dataset.
17

Springer, Anne, Juliane Beyer, Jan Derrfuss, Kirsten G. Volz, and Bettina Hannover. "Seeing You or the Scene? Self-Construals Modulate Inhibitory Mechanisms of Attention." Social Cognition 30, no. 2 (April 2012): 133–52. http://dx.doi.org/10.1521/soco.2012.30.2.133.

18

Sun, Yange, Meng Li, Huaping Guo, and Li Zhang. "MSGSA: Multi-Scale Guided Self-Attention Network for Crowd Counting." Electronics 12, no. 12 (June 11, 2023): 2631. http://dx.doi.org/10.3390/electronics12122631.

Abstract:
The use of convolutional neural networks (CNN) for crowd counting has made significant progress in recent years; however, effectively addressing scale variation and complex backgrounds remains a challenging task. To address these challenges, we propose a novel Multi-Scale Guided Self-Attention (MSGSA) network that utilizes self-attention mechanisms to capture multi-scale contextual information for crowd counting. The MSGSA network consists of three key modules: a Feature Pyramid Module (FPM), a Scale Self-Attention Module (SSAM), and a Scale-aware Feature Fusion (SFA). By integrating self-attention mechanisms at multiple scales, our proposed method captures both global and local contextual information, leading to an improvement in the accuracy of crowd counting. We conducted extensive experiments on multiple benchmark datasets, and the results demonstrate that our method outperforms most existing methods in terms of counting accuracy and the quality of the generated density map. Our proposed MSGSA network provides a promising direction for efficient and accurate crowd counting in complex backgrounds.
19

Wang, Mei, Yu Yao, Hongbin Qiu, and Xiyu Song. "Adaptive Memory-Controlled Self-Attention for Polyphonic Sound Event Detection." Symmetry 14, no. 2 (February 12, 2022): 366. http://dx.doi.org/10.3390/sym14020366.

Abstract:
Polyphonic sound event detection (SED) is the task of detecting the time stamps and the class of sound event that occurred during a recording. Real life sound events overlap in recordings, and their durations vary dramatically, making them even harder to recognize. In this paper, we propose Convolutional Recurrent Neural Networks (CRNNs) to extract hidden state feature representations; then, a self-attention mechanism using a symmetric score function is introduced to memorize long-range dependencies of features that the CRNNs extract. Furthermore, we propose to use memory-controlled self-attention to explicitly compute the relations between time steps in audio representation embedding. Then, we propose a strategy for adaptive memory-controlled self-attention mechanisms. Moreover, we applied semi-supervised learning, namely, mean teacher–student methods, to exploit unlabeled audio data. The proposed methods all performed well in the Detection and Classification of Acoustic Scenes and Events (DCASE) 2017 Sound Event Detection in Real Life Audio (task3) test and the DCASE 2021 Sound Event Detection and Separation in Domestic Environments (task4) test. In DCASE 2017 task3, our model surpassed the challenge’s winning system’s F1-score by 6.8%. We show that the proposed adaptive memory-controlled model reached the same performance level as a fixed attention width model. Experimental results indicate that the proposed attention mechanism is able to improve sound event detection. In DCASE 2021 task4, we investigated various pooling strategies in two scenarios. In addition, we found that in weakly labeled semi-supervised sound event detection, building an attention layer on top of the CRNN is needless repetition. This conclusion could be applied to other multi-instance learning problems.
20

Zhou, Qian, Hua Zou, and Huanhuan Wu. "LGViT: A Local and Global Vision Transformer with Dynamic Contextual Position Bias Using Overlapping Windows." Applied Sciences 13, no. 3 (February 3, 2023): 1993. http://dx.doi.org/10.3390/app13031993.

Abstract:
Vision Transformers (ViTs) have shown their superiority in various visual tasks owing to the capability of self-attention mechanisms to model long-range dependencies. Some recent works try to reduce the high cost of vision transformers by limiting the self-attention module to a local window. As a price, the adopted window-based self-attention also reduces the ability to capture long-range dependencies compared with the original self-attention in transformers. In this paper, we propose a Local and Global Vision Transformer (LGViT) that incorporates overlapping windows and multi-scale dilated pooling to strengthen self-attention both locally and globally. Our proposed self-attention mechanism is composed of a local self-attention module (LSA) and a global self-attention module (GSA), which are performed on overlapping windows partitioned from the input image. In LSA, the key and value sets are expanded by the surroundings of the windows to increase the receptive field. In GSA, the key and value sets are expanded by multi-scale dilated pooling to promote global interactions. Moreover, a dynamic contextual positional encoding module is exploited to add positional information more efficiently and flexibly. We conduct extensive experiments on various visual tasks, and the experimental results strongly demonstrate that our proposed LGViT outperforms state-of-the-art approaches.
21

Posner, Michael I., Mary K. Rothbart, Brad E. Sheese, and Pascale Voelker. "Developing Attention: Behavioral and Brain Mechanisms." Advances in Neuroscience 2014 (May 8, 2014): 1–9. http://dx.doi.org/10.1155/2014/405094.

Abstract:
Brain networks underlying attention are present even during infancy and are critical for the developing ability of children to control their emotions and thoughts. For adults, individual differences in the efficiency of attentional networks have been related to neuromodulators and to genetic variations. We have examined the development of attentional networks and child temperament in a longitudinal study from infancy (7 months) to middle childhood (7 years). Early temperamental differences among infants, including smiling and laughter and vocal reactivity, are related to self-regulation abilities at 7 years. However, genetic variations related to adult executive attention, while present in childhood, are poor predictors of later control, in part because individual genetic variation may have many small effects and in part because their influence occurs in interaction with caregiver behavior and other environmental influences. While brain areas involved in attention are present during infancy, their connectivity changes and leads to improvement in control of behavior. It is also possible to influence control mechanisms through training later in life. The relation between maturation and learning may allow advances in our understanding of human brain development.
22

Lin, Zhicheng, and Shihui Han. "Self-construal priming modulates the scope of visual attention." Quarterly Journal of Experimental Psychology 62, no. 4 (April 2009): 802–13. http://dx.doi.org/10.1080/17470210802271650.

Abstract:
Although it is well documented that cultures influence basic cognitive processes such as attention, the underlying mechanisms remain unclear. We tested the hypothesis that self-concepts that characterize people from different cultures mediate the variation of visual attention. After being primed with self-construals that emphasize the Eastern interdependent self or the Western independent self, Chinese participants were asked to discriminate a central target letter flanked by compatible or incompatible stimuli (Experiment 1) or global/local letters in a compound stimulus (Experiment 2). Experiment 1 showed that, while responses were slower to the incompatible than to the compatible stimuli, this flanker compatibility effect was increased by the interdependent relative to the independent self-construal priming. Experiment 2 showed that the interdependent-self priming resulted in faster responses to the global than to the local targets in compound letters whereas a reverse pattern was observed in the independent-self priming condition. The results provide evidence for dynamics of the scope of visual attention as a function of self-construal priming that switches self-concept toward the interdependent or independent styles in Chinese.
23

Han, Suk Won, and Cheol Hwan Kim. "Neurocognitive Mechanisms Underlying Internet/Smartphone Addiction: A Preliminary fMRI Study." Tomography 8, no. 4 (July 11, 2022): 1781–90. http://dx.doi.org/10.3390/tomography8040150.

Abstract:
The present study investigated the neurocognitive mechanisms underlying smartphone/internet addiction. We tested a specific hypothesis that the excessive, uncontrolled use of smartphones should be related to the ability of controlling attention in a purely endogenous and self-regulatory manner. In an fMRI experiment, in which 43 adults participated, we had participants detect and identify specified target stimuli among non-targets. In some trials, 10 s oddball movies were presented as distractors. While the participants try to filter out the distractors and focus their attention on the main task, the activation profiles of the frontoparietal brain regions were examined. The results showed that the people with a higher risk of being addicted to smartphone use failed to filter out distractors via the endogenous control of attention. The neuroimaging data showed that the high-risk group showed significantly lower levels of activation in the frontopolar cortex (FPC). We conclude that people at a high risk of smartphone addiction have difficulty endogenously shifting their attention from distracting stimuli toward goal-directed behavior, and FPC plays a critical role in this self-regulatory control of attention.
24

Nagai, Yukie, Koh Hosoda, Akio Morita, and Minoru Asada. "Emergence of Joint Attention through Bootstrap Learning based on the Mechanisms of Visual Attention and Learning with Self-evaluation." Transactions of the Japanese Society for Artificial Intelligence 19 (2004): 10–19. http://dx.doi.org/10.1527/tjsai.19.10.

25

Ren, Xudie, Jialve Wang, and Shenghong Li. "MAM: Multiple Attention Mechanism Neural Networks for Cross-Age Face Recognition." Wireless Communications and Mobile Computing 2022 (April 30, 2022): 1–11. http://dx.doi.org/10.1155/2022/8546029.

Abstract:
The cross-age face recognition problem is very challenging in practical applications because face features of the same person at different ages contain variant aging features in addition to the invariant identity features. To better extract the age-invariant identity features hiding beneath the age-variant aging features, a deep learning-based approach with multiple attention mechanisms is proposed in this paper. First, we propose a stepped local pooling strategy to improve the SE module. Then, by incorporating the residual-attention mechanism, the self-attention mechanism, and the improved channel-attention mechanism into the backbone network, we propose the Multiple Attention Mechanism Network (MAM-CNN) framework for the cross-age face recognition problem. The proposed framework can focus on essential face regions to highlight identity features and diminish the distractions caused by aging features. Experiments are carried out on two well-known public-domain face aging datasets (MORPH and CACD-VS). The results show that the introduced mechanisms jointly enhance the model performance by 0.96% and 0.52%, respectively, over the state-of-the-art algorithms.
26

Kuo, Yu-Chen, Ching-Bang Yao, and Chen-Yu Wu. "A Strategy for Enhancing English Learning Achievement, Based on the Eye-Tracking Technology with Self-Regulated Learning." Sustainability 14, no. 23 (December 6, 2022): 16286. http://dx.doi.org/10.3390/su142316286.

Abstract:
Owing to the global promotion of e-learning, combining recognition technology to facilitate learning has become a popular research topic. This study uses eye-tracking to analyze students’ actual learning situations by examining their attention during the learning process and to provide timely support to enhance their learning performance. Using cognitive technology, this study can analyze students’ real-time learning status, which can be utilized to provide timely learning reminders that help them achieve their self-defined learning goals and to effectively enhance their interest and performance. Accordingly, we designed a self-regulated learning (SRL) mechanism, based on eye-tracking technology, combined with online marking and note-taking functions. The mechanism can aid students in maintaining a better reading state, thereby enhancing their learning performance. This study explores students’ learning outcomes, motivation, self-efficacy, learning anxiety, and performance. The experimental results show that students who used the SRL mechanism exhibited a greater learning performance than those who did not use it. Similarly, SRL mechanisms could potentially improve students’ learning motivation and self-efficacy, as well as increase their learning attention. Moreover, SRL mechanisms reduce students’ perplexities and learning anxieties, thereby enhancing their reading-learning performance to achieve an educational sustainability by providing a better e-learning environment.
27

Zhu, Yuhua, Hang Li, Tong Zhen, and Zhihui Li. "Integrating Self-Attention Mechanisms and ResNet for Grain Storage Ventilation Decision Making: A Study." Applied Sciences 13, no. 13 (June 28, 2023): 7655. http://dx.doi.org/10.3390/app13137655.

Abstract:
Food security is a widely discussed topic globally. The key to ensuring the safety of food storage is to control temperature and humidity, with ventilation being an effective and fast method for temperature and humidity control. This paper proposes a new approach called “grain condition multimodal” based on the theory of computer multimodality. Under changing external environments, grain conditions can be classified according to different ventilation modes, including cooling ventilation, dehumidification ventilation, anti-condensation ventilation, heat dissipation ventilation, and quality adjustment ventilation. Studying intelligent ventilation decisions helps achieve grain temperature balance, prevent moisture condensation, control grain heating, reduce grain moisture, and create a low-temperature environment to improve grain storage performance. Combining deep learning models with data such as grain stack temperature and humidity can significantly improve the accuracy of ventilation decisions. This paper proposes a neural network model based on residual networks and self-attention mechanisms that performs better than basic models such as LSTM (Long Short-Term Memory), CNN (Convolutional Neural Network), GRU (Gated Recurrent Unit), and ResNet (Residual Network). The model’s accuracy, precision, recall, and F1 scores are 94.38%, 94.92%, 98.94%, and 96.89%, respectively.
28

Diao, Zhifeng, and Fanglei Sun. "Visual Object Tracking Based on Deep Neural Network." Mathematical Problems in Engineering 2022 (July 12, 2022): 1–9. http://dx.doi.org/10.1155/2022/2154463.

Abstract:
Computer vision systems cannot function without visual target tracking. Intelligent video monitoring, medical treatment, human-computer interaction, and traffic management all stand to benefit greatly from this technology. Although many new algorithms and methods emerge every year, the reality is complex: targets are often disturbed by factors such as occlusion, illumination changes, deformation, and rapid motion, and solving these problems has become the main task of visual target tracking researchers. With the development of deep neural networks and attention mechanisms, object-tracking methods based on deep learning show great research potential. This paper analyzes the abovementioned difficult factors, uses a deep learning-based tracking framework, and combines it with an attention mechanism to model the target accurately, aiming to improve the tracking algorithm. In this work, a twin-network tracking strategy with dual self-attention is designed. The dual self-attention mechanism is used to enhance the feature representation of the target from the standpoints of space and channel, with the goal of addressing target deformation and other problems. In addition, adaptive weights and residual connections are used to enable adaptive attention feature selection. A Siamese tracking network is used in conjunction with the proposed dual self-attention technique. Extensive experimental results show that our proposed method improves tracking performance and that the tracking strategy achieves an excellent tracking effect.
29

Gao, Yue, Di Li, Xiangjian Chen, and Junwu Zhu. "Attention-Based Mechanisms for Cognitive Reinforcement Learning." Applied Sciences 13, no. 13 (June 21, 2023): 7361. http://dx.doi.org/10.3390/app13137361.

Abstract:
In this paper, we propose a cognitive reinforcement learning method based on an attention mechanism (CRL-CBAM) to address the problems of complex interactive communication, limited range, and time-varying communication topology in multi-intelligence collaborative work. The method not only combines the efficient decision-making capability of reinforcement learning, the representational capability of deep learning, and the self-learning capability of cognitive learning but also inserts a convolutional block attention module to increase the representational capability by using the attention mechanism to focus on important features and suppress unnecessary ones. The use of two modules, channel and spatial axis, to emphasize meaningful features in the two main dimensions can effectively aid the flow of information in the network. Results from simulation experiments show that the method has more rewards and is more efficient than other methods in formation control, which means a greater advantage when dealing with scenarios with a large number of agents. In group containment, the agents learn to sacrifice individual rewards to maximize group rewards. All tasks are successfully completed, even if the simulation scenario changes from the training scenario. The method can therefore be applied to new environments with effectiveness and robustness.
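For readers unfamiliar with the convolutional block attention module mentioned above, here is a simplified NumPy illustration of the channel-then-spatial attention idea; the pooling choices, the tiny shared MLP, and the scalar k standing in for CBAM's convolution over the pooled maps are deliberate simplifications, not the authors' configuration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(F, W1, W2):
    # F: (C, H, W) feature map; W1: (C, C//r), W2: (C//r, C) shared MLP weights
    avg = F.mean(axis=(1, 2))                      # (C,) average-pooled descriptor
    mx = F.max(axis=(1, 2))                        # (C,) max-pooled descriptor
    mlp = lambda v: np.maximum(v @ W1, 0.0) @ W2   # shared two-layer MLP with ReLU
    w = sigmoid(mlp(avg) + mlp(mx))                # (C,) channel weights
    return F * w[:, None, None]

def spatial_attention(F, k=1.0):
    # k: scalar standing in for a convolution over the pooled maps (simplification)
    avg = F.mean(axis=0)                           # (H, W) channel-averaged map
    mx = F.max(axis=0)                             # (H, W) channel-max map
    w = sigmoid(k * (avg + mx))                    # (H, W) spatial weights
    return F * w[None, :, :]

rng = np.random.default_rng(4)
F = rng.normal(size=(8, 16, 16))
W1, W2 = rng.normal(size=(8, 2)), rng.normal(size=(2, 8))
print(spatial_attention(channel_attention(F, W1, W2)).shape)   # (8, 16, 16)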
30

Wang, Wanru, Yuwei Lv, Yonggang Wen, and Xuemei Sun. "Rumor Detection Based on Knowledge Enhancement and Graph Attention Network." Discrete Dynamics in Nature and Society 2022 (October 6, 2022): 1–12. http://dx.doi.org/10.1155/2022/6257658.

Abstract:
Presently, most existing rumor detection methods focus on learning and integrating various features for detection, but due to the complexity of language, these models rarely consider the relationships between parts of speech. For the first time, this paper integrates knowledge graphs and graph attention networks to solve this problem through attention mechanisms. A knowledge graph can be the most effective and intuitive expression of relationships between entities, providing problem analysis from the perspective of “relationships”. This paper uses knowledge graphs to enhance topics and learns text features using self-attention. Furthermore, this paper defines a common dependency tree structure and then reshapes ordinary dependency trees to generate motif-dependent trees. A graph attention network is adopted to collect the feature representations derived from the corresponding syntax-dependent tree production. The attention mechanism is a weight-allocation mechanism that helps the model capture important information. Rumors are then detected by using the attention mechanism to combine the text representations learned from self-attention and the graph representations learned from the graph attention network. Finally, numerous experiments were performed on the standard Twitter dataset, and the proposed model achieved a 7.7% improvement in accuracy compared with the benchmark model.
31

Niu, Jinxing, Shuo Liu, Hanbing Li, Tao Zhang, and Lijun Wang. "Grasp Detection Combining Self-Attention with CNN in Complex Scenes." Applied Sciences 13, no. 17 (August 25, 2023): 9655. http://dx.doi.org/10.3390/app13179655.

Abstract:
In this paper, we present a novel approach that subtly combines the transformer with grasping CNN to achieve more optimal grasps in complex real-life situations. The approach comprises two unique designs that effectively improve grasp precision in complex scenes. The first essential design uses self-attention mechanisms to capture contextual information from RGB images, boosting contrast between key object features and their surroundings. We precisely adjust internal parameters to balance accuracy and computing costs. The second crucial design involves building a feature fusion bridge that processes all one-dimensional sequence features at once to create an intuitive visual perception for the detection stage, ensuring a seamless combination of the transformer block and CNN. These designs eliminate noise features in complex backgrounds and emphasize graspable object features, providing valuable semantic data to the subsequent grasping CNN to achieve appropriate grasping. We evaluated the approach on the Cornell and VMRD datasets. According to the experimental results, our method achieves better performance than the original grasping CNN in single-object and multi-object scenarios, exhibiting 97.7% and 72.2% accuracy on the Cornell and VMRD grasp datasets using RGB, respectively.
32

Ma, Suling. "A Study of Two-Way Short- and Long-Term Memory Network Intelligent Computing IoT Model-Assisted Home Education Attention Mechanism." Computational Intelligence and Neuroscience 2021 (December 21, 2021): 1–11. http://dx.doi.org/10.1155/2021/3587884.

Abstract:
This paper analyzes and collates research on the traditional homeschooling attention mechanism and on the homeschooling attention mechanism based on a two-way short- and long-term memory network intelligent computing IoT model, and finds the latter model superior. The two-way short- and long-term memory network intelligent computing IoT model is improved, an improved deep neural network intelligent computing IoT is proposed, and the improved method is verified with discrete-signal homeschooling classification experiments, followed by application research on the two-way short- and long-term memory network intelligent computing IoT model-assisted homeschooling attention mechanism. Based on neural network learning and a human behavior recognition method combining spatiotemporal networks, a homeschooling method integrating bidirectional short- and long-term memory networks and attention mechanisms is designed. The visual attention mechanism is used to add weight information to the deep visual features extracted by the convolutional neural network, and a new feature sequence incorporating salient attention weights is output. This feature sequence is then decoded using an IndRNN (independent recurrent neural network) to classify and decide on the homeschooling category. Experiments on the UCF101 dataset demonstrate that incorporating the attention mechanism can improve the classification ability of the network. The attention mechanism can help the intelligent computing IoT model discover key features, and the self-attention mechanism can effectively capture the internal features of homeschooling and optimize the feature vector. We propose a strategy of combining the self-attention mechanism with a bidirectional short- and long-term memory network to solve the family education classification problem and experimentally verify that the intelligent computing IoT model combined with the self-attention mechanism can more easily capture the interdependent features in family education, effectively solving the family education problem and further improving the family education classification accuracy.
33

Kardakis, Spyridon, Isidoros Perikos, Foteini Grivokostopoulou, and Ioannis Hatzilygeroudis. "Examining Attention Mechanisms in Deep Learning Models for Sentiment Analysis." Applied Sciences 11, no. 9 (April 25, 2021): 3883. http://dx.doi.org/10.3390/app11093883.

Abstract:
Attention-based methods for deep neural networks constitute a technique that has attracted increased interest in recent years. Attention mechanisms can focus on important parts of a sequence and, as a result, enhance the performance of neural networks in a variety of tasks, including sentiment analysis, emotion recognition, machine translation and speech recognition. In this work, we study attention-based models built on recurrent neural networks (RNNs) and examine their performance in various contexts of sentiment analysis. Self-attention, global-attention and hierarchical-attention methods are examined under various deep neural models, training methods and hyperparameters. Even though attention mechanisms are a powerful recent concept in the field of deep learning, their exact effectiveness in sentiment analysis is yet to be thoroughly assessed. A comparative analysis is performed in a text sentiment classification task where baseline models are compared with and without the use of attention for every experiment. The experimental study additionally examines the proposed models’ ability in recognizing opinions and emotions in movie reviews. The results indicate that attention-based models lead to great improvements in the performance of deep neural models showcasing up to a 3.5% improvement in their accuracy.
34

Hendricks, Lisa Anne, John Mellor, Rosalia Schneider, Jean-Baptiste Alayrac, and Aida Nematzadeh. "Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers." Transactions of the Association for Computational Linguistics 9 (2021): 570–85. http://dx.doi.org/10.1162/tacl_a_00385.

Abstract:
Recently, multimodal transformer models have gained popularity because their performance on downstream tasks suggests they learn rich visual-linguistic representations. Focusing on zero-shot image retrieval tasks, we study three important factors that can impact the quality of learned representations: pretraining data, the attention mechanism, and loss functions. By pretraining models on six datasets, we observe that dataset noise and language similarity to our downstream task are important indicators of model performance. Through architectural analysis, we learn that models with a multimodal attention mechanism can outperform deeper models with modality-specific attention mechanisms. Finally, we show that successful contrastive losses used in the self-supervised learning literature do not yield similar performance gains when used in multimodal transformers.
35

Zhang, Shugang, Mingjian Jiang, Shuang Wang, Xiaofeng Wang, Zhiqiang Wei, and Zhen Li. "SAG-DTA: Prediction of Drug–Target Affinity Using Self-Attention Graph Network." International Journal of Molecular Sciences 22, no. 16 (August 20, 2021): 8993. http://dx.doi.org/10.3390/ijms22168993.

Abstract:
The prediction of drug–target affinity (DTA) is a crucial step for drug screening and discovery. In this study, a new graph-based prediction model named SAG-DTA (self-attention graph drug–target affinity) was implemented. Unlike previous graph-based methods, the proposed model utilized self-attention mechanisms on the drug molecular graph to obtain effective representations of drugs for DTA prediction. Features of each atom node in the molecular graph were weighted using an attention score before being aggregated into a molecule representation. Various self-attention scoring methods were compared in this study. In addition, two pooling architectures, namely, global and hierarchical architectures, were presented and evaluated on benchmark datasets. Results of comparative experiments on both regression and binary classification tasks showed that SAG-DTA was superior to previous sequence-based or other graph-based methods and exhibited good generalization ability.
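A generic sketch of attention-based node scoring with top-k pooling on a molecular graph is given below; it is meant only to illustrate the family of techniques the abstract refers to (the tanh scoring function, the pooling ratio, and the simple one-hop aggregation are assumptions, not the SAG-DTA model itself).

import numpy as np

def self_attention_graph_pool(X, A, p, ratio=0.5):
    # X: (N, d) node features, A: (N, N) adjacency with self-loops,
    # p: (d,) learnable scoring vector, ratio: fraction of nodes to keep
    deg = A.sum(axis=1, keepdims=True)
    H = (A / deg) @ X                      # simple one-hop neighbourhood aggregation
    scores = np.tanh(H @ p)                # (N,) attention score per atom node
    k = max(1, int(ratio * X.shape[0]))
    keep = np.argsort(scores)[-k:]         # indices of the top-k scoring nodes
    # gate the kept node features by their scores, as in score-based pooling
    return X[keep] * scores[keep, None], A[np.ix_(keep, keep)]

rng = np.random.default_rng(5)
N, d = 10, 16
X = rng.normal(size=(N, d))
A = (rng.random((N, N)) > 0.7).astype(float)
A = np.maximum(A, A.T) + np.eye(N)         # symmetric adjacency with self-loops
p = rng.normal(size=d)
X_pooled, A_pooled = self_attention_graph_pool(X, A, p)
print(X_pooled.shape, A_pooled.shape)      # (5, 16) (5, 5)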
36

Reimann, Jan Niclas, Andreas Schwung, and Steven X. Ding. "Adopting attention-mechanisms for Neural Logic Rule Layers." at - Automatisierungstechnik 70, no. 3 (March 1, 2022): 257–66. http://dx.doi.org/10.1515/auto-2021-0136.

Abstract:
In previous works we discovered that rule-based systems severely suffer in performance when increasing the number of rules. In order to increase the number of possible boolean relations while keeping the number of rules fixed, we employ ideas from the well-known Spatial Transformer Systems and Self-Attention Networks: here, our learned rules are not static but are dynamically adjusted to fit the input data by training a separate rule-prediction system, which predicts the parameter matrices used in Neural Logic Rule Layers. We show that these networks, termed Adaptive Neural Logic Rule Layers, outperform their static counterpart in terms of final performance as well as training stability and excitability during early stages of training.
37

Schäfer, Sarah, Dirk Wentura, and Christian Frings. "Creating a network of importance: The particular effects of self-relevance on stimulus processing." Attention, Perception, & Psychophysics 82, no. 7 (June 17, 2020): 3750–66. http://dx.doi.org/10.3758/s13414-020-02070-7.

Abstract:
Several factors guide our attention and the way we process our surroundings. In that regard, there is an ongoing debate about the way we are influenced by stimuli that have a particular self-relevance for us. Recent findings suggest that self-relevance does not always capture our attention automatically. Instead, an interpretation of the literature might be that self-relevance serves as an associative advantage facilitating the integration of relevant stimuli into the self-concept. We compared the effect of self-relevant stimuli with the effect of negative stimuli in three tasks measuring different aspects of cognitive processing. We found a first dissociation suggesting that negative valence attracts attention while self-relevance does not, a second dissociation suggesting that self-relevance influences stimulus processing beyond attention-grabbing mechanisms and in the form of an “associative glue,” while negative valence does not, and, last but not least, a third dissociation suggesting that self-relevance influences stimulus processing at a later stage than negative valence does.
38

Zhou, Wei, Zhongwei Qu, Lianen Qu, Xupeng Wang, Yilin Shi, De Zhang, and Zhenlin Hui. "Radar Echo Maps Prediction Using an Improved MIM with Self-Attention Memory Module." Journal of Sensors 2023 (July 18, 2023): 1–12. http://dx.doi.org/10.1155/2023/8876971.

Abstract:
A radar echo sequence which includes N frames plays a crucial role in monitoring precipitation clouds and serves as the foundation for accurate precipitation forecasting. To predict future frames, spatiotemporal models are used to leverage historical radar echo sequences. The spatiotemporal information combining both temporal and spatial information is derived from radar echo sequence. The spatiotemporal information reveals the changing trend of intensity in the echo region over time. Dynamic variation information extracted in radar echo maps mainly consists of nonstationary information and spatiotemporal information. However, the changing trends at different locations within the precipitation cloud are different, so the significance of the spatiotemporal information should be different. The current precipitation forecasting model, Memory In Memory (MIM), has the capability to store the nonstationary information derived from radar echo maps. However, the MIM falls short in discerning the significance of the spatiotemporal information extracted from these maps. To address this limitation, we propose a novel model, SAMM-MIM (self-attention memory module-MIM), which regulates the generation of hidden states and spatiotemporal memory states using a SAMM. The proposed SAMM uses the self-attention mechanism and a series of gate mechanisms to concentrate on significant spatiotemporal information, learn changing trends in the echo region, and output predictive information. The predictive information which is stored in hidden states contains predictions of the changing trends of dynamic variation information. Experimental evaluation on a dataset of radar data from Qingdao, China, demonstrates that SAMM-MIM achieves superior prediction performance compared with other spatiotemporal sequence models, as indicated by improved scores on mean squared error, critical success index, and missing alarm rate metrics.
APA, Harvard, Vancouver, ISO, and other styles
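The self-attention memory gating described in the abstract above can be pictured with a short PyTorch sketch. The module below is an illustrative approximation, not the paper's SAMM: the layer names, attention dimension, and gating scheme are assumptions made for clarity.

```python
import torch
import torch.nn as nn

class SelfAttentionMemory(nn.Module):
    """Minimal sketch of a self-attention memory gate (hypothetical SAMM-style block).

    Attends over the flattened spatial positions of the hidden state and the
    memory state, then fuses the two attended maps with a learned gate.
    """
    def __init__(self, channels: int, att_dim: int = 16):
        super().__init__()
        self.q = nn.Conv2d(channels, att_dim, kernel_size=1)
        self.k_h = nn.Conv2d(channels, att_dim, kernel_size=1)
        self.v_h = nn.Conv2d(channels, channels, kernel_size=1)
        self.k_m = nn.Conv2d(channels, att_dim, kernel_size=1)
        self.v_m = nn.Conv2d(channels, channels, kernel_size=1)
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def attend(self, q, k, v):
        b, c, hgt, wid = v.shape
        q = q.flatten(2).transpose(1, 2)            # (B, HW, att_dim)
        k = k.flatten(2)                            # (B, att_dim, HW)
        v = v.flatten(2).transpose(1, 2)            # (B, HW, C)
        att = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)
        return (att @ v).transpose(1, 2).reshape(b, c, hgt, wid)

    def forward(self, hidden, memory):
        q = self.q(hidden)
        z_h = self.attend(q, self.k_h(hidden), self.v_h(hidden))    # self-attention on hidden state
        z_m = self.attend(q, self.k_m(memory), self.v_m(memory))    # attention over memory state
        g = torch.sigmoid(self.gate(torch.cat([z_h, z_m], dim=1)))  # learned fusion gate
        return g * z_h + (1 - g) * z_m

h = torch.randn(2, 64, 16, 16)   # hidden state of a recurrent cell
m = torch.randn(2, 64, 16, 16)   # spatiotemporal memory state
print(SelfAttentionMemory(64)(h, m).shape)   # torch.Size([2, 64, 16, 16])
```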
39

Li, Yabei, Minjun Liang, Mingyang Wei, Ge Wang, and Yanan Li. "Mechanisms and Applications of Attention in Medical Image Segmentation: A Review." Academic Journal of Science and Technology 5, no. 3 (May 5, 2023): 237–43. http://dx.doi.org/10.54097/ajst.v5i3.8021.

Full text
Abstract:
The core task of deep learning-based medical image segmentation is to obtain good results quickly through low-cost auxiliary modules. The attention mechanism, which relies on interactions between features within the neural network, is one such lightweight scheme for focusing on key features; it is inspired by the selective filtering of information in human vision. Through investigation and analysis, this paper argues that common attention mechanisms can be classified into four main types according to their structure and form: (i) conventional attention based on feature interaction, (ii) multi-scale/multi-branch-based attention, (iii) self-similarity attention based on key-value pair queries, and (iv) hard attention. Medical images contain poorer and blurrier contextual information than natural images: they are usually formed by re-imaging the feedback intensity of a medium signal, most have low contrast and uneven appearance, and they contain noise and artifacts. Without the ability to focus on key descriptive information or features, even well-designed deep learning models struggle to perform as expected. This paper shows that attention mechanisms can guide downstream medical image analysis tasks to capture discernible expected features while filtering and suppressing irrelevant information to enhance the intensity of target features, so that network performance can be improved through continuous, highly accurate evolution of the feature space.
APA, Harvard, Vancouver, ISO, and other styles
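As a concrete example of the first category above (conventional attention based on feature interaction), the following minimal PyTorch sketch shows a squeeze-and-excitation style channel attention; the reduction ratio and names are illustrative assumptions, not taken from the review.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Minimal squeeze-and-excitation style channel attention: reweights feature
    channels by a gate computed from globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze: global average pool per channel
        return x * w[:, :, None, None]           # excite: reweight feature channels

feat = torch.randn(2, 32, 64, 64)                # e.g. a decoder feature map
print(ChannelAttention(32)(feat).shape)          # torch.Size([2, 32, 64, 64])
```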
40

Zhang, Rongkai, Ying Zeng, Li Tong, and Bin Yan. "Specific Neural Mechanisms of Self-Cognition and the Application of Brainprint Recognition." Biology 12, no. 3 (March 22, 2023): 486. http://dx.doi.org/10.3390/biology12030486.

Full text
Abstract:
Self-information, as an important identity attribute, shows unique cognitive processing advantages in psychological experiments and has become a research hotspot in psychology and brain science. The unique processing of self-related information, such as one's own name, face, and voice, has been widely verified in visual and auditory experiments. In behavioral studies, this uniqueness is reflected in the brain's faster response to self-information, higher attention to self-information, and stronger self-referential memory. Brain imaging studies likewise show the uniqueness of self-cognition in the brain: EEG studies show that self-information induces significant P300 components, and fMRI and PET results locate the differences between self and non-self processing in the frontal and parietal lobes. Finally, this paper combines self-uniqueness theory with brainprint recognition technology to explore the application of self-information in experimental design, channel combination strategies, and the selection of identity features for brainprints.
APA, Harvard, Vancouver, ISO, and other styles
41

Mörtberg, Ewa, Asle Hoffart, Benjamin Boecking, and David M. Clark. "Shifting the Focus of One's Attention Mediates Improvement in Cognitive Therapy for Social Anxiety Disorder." Behavioural and Cognitive Psychotherapy 43, no. 1 (August 28, 2013): 63–73. http://dx.doi.org/10.1017/s1352465813000738.

Full text
Abstract:
Background: Cognitive therapy is an effective treatment for social anxiety disorder but little is known about the mechanisms by which the treatment achieves its effects. Aims: This study investigated the potential role of self-focused attention and social phobia-related negative automatic thoughts as mediators of clinical improvement. Method: Twenty-nine patients with social phobia received individual cognitive therapy (ICT) in a randomized controlled trial. Weekly process and outcome measures were analysed using multilevel mediation models. Results: Change from self-focused to externally focused attention mediated improvements in social anxiety one week later. In contrast, change in the frequency of, or belief in, social phobia-related negative automatic thoughts did not predict social anxiety one week later. Conclusions: Change in self-focused attention mediates therapeutic improvement in ICT. Therapists should therefore target self-focused attention.
APA, Harvard, Vancouver, ISO, and other styles
42

Fichten, Catherine S., Harriet Lennox, Kristen Robillard, John Wright, Stéphane Sabourin, and Rhonda Amsel. "Attentional Focus and Attitudes Toward Peers with Disabilities: Self Focusing and A Comparison of Modeling and Self-Disclosure." Journal of Applied Rehabilitation Counseling 27, no. 4 (December 1, 1996): 30–39. http://dx.doi.org/10.1891/0047-2220.27.4.30.

Full text
Abstract:
This study tested aspects of the Attentional Mechanisms Model of Interaction Strain (AMMIS) by examining correlates of dispositionally self-focused attention (self-consciousness) and by comparing two filmed interventions: one modeled appropriate behaviors when encountering someone who is blind (symbolic modeling of skills), while the other featured a blind man going about everyday activities (self-disclosure). Results indicate that self-focused attention is related to negative outcomes and that both the modeling and the self-disclosure films had beneficial effects on thoughts, feelings, self-efficacy beliefs, and attitudes compared with no intervention. Although symbolic modeling was expected to produce more favorable outcomes, self-disclosure generally produced superior results. Implications for research, skills training, and attitude change are discussed.
APA, Harvard, Vancouver, ISO, and other styles
43

Zhijian, Lyu, Jiang Shaohua, and Tan Yonghao. "DSAGLSTM-DTA: Prediction of Drug-Target Affinity using Dual Self-Attention and LSTM." Machine Learning and Applications: An International Journal 9, no. 02 (June 30, 2022): 1–19. http://dx.doi.org/10.5121/mlaij.2022.9201.

Full text
Abstract:
Research on drug-target affinity (DTA) aims to effectively narrow the target search space for drug repurposing, so reasonable prediction of drug-target affinities can minimize the waste of resources such as labor and materials. In this work, a novel graph-based model called DSAGLSTM-DTA was proposed for DTA prediction. Unlike previous graph-based drug-target affinity models, the proposed model incorporates self-attention mechanisms in the feature extraction process of drug molecular graphs to fully extract effective feature representations. The features of each atom in the 2D molecular graph are weighted by attention scores before being aggregated into a molecule representation, and two distinct pooling architectures, centralized and distributed, were implemented and compared on benchmark datasets. In processing protein sequences, inspired by the protein feature extraction approach of GDGRU-DTA, we interpret protein sequences as time series and extract their features with Bidirectional Long Short-Term Memory (BiLSTM) networks, given the context dependence of long amino acid sequences. DSAGLSTM-DTA likewise uses a self-attention mechanism during protein feature extraction to obtain comprehensive protein representations: the final hidden state for each element in the batch is used to weight the per-step outputs of the LSTM, and the result serves as the final protein feature. Finally, the drug and protein representations are concatenated and fed into a prediction block. The proposed model was evaluated on regression and binary classification datasets, and the results demonstrate that DSAGLSTM-DTA is superior to several state-of-the-art DTA models and exhibits good generalization ability.
APA, Harvard, Vancouver, ISO, and other styles
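The attention-weighted aggregation of atom features into a molecule representation described above can be sketched roughly as follows. This is a minimal illustration of attention pooling in the spirit of the centralized variant; layer sizes and names are chosen for the example rather than taken from DSAGLSTM-DTA.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Sketch of self-attention pooling: per-atom embeddings are scored, the
    scores are normalized with softmax, and the weighted sum forms a single
    molecule vector."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, atom_feats, mask):          # atom_feats: (B, N, D), mask: (B, N)
        scores = self.score(atom_feats).squeeze(-1)
        scores = scores.masked_fill(~mask, float('-inf'))   # ignore padded atoms
        weights = torch.softmax(scores, dim=-1)              # attention weight per atom
        return torch.einsum('bn,bnd->bd', weights, atom_feats)

atoms = torch.randn(4, 50, 128)                  # 4 molecules, up to 50 atoms each
mask = torch.ones(4, 50, dtype=torch.bool)
print(AttentionPooling(128)(atoms, mask).shape)  # torch.Size([4, 128])
```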
44

Song, Jiang, Jianguo Qian, Zhengjun Liu, Yang Jiao, Jiahui Zhou, Yongrong Li, Yiming Chen, Jie Guo, and Zhiqiang Wang. "Research on Arc Sag Measurement Methods for Transmission Lines Based on Deep Learning and Photogrammetry Technology." Remote Sensing 15, no. 10 (May 11, 2023): 2533. http://dx.doi.org/10.3390/rs15102533.

Full text
Abstract:
Arc sag is an important parameter in the design, operation, and maintenance of transmission lines and is directly related to the safety and reliability of grid operation. Current arc sag measurement methods are inefficient and costly, making it difficult to meet the engineering demand for fast inspection of transmission lines. In view of this, this paper proposes CM-Mask-RCNN, an automatic spacer bar segmentation algorithm that combines the CAB attention mechanism and the MHSA self-attention mechanism to automatically extract spacer bars and calculate their center coordinates. Combined with classical algorithms such as beam method leveling, spatial front rendezvous, and spatial curve fitting, and based on UAV inspection video data, it realizes low-cost, high-efficiency arc sag measurement. It is experimentally verified that the proposed CM-Mask-RCNN algorithm achieves an AP of 73.40% on a self-built dataset, outperforming the Yolact++, U-Net, and Mask-RCNN algorithms. It is also verified that fusing the CAB and MHSA attention mechanisms effectively improves the segmentation performance of the model, and this combination improves performance more than other attention mechanisms, with an AP gain of 2.24%. The algorithm was used to perform arc sag measurement experiments on 10 different transmission lines; the measurement errors are all within ±2.5%, with an average error of −0.11, which verifies the effectiveness of the proposed arc sag measurement method for transmission lines.
APA, Harvard, Vancouver, ISO, and other styles
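A rough idea of how a multi-head self-attention (MHSA) block can be attached to a CNN feature map, as in the segmentation backbone described above, is shown in the following PyTorch sketch. The channel count, head count, and residual/normalization arrangement are assumptions for illustration, not the CM-Mask-RCNN configuration.

```python
import torch
import torch.nn as nn

class MHSAOnFeatureMap(nn.Module):
    """Flattens a feature map into a token sequence, applies multi-head
    self-attention, and folds the result back into spatial form."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, HW, C) token sequence
        out, _ = self.attn(tokens, tokens, tokens)
        out = self.norm(tokens + out)              # residual connection + layer norm
        return out.transpose(1, 2).reshape(b, c, h, w)

print(MHSAOnFeatureMap(256)(torch.randn(1, 256, 32, 32)).shape)  # (1, 256, 32, 32)
```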
45

Gilboa-Schechtman, E., and R. Azoulay. "Treatment of Social Anxiety Disorder: Mechanisms, Techniques, and Empirically Supported Interventions." Клиническая и специальная психология 11, no. 2 (2022): 1–21. http://dx.doi.org/10.17759/cpse.2022110201.

Full text
Abstract:
Social anxiety disorder (SAD) is a prevalent condition negatively affecting one’s sense of self and interpersonal functioning. Relying on cognitive but integrating interpersonal and evolutionary models of SAD as our theoretical base, we review basic processes contributing to the maintenance of this condition (e.g., self-focused attention, imagery, avoidance), as well as the treatment techniques geared to modify such processes (e.g., exposure, attention modification, imagery rescripting). We discuss cognitive-behavioral treatments (CBT) as combining multiple treatment techniques into intervention “packages.” Next, we review the existing empirical evidence on the effectiveness of CBT. Although CBT has accumulated the most support as superior to other credible interventions, we suggest that many treatment challenges remain. We conclude by discussing the ways to enhance the efficacy of CBT for SAD. Specifically, we highlight the need to (a) elucidate the complex relationship between basic processes and techniques, (b) advance personalized interventions, and (c) include a more diverse and comprehensive array of outcome measures.
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Sitong, Tianyi Wu, Haoru Tan, and Guodong Guo. "Pale Transformer: A General Vision Transformer Backbone with Pale-Shaped Attention." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2731–39. http://dx.doi.org/10.1609/aaai.v36i3.20176.

Full text
Abstract:
Recently, Transformers have shown promising performance in various vision tasks. To reduce the quadratic computation complexity caused by global self-attention, various methods constrain the range of attention within a local region to improve efficiency. Consequently, their receptive fields in a single attention layer are not large enough, resulting in insufficient context modeling. To address this issue, we propose a Pale-Shaped self-Attention (PS-Attention), which performs self-attention within a pale-shaped region. Compared with global self-attention, PS-Attention reduces computation and memory costs significantly, while capturing richer contextual information at a computation complexity similar to that of previous local self-attention mechanisms. Based on PS-Attention, we develop a general Vision Transformer backbone with a hierarchical architecture, named Pale Transformer, which achieves 83.4%, 84.3%, and 84.9% Top-1 accuracy with model sizes of 22M, 48M, and 85M, respectively, for 224x224 ImageNet-1K classification, outperforming previous Vision Transformer backbones. For downstream tasks, our Pale Transformer backbone performs better than the recent state-of-the-art CSWin Transformer by a large margin on ADE20K semantic segmentation and COCO object detection & instance segmentation. The code will be released on https://github.com/BR-IDL/PaddleViT.
APA, Harvard, Vancouver, ISO, and other styles
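Pale-shaped attention restricts self-attention to interlaced rows and columns of the feature map. The sketch below is a simplified axial-style approximation of that idea, attending along rows and then along columns; it is not the paper's PS-Attention implementation, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class AxialStyleAttention(nn.Module):
    """Simplified row-then-column self-attention over a feature map, as a
    stand-in for local attention patterns such as pale-shaped attention."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, h, w = x.shape
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)   # each row is one sequence
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)   # each column is one sequence
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1) # back to (B, C, H, W)

print(AxialStyleAttention(64)(torch.randn(2, 64, 14, 14)).shape)  # (2, 64, 14, 14)
```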
47

Yan, Wenhui, Wending Tang, Lihua Wang, Yannan Bin, and Junfeng Xia. "PrMFTP: Multi-functional therapeutic peptides prediction based on multi-head self-attention mechanism and class weight optimization." PLOS Computational Biology 18, no. 9 (September 12, 2022): e1010511. http://dx.doi.org/10.1371/journal.pcbi.1010511.

Full text
Abstract:
Prediction of therapeutic peptides is a significant step in the discovery of promising therapeutic drugs. Most existing studies have focused on mono-functional therapeutic peptide prediction. However, the number of multi-functional therapeutic peptides (MFTP) is growing rapidly, which calls for new computational schemes to facilitate MFTP discovery. In this study, based on a multi-head self-attention mechanism and a class weight optimization algorithm, we propose a novel model called PrMFTP for MFTP prediction. PrMFTP exploits a multi-scale convolutional neural network, bi-directional long short-term memory, and multi-head self-attention mechanisms to fully extract and learn informative features of peptide sequences for MFTP prediction. In addition, we design a class weight optimization scheme to address the problem of label-imbalanced data. Comprehensive evaluations demonstrate that PrMFTP is superior to other state-of-the-art computational methods for predicting MFTP. We provide a user-friendly web server for PrMFTP, available at http://bioinfo.ahu.edu.cn/PrMFTP.
APA, Harvard, Vancouver, ISO, and other styles
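One simple way to realize class weighting for label-imbalanced multi-label data, in the spirit of the abstract above, is to weight each label's positive term by its inverse frequency. The sketch below uses PyTorch's pos_weight for this; it is an illustrative assumption, not PrMFTP's actual class weight optimization algorithm, and the counts are made up.

```python
import torch
import torch.nn as nn

# Compute a per-class positive weight from label counts: rare labels get
# larger weights so their positive examples contribute more to the loss.
def class_weights_from_counts(pos_counts: torch.Tensor, n_samples: int) -> torch.Tensor:
    neg_counts = n_samples - pos_counts
    return neg_counts / pos_counts.clamp(min=1)

pos_counts = torch.tensor([500., 80., 20.])       # e.g. positives per function label
criterion = nn.BCEWithLogitsLoss(pos_weight=class_weights_from_counts(pos_counts, 1000))

logits = torch.randn(16, 3)                       # batch of 16 peptides, 3 labels
targets = torch.randint(0, 2, (16, 3)).float()
print(criterion(logits, targets))                 # class-weighted multi-label loss
```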
48

You, Yujie, Le Zhang, Peng Tao, Suran Liu, and Luonan Chen. "Spatiotemporal Transformer Neural Network for Time-Series Forecasting." Entropy 24, no. 11 (November 14, 2022): 1651. http://dx.doi.org/10.3390/e24111651.

Full text
Abstract:
Predicting high-dimensional short-term time-series is a difficult task due to the lack of sufficient information and the curse of dimensionality. To overcome these problems, this study proposes a novel spatiotemporal transformer neural network (STNN) for efficient prediction of short-term time-series with three major features. Firstly, the STNN can accurately and robustly predict a high-dimensional short-term time-series in a multi-step-ahead manner by exploiting high-dimensional/spatial information based on the spatiotemporal information (STI) transformation equation. Secondly, the continuous attention mechanism makes the prediction results more accurate than those of previous studies. Thirdly, we developed continuous spatial self-attention, temporal self-attention, and transformation attention mechanisms to create a bridge between effective spatial information and future temporal evolution information. In addition, we show that the STNN model can reconstruct the phase space of the dynamical system explored in the time-series prediction. The experimental results demonstrate that the STNN significantly outperforms existing methods on various benchmarks and real-world systems in the multi-step-ahead prediction of short-term time-series.
APA, Harvard, Vancouver, ISO, and other styles
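The general idea of separate spatial and temporal self-attention over a multivariate series can be sketched as follows. This is a generic illustration with an assumed tensor layout and head count, not the STNN's continuous attention mechanisms or its STI transformation.

```python
import torch
import torch.nn as nn

class SpatioTemporalAttention(nn.Module):
    """Applies self-attention first across the variable (spatial) axis and then
    across the time axis of a multivariate series."""
    def __init__(self, dim: int, num_heads: int = 2):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                                   # x: (B, T, V, D)
        b, t, v, d = x.shape
        s = x.reshape(b * t, v, d)                          # attend across variables
        s, _ = self.spatial(s, s, s)
        x = s.reshape(b, t, v, d)
        tt = x.transpose(1, 2).reshape(b * v, t, d)         # attend across time steps
        tt, _ = self.temporal(tt, tt, tt)
        return tt.reshape(b, v, t, d).transpose(1, 2)       # back to (B, T, V, D)

# 8 sequences, 12 time steps, 20 variables, 32-dim embeddings per variable
print(SpatioTemporalAttention(32)(torch.randn(8, 12, 20, 32)).shape)
```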
49

Elster, Jon. "Self-poisoning of the mind." Philosophical Transactions of the Royal Society B: Biological Sciences 365, no. 1538 (January 27, 2010): 221–26. http://dx.doi.org/10.1098/rstb.2009.0176.

Full text
Abstract:
Rational-choice theory tries to explain behaviour on the assumption that individuals optimize. Some forms of irrational behaviour can be explained by assuming that the individual is subject to hedonic, pleasure-seeking mechanisms, such as wishful thinking or adaptive preference formation. In this paper, I draw attention to psychic mechanisms, originating in the individual, which make her worse off. I first consider the ideas of counterwishful thinking and of counteradaptive preference formation and then, drawing heavily on Proust, the self-poisoning of the mind that occurs through the operation of amour-propre.
APA, Harvard, Vancouver, ISO, and other styles
50

Marmolejo-Martínez-Artesero, Sara, Caty Casas, and David Romeo-Guitart. "Endogenous Mechanisms of Neuroprotection: To Boost or Not to Be." Cells 10, no. 2 (February 10, 2021): 370. http://dx.doi.org/10.3390/cells10020370.

Full text
Abstract:
Postmitotic cells, like neurons, must last a lifetime. For this reason, organisms and cells have evolved self-repair mechanisms that allow them to live long. In recent years, the search for neuroprotective agents has focused on blocking the pathophysiological mechanisms that lead to neuronal loss in neurodegeneration. Unfortunately, only a few strategies from these studies were able to slow down or prevent neurodegeneration. There is compelling evidence that endorsing the self-healing mechanisms that organisms and cells endogenously possess, commonly referred to as cellular resilience, can arm neurons and promote their self-healing. Although enhancing these mechanisms has not yet received sufficient attention, these pathways open up new therapeutic avenues to prevent neuronal death and ameliorate neurodegeneration. Here, we highlight the main endogenous mechanisms of protection and describe their role in promoting neuron survival during neurodegeneration.
APA, Harvard, Vancouver, ISO, and other styles