A selection of scholarly literature on the topic "Graph attention network (GAT)"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Graph attention network (GAT)".

Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, provided the corresponding details are available in the metadata.

Journal articles on the topic "Graph attention network (GAT)":

1

Wu, Nan, and Chaofan Wang. "Ensemble Graph Attention Networks." Transactions on Machine Learning and Artificial Intelligence 10, no. 3 (June 12, 2022): 29–41. http://dx.doi.org/10.14738/tmlai.103.12399.

Abstract:
Graph neural networks have demonstrated their success in many applications on graph-structured data. Many efforts have been devoted to elaborating new network architectures and learning algorithms over the past decade, but the exploration of applying ensemble learning techniques to enhance existing graph algorithms has been overlooked. In this work, we propose a simple, generic bagging-based ensemble learning strategy that is applicable to any backbone graph model. We then propose two ensemble graph neural network models, Ensemble-GAT and Ensemble-HetGAT, by applying the ensemble strategy to the graph attention network (GAT) and a heterogeneous graph attention network (HetGAT). We demonstrate the effectiveness of the proposed ensemble strategy on GAT and HetGAT through comprehensive node classification experiments with four real-world homogeneous graph datasets and three real-world heterogeneous graph datasets. The proposed Ensemble-GAT and Ensemble-HetGAT outperform the state-of-the-art graph neural network and heterogeneous graph neural network models on most of the benchmark datasets. The proposed ensemble strategy also alleviates the over-smoothing problem in GAT and HetGAT.
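The paper's exact training setup is not given in the abstract, but the described bagging strategy can be sketched roughly as follows: train several GAT backbones on bootstrap-resampled subsets of the labeled nodes and average their class probabilities at inference time. The snippet below is only an illustrative sketch; PyTorch Geometric, the Cora dataset, and all hyperparameters are assumptions, not the authors' code.

# Minimal sketch of a bagging-style ensemble of GAT node classifiers.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GATConv

class GAT(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim, heads=8):
        super().__init__()
        self.conv1 = GATConv(in_dim, hid_dim, heads=heads, dropout=0.6)
        self.conv2 = GATConv(hid_dim * heads, out_dim, heads=1, dropout=0.6)

    def forward(self, x, edge_index):
        x = F.elu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

data = Planetoid(root="/tmp/Cora", name="Cora")[0]
train_idx = data.train_mask.nonzero(as_tuple=False).view(-1)

def train_member(seed):
    torch.manual_seed(seed)
    # Bagging: bootstrap-resample the labeled nodes (sampling with replacement).
    boot = train_idx[torch.randint(len(train_idx), (len(train_idx),))]
    model = GAT(data.num_features, 8, int(data.y.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=5e-4)
    for _ in range(100):
        model.train()
        opt.zero_grad()
        loss = F.cross_entropy(model(data.x, data.edge_index)[boot], data.y[boot])
        loss.backward()
        opt.step()
    return model

members = [train_member(seed) for seed in range(5)]
with torch.no_grad():
    for m in members:
        m.eval()
    # Ensemble prediction: average the softmax outputs of all members.
    probs = torch.stack([m(data.x, data.edge_index).softmax(-1) for m in members]).mean(0)
pred = probs.argmax(-1)
acc = (pred[data.test_mask] == data.y[data.test_mask]).float().mean()
print(f"ensemble test accuracy: {acc.item():.3f}")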
2

Verma, Atul Kumar, Rahul Saxena, Mahipal Jadeja, Vikrant Bhateja, and Jerry Chun-Wei Lin. "Bet-GAT: An Efficient Centrality-Based Graph Attention Model for Semi-Supervised Node Classification." Applied Sciences 13, no. 2 (January 7, 2023): 847. http://dx.doi.org/10.3390/app13020847.

Abstract:
Graph Neural Networks (GNNs) have witnessed great advancement in the field of neural networks for processing graph datasets. Graph Convolutional Networks (GCNs) have outperformed current models/algorithms in accomplishing tasks such as semi-supervised node classification, link prediction, and graph classification. GCNs perform well even with a very small training dataset. The GCN framework has evolved into the Graph Attention Model (GAT), GraphSAGE, and other hybrid frameworks. In this paper, we effectively use the network centrality approach to select nodes from the training set (instead of a traditional random selection), which are fed into the GCN (and GAT) to perform semi-supervised node classification tasks. This allows us to take advantage of the best-positioned nodes in the network. Based on empirical analysis, we choose the betweenness centrality measure for selecting the training nodes. We also mathematically justify why our proposed technique offers better training. This novel training technique is used to analyze the performance of GCN and GAT models on five benchmark networks: Cora, Citeseer, PubMed, Wiki-CS, and Amazon Computers. In GAT implementations, we obtain improved classification accuracy compared to other state-of-the-art GCN-based methods. Moreover, to the best of our knowledge, the results obtained for the Citeseer, Wiki-CS, and Amazon Computers datasets are the best compared to all existing node classification methods.
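The node-selection step described above is straightforward to reproduce in spirit: rank nodes by betweenness centrality and take the top-ranked nodes of each class as the training set. The sketch below uses NetworkX on a toy graph; the per-class budget k_per_class and the label source are illustrative assumptions, not the authors' exact protocol.

# Sketch: choose training nodes by betweenness centrality instead of random sampling.
import networkx as nx

def select_training_nodes(G, labels, k_per_class):
    """Return the k most central (betweenness) nodes of each class."""
    bc = nx.betweenness_centrality(G)   # exact betweenness; pass k=... for an approximation
    selected = []
    for cls in set(labels.values()):
        nodes = [n for n in G.nodes if labels[n] == cls]
        nodes.sort(key=lambda n: bc[n], reverse=True)
        selected.extend(nodes[:k_per_class])
    return selected

# Toy usage on a small two-class graph.
G = nx.karate_club_graph()
labels = {n: d["club"] for n, d in G.nodes(data=True)}
print(select_training_nodes(G, labels, k_per_class=5))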
3

Lu, Shengfu, Jiaming Kang, Jinyu Zhang, and Mi Li. "Assessment method of depressive disorder level based on graph attention network." ITM Web of Conferences 45 (2022): 01039. http://dx.doi.org/10.1051/itmconf/20224501039.

Abstract:
This paper presents an approach to predicting the Patient Health Questionnaire-9 (PHQ-9) depression self-rating scores from pupil-diameter data based on the graph attention network (GAT). The pupil-diameter signal was derived from eye information collected synchronously while the subjects were viewing a virtual-reality emotional scene, and the scores of the PHQ-9 depression self-rating scale were then collected to assess depression level. A Chebyshev distance-based GAT (Chebyshev-GAT) was constructed by extracting the pupil-diameter change rate, emotional bandwidth, information entropy and energy, and their statistical distributions. The results show that the error (MAE and SMRE) of the predictions made with Chebyshev-GAT is smaller than that of traditional regression prediction models.
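The abstract leaves the graph construction implicit; one plausible reading is that nodes carrying pupil-derived feature vectors are connected whenever their Chebyshev (L-infinity) distance is small. The snippet below sketches that reading only; the feature dimensions and the threshold are assumptions, not details from the paper.

# Sketch: build an adjacency matrix from feature vectors using the Chebyshev distance.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
features = rng.normal(size=(20, 4))        # 20 nodes, 4 pupil-derived features (toy data)

D = cdist(features, features, metric="chebyshev")   # L_inf distance between every pair
threshold = np.median(D)                             # illustrative cut-off
A = (D < threshold).astype(int)
np.fill_diagonal(A, 0)                               # no self-loops
print("number of edges:", int(A.sum() // 2))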
4

Xiang, Zhijie, Weijia Gong, Zehui Li, Xue Yang, Jihua Wang, and Hong Wang. "Predicting Protein–Protein Interactions via Gated Graph Attention Signed Network." Biomolecules 11, no. 6 (May 28, 2021): 799. http://dx.doi.org/10.3390/biom11060799.

Abstract:
Protein–protein interactions (PPIs) play a key role in signal transduction and pharmacogenomics, and hence, accurate PPI prediction is crucial. Graph structures have received increasing attention owing to their outstanding performance in machine learning. In practice, PPIs can be expressed as a signed network (i.e., graph structure), wherein the nodes in the network represent proteins, and edges represent the interactions (positive or negative effects) of protein nodes. PPI predictions can be realized by predicting the links of the signed network; therefore, the use of gated graph attention for signed networks (SN-GGAT) is proposed herein. First, the concept of graph attention network (GAT) is applied to signed networks, in which “attention” represents the weight of neighbor nodes, and GAT updates the node features through the weighted aggregation of neighbor nodes. Then, the gating mechanism is defined and combined with the balance theory to obtain the high-order relations of protein nodes to improve the attention effect, making the attention mechanism follow the principle of “low-order high attention, high-order low attention, different signs opposite”. PPIs are subsequently predicted on the Saccharomyces cerevisiae core dataset and the Human dataset. The test results demonstrate that the proposed method exhibits strong competitiveness.
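The "high-order relations" obtained from balance theory, as mentioned above, reduce to multiplying edge signs along paths (a friend of a friend is a friend; a friend of an enemy is an enemy). The toy sketch below illustrates that rule for 2-hop relations on a small signed adjacency matrix; it is not the SN-GGAT implementation.

# Sketch: balance-theory signs of 2-hop (high-order) relations in a signed network.
import numpy as np

# Signed adjacency: +1 positive interaction, -1 negative, 0 no edge (toy protein graph).
A = np.array([[ 0,  1, -1,  0],
              [ 1,  0,  1,  0],
              [-1,  1,  0,  1],
              [ 0,  0,  1,  0]])

two_hop = A @ A              # entry (i, j) sums the sign products over all 2-step paths
signs = np.sign(two_hop)     # > 0: balanced (positive) relation, < 0: negative relation
np.fill_diagonal(signs, 0)
print(signs)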
5

Yuan, Hong, Jing Huang, and Jin Li. "Protein-ligand binding affinity prediction model based on graph attention network." Mathematical Biosciences and Engineering 18, no. 6 (2021): 9148–62. http://dx.doi.org/10.3934/mbe.2021451.

Abstract:
Estimating the binding affinity between proteins and drugs is very important in the application of structure-based drug design. Currently, applying machine learning to build protein-ligand binding affinity prediction models, which helps to improve the performance of classical scoring functions, has attracted many scientists' attention. In this paper, we have developed an affinity prediction model called GAT-Score based on the graph attention network (GAT). The protein-ligand complex is represented by a graph structure, and the atoms of the protein and the ligand are treated in the same manner. Two improvements are made to the original graph attention network. Firstly, a dynamic feature mechanism is designed to enable the model to deal with bond features. Secondly, a virtual super node is introduced to aggregate node-level features into graph-level features, so that the model can be used for graph-level regression problems. The PDBbind database v.2018 is used to train the model. Finally, the performance of GAT-Score was tested under the scheme C_s (core set as the test set) and CV (cross-validation). It has been found that our results are better than those of most machine learning models based on traditional molecular descriptors.
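The virtual super node mentioned in this abstract can be illustrated independently of GAT-Score itself: append one extra node connected to every existing node, run message passing, and read the super node's embedding as the graph-level feature. The sketch below (plain PyTorch Geometric, toy data, illustrative initialisation) is an assumption about how such a node can be wired in, not the authors' code.

# Sketch: add a virtual super node so its embedding acts as a graph-level readout.
import torch
from torch_geometric.nn import GATConv

def add_super_node(x, edge_index):
    n = x.size(0)
    super_feat = x.mean(dim=0, keepdim=True)                 # illustrative init of the super node
    x = torch.cat([x, super_feat], dim=0)
    others = torch.arange(n, dtype=torch.long)
    super_idx = torch.full((n,), n, dtype=torch.long)
    extra = torch.cat([torch.stack([others, super_idx]),     # node -> super node edges
                       torch.stack([super_idx, others])], dim=1)  # super node -> node edges
    return x, torch.cat([edge_index, extra], dim=1)

x = torch.randn(5, 16)                                       # 5 atoms with 16-dim features (toy)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
x, edge_index = add_super_node(x, edge_index)
conv = GATConv(16, 32)
graph_embedding = conv(x, edge_index)[-1]                    # last row = super node = graph feature
print(graph_embedding.shape)                                 # torch.Size([32])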
6

Jing, Weipeng, Xianyang Song, Donglin Di, and Houbing Song. "geoGAT: Graph Model Based on Attention Mechanism for Geographic Text Classification." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 5 (September 30, 2021): 1–18. http://dx.doi.org/10.1145/3434239.

Abstract:
In the area of geographic information processing, there has been little research on geographic text classification, and applications of this task to Chinese are relatively rare. In our work, we implement a method to extract text containing geographical entities from a large number of network texts. The geographic information in these texts is of great practical significance to transportation, urban and rural planning, disaster relief, and other fields. We use a graph convolutional neural network with an attention mechanism to achieve this. Graph attention networks (GAT) are an improvement over graph convolutional neural networks (GCN). Compared with GCN, the advantage of GAT is that an attention mechanism is used to weight the sum of the features of adjacent vertices. In addition, we construct a Chinese dataset containing geographical classes from multiple Chinese text classification datasets. The Macro-F score of the geoGAT we used reached 95% on the new Chinese dataset.
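For reference, the neighbor weighting that distinguishes GAT from GCN, as described in this abstract, is the standard attention coefficient of Veličković et al. (2018); in the usual notation,

e_{ij} = \mathrm{LeakyReLU}\big(\mathbf{a}^{\top}[\mathbf{W}h_i \,\Vert\, \mathbf{W}h_j]\big), \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i}\exp(e_{ik})}, \qquad
h_i' = \sigma\Big(\sum_{j \in \mathcal{N}_i} \alpha_{ij}\,\mathbf{W}h_j\Big),

where W is a shared weight matrix, a is a learnable attention vector, \Vert denotes concatenation, and \mathcal{N}_i is the neighborhood of node i. A GCN layer uses fixed, degree-based coefficients in place of the learned \alpha_{ij}.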
7

Liu, Yiwen, Tao Wen, and Zhenning Wu. "Motion Artifact Detection Based on Regional–Temporal Graph Attention Network from Head Computed Tomography Images." Electronics 13, no. 4 (February 10, 2024): 724. http://dx.doi.org/10.3390/electronics13040724.

Abstract:
Artifacts are the main cause of degradation in CT image quality and diagnostic accuracy. Because of the complex texture of CT images, it is a challenging task to automatically detect artifacts from limited image samples. Recently, graph convolutional networks (GCNs) have achieved great success and shown promising results in medical imaging due to their powerful learning ability. However, GCNs do not take the attention mechanism into consideration. To overcome their limitations, we propose a novel Regional–Temporal Graph Attention Network for motion artifact detection from computed tomography images (RT-GAT). In this paper, head CT images are viewed as a heterogeneous graph by taking regional and temporal information into consideration, and the graph attention network is utilized to extract the features of the constructed graph. Then, the feature vector is input into the classifier to detect the motion artifacts. The experimental results demonstrate that our proposed RT-GAT method outperforms the state-of-the-art methods on a real-world CT dataset.
8

Huang, Ling, Xing-Xing Liu, Shu-Qiang Huang, Chang-Dong Wang, Wei Tu, Jia-Meng Xie, Shuai Tang, and Wendi Xie. "Temporal Hierarchical Graph Attention Network for Traffic Prediction." ACM Transactions on Intelligent Systems and Technology 12, no. 6 (December 31, 2021): 1–21. http://dx.doi.org/10.1145/3446430.

Abstract:
As a critical task in intelligent traffic systems, traffic prediction has received a large amount of attention in the past few decades. Early efforts mainly modeled traffic prediction as a time-series mining problem, in which spatial dependence was largely ignored. With the rapid development of deep learning, some attempts have been made to model traffic prediction as a spatio-temporal data mining problem in a road network, in which deep learning techniques can be adopted to model the spatial and temporal dependencies simultaneously. Despite this success, the spatial and temporal dependencies are only modeled in a regionless network, without considering the underlying hierarchical regional structure of the spatial nodes, which is an important structure naturally existing in real-world road networks. Apart from the challenge of modeling the spatial and temporal dependencies, as in existing studies, the extra challenge introduced by the hierarchical regional structure of the road network lies in simultaneously modeling the spatial and temporal dependencies between nodes and regions and between regions themselves. To this end, this article proposes a new Temporal Hierarchical Graph Attention Network (TH-GAT). The main idea is to augment the original road network into a region-augmented network in which the hierarchical regional structure can be modeled. Based on the region-augmented network, a region-aware spatial dependence model and a region-aware temporal dependence model can be constructed, which are the two main components of the proposed TH-GAT model. In addition, in the region-aware spatial dependence model, the graph attention network is adopted, in which the importance of a node to another node, of a node to a region, of a region to a node, and of a region to another region can be captured automatically by means of the attention coefficients. Extensive experiments are conducted on two real-world traffic datasets, and the results confirm the superiority of the proposed TH-GAT model.
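The region augmentation described above can be illustrated with a small sketch: given an assignment of road nodes to regions, add one vertex per region and connect it to its member nodes, so that attention can later flow between nodes and regions and between regions. The node-to-region assignment and names below are illustrative assumptions, not the TH-GAT construction.

# Sketch: augment a road graph with region-level nodes (one per region).
import networkx as nx

road = nx.path_graph(6)                                        # toy road network: nodes 0..5
region_of = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}   # toy node-to-region assignment

augmented = road.copy()
for node, region in region_of.items():
    augmented.add_edge(node, f"region_{region}")               # node-region edges
augmented.add_edge("region_A", "region_B")                     # region-region edge

print(augmented.number_of_nodes(), augmented.number_of_edges())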
9

Song, Kyungwoo, Yohan Jung, Dongjun Kim, and Il-Chul Moon. "Implicit Kernel Attention." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9713–21. http://dx.doi.org/10.1609/aaai.v35i11.17168.

Abstract:
Attention computes the dependency between representations, and it encourages the model to focus on important, selective features. Attention-based models, such as the Transformer and the graph attention network (GAT), are widely used for sequential data and graph-structured data. This paper suggests a new interpretation and generalized structure of the attention in the Transformer and GAT. For the attention in the Transformer and GAT, we derive that the attention is a product of two parts: 1) an RBF kernel that measures the similarity of two instances and 2) the exponential of the L2 norm, which computes the importance of individual instances. From this decomposition, we generalize the attention in three ways. First, we propose implicit kernel attention with an implicit kernel function instead of manual kernel selection. Second, we generalize the L2 norm to the Lp norm. Third, we extend our attention to structured multi-head attention. Our generalized attention shows better performance on classification, translation, and regression tasks.
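The decomposition stated in this abstract follows from a short identity for (unscaled) dot-product attention scores; constants such as the 1/\sqrt{d} scaling are omitted in this sketch:

\exp(q^{\top}k) \;=\; \exp\!\Big(-\tfrac{1}{2}\lVert q-k\rVert_2^{2}\Big)\cdot\exp\!\Big(\tfrac{1}{2}\lVert q\rVert_2^{2}\Big)\exp\!\Big(\tfrac{1}{2}\lVert k\rVert_2^{2}\Big),

which holds because q^{\top}k = \tfrac{1}{2}\big(\lVert q\rVert_2^{2} + \lVert k\rVert_2^{2} - \lVert q-k\rVert_2^{2}\big). The first factor is an RBF-kernel similarity between the two instances; the remaining factors depend only on the individual norms, i.e., on instance importance. Replacing the RBF kernel with an implicit kernel, or the squared L2 norm with an Lp norm, gives the generalizations described above.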
10

Zheng, Jing, Ziren Gao, Jingsong Ma, Jie Shen, and Kang Zhang. "Deep Graph Convolutional Networks for Accurate Automatic Road Network Selection." ISPRS International Journal of Geo-Information 10, no. 11 (November 11, 2021): 768. http://dx.doi.org/10.3390/ijgi10110768.

Abstract:
The selection of road networks is very important for cartographic generalization. Traditional artificial intelligence methods have improved selection efficiency but cannot fully extract the spatial features of road networks. However, current selection methods, which are based on the theory of graphs or strokes, have low automaticity and are highly subjective. Graph convolutional networks (GCNs) combine graph theory with neural networks; thus, they can not only extract spatial information but also realize automatic selection. Therefore, in this study, we adopted GCNs for automatic road network selection and transformed the process into one of node classification. In addition, to solve the problem of gradient vanishing in GCNs, we compared and analyzed the results of various GCNs (GraphSAGE and graph attention networks [GAT]) by selecting small-scale road networks under different deep architectures (JK-Nets, ResNet, and DenseNet). Our results indicate that GAT provides better selection of road networks than other models. Additionally, the three abovementioned deep architectures can effectively improve the selection effect of models; JK-Nets demonstrated more improvement with higher accuracy (88.12%) than other methods. Thus, our study shows that GCN is an appropriate tool for road network selection; its application in cartography must be further explored.

Dissertations on the topic "Graph attention network (GAT)":

1

Belhadj, Djedjiga. "Multi-GAT semi-supervisé pour l’extraction d’informations et son adaptation au chiffrement homomorphe." Electronic Thesis or Diss., Université de Lorraine, 2024. http://www.theses.fr/2024LORR0023.

Abstract:
This thesis was carried out as part of the BPI DeepTech project, in collaboration with the company Fair&Smart, with a primary focus on the protection of personal data in accordance with the General Data Protection Regulation (GDPR). In this context, we propose a deep neural model for extracting information from semi-structured administrative documents (SSDs). Due to the lack of public training datasets, we propose an artificial generator of SSDs that can produce several classes of documents with wide variation in content and layout. Documents are generated using random variables that control content and layout while respecting constraints designed to keep them close to real documents. Metrics were introduced to evaluate the content and layout diversity of the generated SSDs. The evaluation showed that the generated datasets for three SSD types (payslips, receipts, and invoices) exhibit a high level of diversity, which helps avoid overfitting when training information extraction systems. Exploiting the specific format of SSDs, which consist of (keyword, information) word pairs located in spatially close neighborhoods, the document is modeled as a graph in which nodes represent words and edges represent neighborhood relations. The graph is fed into a multi-layer graph attention network (Multi-GAT), which applies the multi-head attention mechanism to learn the importance of each word's neighbors in order to classify it better. A first version of this model was used in supervised mode and obtained an F1 score of 96% on two generated invoice and payslip datasets, and 89% on a real receipt dataset (SROIE). We then enriched the Multi-GAT with a multimodal embedding of word-level information (textual, visual, and positional components) and combined it with a variational graph auto-encoder (VGAE). This model operates in semi-supervised mode and can learn from labeled and unlabeled data simultaneously. To further optimize graph node classification, we proposed a semi-VGAE whose encoder shares its first layers with the Multi-GAT classifier; this is reinforced by a VGAE loss function driven by the classification loss. Using a small unlabeled dataset, we improved the F1 score obtained on a generated invoice dataset by more than 3%. Since the model is intended to operate in a protected environment, we adapted its architecture for homomorphic encryption. We studied a dimensionality-reduction method for the Multi-GAT model and proposed a polynomial approximation approach for its non-linear functions. To reduce the model's dimensions, we proposed a multimodal feature fusion method that requires few additional parameters and reduces the size of the model while improving its performance. For the encryption adaptation, we studied low-degree polynomial approximations of the non-linear functions, using knowledge distillation and fine-tuning techniques to better adapt the model to the new approximations. We were able to minimize the approximation loss by around 3% on two invoice datasets as well as a payslip dataset, and by 5% on SROIE.
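The abstract mentions replacing non-linear functions with low-degree polynomial approximations so that the model can run under homomorphic encryption. A minimal sketch of that idea, a least-squares polynomial fit to LeakyReLU on a bounded interval using NumPy, is shown below; the degree, the interval, and the choice of activation are illustrative assumptions, not the settings used in the thesis.

# Sketch: approximate LeakyReLU with a low-degree polynomial on [-5, 5],
# an HE-friendly stand-in for the exact non-linearity.
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

x = np.linspace(-5, 5, 1001)
coeffs = np.polyfit(x, leaky_relu(x), deg=3)        # degree-3 least-squares fit
poly = np.poly1d(coeffs)

max_err = np.max(np.abs(poly(x) - leaky_relu(x)))
print("coefficients:", np.round(coeffs, 4))
print("max absolute error on [-5, 5]:", round(float(max_err), 4))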
2

Lee, John Boaz T. "Deep Learning on Graph-structured Data." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/570.

Abstract:
In recent years, deep learning has made a significant impact in various fields – helping to push the state-of-the-art forward in many application domains. Convolutional Neural Networks (CNN) have been applied successfully to tasks such as visual object detection, image super-resolution, and video action recognition while Long Short-term Memory (LSTM) and Transformer networks have been used to solve a variety of challenging tasks in natural language processing. However, these popular deep learning architectures (i.e., CNNs, LSTMs, and Transformers) can only handle data that can be represented as grids or sequences. Due to this limitation, many existing deep learning approaches do not generalize to problem domains where the data is represented as graphs – social networks in social network analysis or molecular graphs in chemoinformatics, for instance. The goal of this thesis is to help bridge the gap by studying deep learning solutions that can handle graph data naturally. In particular, we explore deep learning-based approaches in the following areas. 1. Graph Attention. In the real-world, graphs can be both large – with many complex patterns – and noisy which can pose a problem for effective graph mining. An effective way to deal with this issue is to use an attention-based deep learning model. An attention mechanism allows the model to focus on task-relevant parts of the graph which helps the model make better decisions. We introduce a model for graph classification which uses an attention-guided walk to bias exploration towards more task-relevant parts of the graph. For the task of node classification, we study a different model – one with an attention mechanism which allows each node to select the most task-relevant neighborhood to integrate information from. 2. Graph Representation Learning. Graph representation learning seeks to learn a mapping that embeds nodes, and even entire graphs, as points in a low-dimensional continuous space. The function is optimized such that the geometric distance between objects in the embedding space reflect some sort of similarity based on the structure of the original graph(s). We study the problem of learning time-respecting embeddings for nodes in a dynamic network. 3. Brain Network Discovery. One of the fundamental tasks in functional brain analysis is the task of brain network discovery. The brain is a complex structure which is made up of various brain regions, many of which interact with each other. The objective of brain network discovery is two-fold. First, we wish to partition voxels – from a functional Magnetic Resonance Imaging scan – into functionally and spatially cohesive regions (i.e., nodes). Second, we want to identify the relationships (i.e., edges) between the discovered regions. We introduce a deep learning model which learns to construct a group-cohesive partition of voxels from the scans of multiple individuals in the same group. We then introduce a second model which can recover a hierarchical set of brain regions, allowing us to examine the functional organization of the brain at different levels of granularity. Finally, we propose a model for the problem of unified and group-contrasting edge discovery which aims to discover discriminative brain networks that can help us to better distinguish between samples from different classes.
3

You, Di. "Attributed Multi-Relational Attention Network for Fact-checking URL Recommendation." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1321.

Abstract:
To combat fake news, researchers mostly focused on detecting fake news and journalists built and maintained fact-checking sites (e.g., Snopes.com and Politifact.com). However, fake news dissemination has been greatly promoted by social media sites, and these fact-checking sites have not been fully utilized. To overcome these problems and complement existing methods against fake news, in this thesis, we propose a deep-learning based fact-checking URL recommender system to mitigate impact of fake news in social media sites such as Twitter and Facebook. In particular, our proposed framework consists of a multi-relational attentive module and a heterogeneous graph attention network to learn complex/semantic relationship between user-URL pairs, user-user pairs, and URL-URL pairs. Extensive experiments on a real-world dataset show that our proposed framework outperforms seven state-of-the-art recommendation models, achieving at least 3~5.3% improvement.
4

Dronzeková, Michaela. "Analýza polygonálních modelů pomocí neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417253.

Abstract:
This thesis deals with rotation estimation of a 3D model of the human jaw. It describes and compares methods for direct analysis of 3D models as well as a method that analyzes the model using rasterization. To evaluate the performance of the proposed methods, a metric is used that counts the cases in which the prediction was less than 30° from the ground truth. The proposed method that uses rasterization takes three X-ray views of the model as input and processes them with a convolutional network. It achieves the best performance, 99% under the described metric. The method that directly analyzes the polygonal model as a sequence uses an attention mechanism and was inspired by the transformer architecture. A special pooling function was proposed for this network that decreases its memory requirements. This method achieves 88%, but it does not require rasterization and can process the polygonal model directly. It is not as good as the rasterization method with the X-ray rendering, but it is better than the rasterization method without the X-ray rendering. The last method uses a graph representation of the mesh. The graph network had problems with overfitting, which is why it did not achieve good results; I think this method is not very suitable for analyzing polygonal models.
5

Blini, Elvio A. "Biases in Visuo-Spatial Attention: from Assessment to Experimental Induction." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424480.

Abstract:
In this work I present several studies, which might appear rather heterogeneous for both experimental questions and methodological approaches, and yet are linked by a common leitmotiv: spatial attention. I will address issues related to the assessment of attentional asymmetries, in the healthy individual as in patients with neurological disorders, their role in various aspects of human cognition, and their neural underpinning, driven by the deep belief that spatial attention plays an important role in various mental processes that are not necessarily confined to perception. What follows is organized into two distinct sections. In the first I will focus on the evaluation of visuospatial asymmetries, starting from the description of a new paradigm particularly suitable for this purpose. In the first chapter I will describe the effects of multitasking in a spatial monitoring test; the main result shows a striking decreasing in detection performance as a function of the introduced memory load. In the second chapter I will apply the same paradigm to a clinical population characterized by a brain lesion affecting the left hemisphere. Despite a standard neuropsychological battery failed to highlight any lateralized attentional deficit, I will show that exploiting concurrent demands might lead to enhanced sensitivity of diagnostic tests and consequently positive effects on patients’ diagnostic and therapeutic management. Finally, in the third chapter I will suggest, in light of preliminary data, that attentional asymmetries also occur along the sagittal axis; I will argue, in particular, that more attentional resources appear to be allocated around peripersonal space, the resulting benefits extending to various tasks (i.e., discrimination tasks). Then, in the second section, I will follow a complementary approach: I will seek to induce attentional shifts in order to evaluate their role in different cognitive tasks. In the fourth and fifth chapters this will be pursued exploiting sensory stimulations: visual optokinetic stimulation and galvanic vestibular stimulation, respectively. In the fourth chapter I will show that spatial attention is highly involved in numerical cognition, this relationship being bidirectional. Specifically, I will show that optokinetic stimulation modulates the occurrence of procedural errors during mental arithmetics, and that calculation itself affects oculomotor behaviour in turn. In the fifth chapter I will examine the effects of galvanic vestibular stimulation, a particularly promising technique for the rehabilitation of lateralized attention disorders, on spatial representations. I will discuss critically a recent account for unilateral spatial neglect, suggesting that vestibular stimulations or disorders might indeed affect the metric representation of space, but not necessarily resulting in spatial unawareness. Finally, in the sixth chapter I will describe an attentional capture phenomenon by intrinsically rewarding distracters. I will seek, in particular, to predict the degree of attentional capture from resting-state functional magnetic resonance imaging data and the related brain connectivity pattern; I will report preliminary data focused on the importance of the cingulate-opercular network, and discuss the results through a parallel with clinical populations characterized by behavioural addictions.
6

Huang, Wei-Chia, and 黃偉嘉. "A Question Answering System for Financial Time-Series Correlation Based on Improved Gated Graph Sequence Neural Network with Attention Mechanism." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/hu4b8r.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Information Management, 2019.
With the rise of financial technology (FinTech) in recent years, the financial industry seeks to make its services more efficient through technology. One of the important topics in FinTech is how to analyze big data and build prediction models based on artificial intelligence. In this setting, we hope to discover the rules and implicit correlations in these data through algorithms and to forecast the future state of the market. We believe that there is a complex correlation pattern between different financial commodities: changes in one commodity may trigger a chain reaction across the financial market through this complex network, and we may be able to build a relational model of these commodities, represented as a graph structure, that reflects the real situation of the market. Such correlations can be learned with the help of deep neural networks, so we focus on graph neural networks and apply them to the financial domain. In this work, we propose a deep learning model based on a graph structure and an attention mechanism, which is applied to the study of interaction relationships in financial time-series data. Traditional deep learning models perform well when the input data lie in a Euclidean space, such as images and sequences; however, the structural information of a graph is easily lost if graph-structured data are learned with traditional deep learning modules. Therefore, it is necessary to design a deep learning model specifically for processing graph structures. In this study, we formulate the various relationships between financial commodities as a graph and learn its representation through graph neural networks. Moreover, we highlight the importance of each commodity through the attention mechanism and finally forecast the future trend of the market with the help of the proposed model.
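The abstract describes modeling the relationships between financial commodities as a graph before feeding it to a graph neural network. One common, simple way to obtain such a graph, shown below as an illustrative assumption rather than the author's construction, is to threshold pairwise correlations of log returns:

# Sketch: build a commodity-relationship graph by thresholding return correlations.
import numpy as np

rng = np.random.default_rng(42)
prices = rng.lognormal(mean=0.0, sigma=0.01, size=(250, 6)).cumprod(axis=0)  # 6 toy price series
returns = np.diff(np.log(prices), axis=0)

corr = np.corrcoef(returns.T)               # 6 x 6 correlation matrix between commodities
A = (np.abs(corr) > 0.3).astype(int)        # illustrative threshold on |correlation|
np.fill_diagonal(A, 0)
edge_index = np.argwhere(A)                 # edge list usable by a graph neural network
print(edge_index.T)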

Book chapters on the topic "Graph attention network (GAT)":

1

Peng, Mi, Buqing Cao, Junjie Chen, Jianxun Liu, and Bing Li. "SC-GAT: Web Services Classification Based on Graph Attention Network." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 513–29. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67537-0_31.

2

Sui, Jianan, Yuehui Chen, Baitong Chen, Yi Cao, Jiazi Chen, and Hanhan Cong. "SeqVec-GAT: A Golgi Classification Model Based on Multi-headed Graph Attention Network." In Intelligent Computing Theories and Application, 697–704. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13829-4_61.

3

Xu, Hongbo, Shuhao Li, Zhenyu Cheng, Rui Qin, Jiang Xie, and Peishuai Sun. "VT-GAT: A Novel VPN Encrypted Traffic Classification Model Based on Graph Attention Neural Network." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 437–56. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-24386-8_24.

4

Wu, Xinhui, Weizhi An, Shiqi Yu, Weiyu Guo, and Edel B. García. "Spatial-Temporal Graph Attention Network for Video-Based Gait Recognition." In Lecture Notes in Computer Science, 274–86. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41299-9_22.

5

Liu, Lu, Xiao Song, Bingli Sun, Guanghong Gong, and Wenxin Li. "MMoE-GAT: A Multi-Gate Mixture-of-Experts Boosted Graph Attention Network for Aircraft Engine Remaining Useful Life Prediction." In Communications in Computer and Information Science, 451–65. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-7240-1_36.

6

Nerrise, Favour, Qingyu Zhao, Kathleen L. Poston, Kilian M. Pohl, and Ehsan Adeli. "An Explainable Geometric-Weighted Graph Attention Network for Identifying Functional Networks Associated with Gait Impairment." In Lecture Notes in Computer Science, 723–33. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43895-0_68.

7

Samanta, Aritra, Shyamali Mitra, Biplab Banerjee, and Nibaran Das. "Cluster—GAT: Mixing Convolutional and Self-Attended Feature Maps Using Graph Attention Networks for Cervical Cell Classification." In Proceedings of 4th International Conference on Frontiers in Computing and Systems, 409–18. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-2611-0_28.

8

Zhang, Xueya, Tong Zhang, Wenting Zhao, Zhen Cui, and Jian Yang. "Dual-Attention Graph Convolutional Network." In Lecture Notes in Computer Science, 238–51. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41299-9_19.

9

Long, Yunfei, Huosheng Xu, Pengyuan Qi, Liguo Zhang, and Jun Li. "Graph Attention Network for Word Embeddings." In Lecture Notes in Computer Science, 191–201. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78612-0_16.

10

Wang, Ziming, Jun Chen, and Haopeng Chen. "EGAT: Edge-Featured Graph Attention Network." In Lecture Notes in Computer Science, 253–64. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86362-3_21.


Conference papers on the topic "Graph attention network (GAT)":

1

Yuan, Xiaosong, Ke Chen, Wanli Zuo, and Yijia Zhang. "TC-GAT: Graph Attention Network for Temporal Causality Discovery." In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023. http://dx.doi.org/10.1109/ijcnn54540.2023.10191712.

2

Huang, Deling, Xialong Tong, and Haodong Yang. "Web Service Recommendation based on Graph Attention Network (GAT-WSR)." In 2022 International Conference on Computer Communication and Informatics (ICCCI). IEEE, 2022. http://dx.doi.org/10.1109/iccci54379.2022.9740941.

3

Ahmed, Ahmed N., Ali Anwar, Siegfried Mercelis, Steven Latre, and Peter Hellinckx. "FF-GAT: Feature Fusion Using Graph Attention Networks." In IECON 2021 - 47th Annual Conference of the IEEE Industrial Electronics Society. IEEE, 2021. http://dx.doi.org/10.1109/iecon48115.2021.9589579.

4

Fang, Yujie, Jie Jiang, and Yixiang He. "Traffic Speed Prediction Based on LSTM-Graph Attention Network (L-GAT)." In 2021 4th International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE). IEEE, 2021. http://dx.doi.org/10.1109/aemcse51986.2021.00163.

5

Zheng, Zhaohua, Jianfang Li, Lingjie Zhu, Honghua Li, Frank Petzold, and Ping Tan. "GAT-CADNet: Graph Attention Network for Panoptic Symbol Spotting in CAD Drawings." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01145.

6

Lu, Xiaofeng, Xiaoyu Zhang, and Pietro Lio. "GAT-DNS: DNS Multivariate Time Series Prediction Model Based on Graph Attention Network." In WWW '23: The ACM Web Conference 2023. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3543873.3587329.

7

Liang, Shuo, Wei Wei, Xian-Ling Mao, Fei Wang, and Zhiyong He. "BiSyn-GAT+: Bi-Syntax Aware Graph Attention Network for Aspect-based Sentiment Analysis." In Findings of the Association for Computational Linguistics: ACL 2022. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.findings-acl.144.

8

Mahto, Dinesh Kumar, Vikash Kumar Saini, Akhilesh Mathur, Rajesh Kumar, and Rupesh Yadav. "GAT- DNet: High Fidelity Graph Attention Network for Distribution Optimal Power Flow Pursuit." In 2023 9th IEEE India International Conference on Power Electronics (IICPE). IEEE, 2023. http://dx.doi.org/10.1109/iicpe60303.2023.10474885.

9

Demir, Andac, Toshiaki Koike-Akino, Ye Wang, and Deniz Erdogmus. "EEG-GAT: Graph Attention Networks for Classification of Electroencephalogram (EEG) Signals." In 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2022. http://dx.doi.org/10.1109/embc48229.2022.9871984.

10

Zhang, Tianqi. "VP-GAT: vector prior graph attention network for automated segment labeling of coronary arteries." In Fourteenth International Conference on Graphics and Image Processing (ICGIP 2022), edited by Liang Xiao and Jianru Xue. SPIE, 2023. http://dx.doi.org/10.1117/12.2680419.

