Academic literature on the topic "Learning on graphs"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Learning on graphs".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Learning on graphs"

1

Huang, Xueqin, Xianqiang Zhu, Xiang Xu, Qianzhen Zhang, and Ailin Liang. "Parallel Learning of Dynamics in Complex Systems". Systems 10, no. 6 (December 15, 2022): 259. http://dx.doi.org/10.3390/systems10060259.

Full text
Abstract
Dynamics always exist in complex systems. Graphs (complex networks) are a mathematical form for describing a complex system abstractly. Dynamics can be learned efficiently from the structure and dynamics state of a graph. Learning the dynamics in graphs plays an important role in predicting and controlling complex systems. Most of the methods for learning dynamics in graphs run slowly in large graphs. The complexity of the large graph’s structure and its nonlinear dynamics aggravate this problem. To overcome these difficulties, we propose a general framework with two novel methods in this paper, the Dynamics-METIS (D-METIS) and the Partitioned Graph Neural Dynamics Learner (PGNDL). The general framework combines D-METIS and PGNDL to perform tasks for large graphs. D-METIS is a new algorithm that can partition a large graph into multiple subgraphs. D-METIS innovatively considers the dynamic changes in the graph. PGNDL is a new parallel model that consists of ordinary differential equation systems and graph neural networks (GNNs). It can quickly learn the dynamics of subgraphs in parallel. In this framework, D-METIS provides PGNDL with partitioned subgraphs, and PGNDL can solve the tasks of interpolation and extrapolation prediction. We exhibit the universality and superiority of our framework on four kinds of graphs with three kinds of dynamics through an experiment.
2

Zeng, Jiaqi, and Pengtao Xie. "Contrastive Self-supervised Learning for Graph Classification". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10824–32. http://dx.doi.org/10.1609/aaai.v35i12.17293.

Full text
Abstract
Graph classification is a widely studied problem and has broad applications. In many real-world problems, the number of labeled graphs available for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose two approaches based on contrastive self-supervised learning (CSSL) to alleviate overfitting. In the first approach, we use CSSL to pretrain graph encoders on widely-available unlabeled graphs without relying on human-provided labels, then finetune the pretrained encoders on labeled graphs. In the second approach, we develop a regularizer based on CSSL, and solve the supervised classification task and the unsupervised CSSL task simultaneously. To perform CSSL on graphs, given a collection of original graphs, we perform data augmentation to create augmented graphs out of the original graphs. An augmented graph is created by consecutively applying a sequence of graph alteration operations. A contrastive loss is defined to learn graph encoders by judging whether two augmented graphs are from the same original graph. Experiments on various graph classification datasets demonstrate the effectiveness of our proposed methods. The code is available at https://github.com/UCSD-AI4H/GraphSSL.
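For a concrete picture of the contrastive objective this abstract describes, the following minimal sketch (not the authors' released code, which is linked above) shows an NT-Xent-style loss over graph-level embeddings produced by an encoder; the function name, batch layout, and temperature value are illustrative assumptions.

```python
# Hedged sketch: a contrastive loss that judges whether two augmented graphs
# come from the same original graph. Rows 2i and 2i+1 of z are assumed to be
# embeddings of two augmentations of graph i; names and values are illustrative.
import torch
import torch.nn.functional as F

def contrastive_graph_loss(z: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z: (2N, d) graph embeddings; rows 2i and 2i+1 are views of graph i."""
    z = F.normalize(z, dim=1)                 # compare in cosine-similarity space
    sim = z @ z.t() / temperature             # (2N, 2N) pairwise similarities
    sim.fill_diagonal_(float("-inf"))         # a view is never its own positive
    target = torch.arange(z.size(0)) ^ 1      # positive partner: 0<->1, 2<->3, ...
    return F.cross_entropy(sim, target)

# toy usage: 4 original graphs -> 8 augmented views with 16-dim encoder outputs
loss = contrastive_graph_loss(torch.randn(8, 16))
```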
3

Fionda, Valeria, and Giuseppe Pirrò. "Learning Triple Embeddings from Knowledge Graphs". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3874–81. http://dx.doi.org/10.1609/aaai.v34i04.5800.

Full text
Abstract
Graph embedding techniques allow to learn high-quality feature vectors from graph structures and are useful in a variety of tasks, from node classification to clustering. Existing approaches have only focused on learning feature vectors for the nodes and predicates in a knowledge graph. To the best of our knowledge, none of them has tackled the problem of directly learning triple embeddings. The approaches that are closer to this task have focused on homogeneous graphs involving only one type of edge and obtain edge embeddings by applying some operation (e.g., average) on the embeddings of the endpoint nodes. The goal of this paper is to introduce Triple2Vec, a new technique to directly embed knowledge graph triples. We leverage the idea of line graph of a graph and extend it to the context of knowledge graphs. We introduce an edge weighting mechanism for the line graph based on semantic proximity. Embeddings are finally generated by adopting the SkipGram model, where sentences are replaced with graph walks. We evaluate our approach on different real-world knowledge graphs and compared it with related work. We also show an application of triple embeddings in the context of user-item recommendations.
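A minimal sketch of the line-graph idea summarised above, under the assumption of a toy triple list: every triple becomes a node, and two triples are linked when they share an entity. Triple2Vec's edge weighting by semantic proximity and the SkipGram step over walks are omitted; all names are illustrative.

```python
# Hedged sketch of a knowledge-graph "line graph": triples are nodes, and two
# triples are adjacent when they share an entity. Walks over this graph would
# then be fed to a SkipGram model, as the abstract describes.
from itertools import combinations

triples = [
    ("alice", "knows", "bob"),
    ("bob", "worksAt", "acme"),
    ("alice", "worksAt", "acme"),
]

def triple_line_graph(kg):
    nodes = list(kg)
    edges = [
        (s, t) for s, t in combinations(nodes, 2)
        if {s[0], s[2]} & {t[0], t[2]}          # shared head or tail entity
    ]
    return nodes, edges

nodes, edges = triple_line_graph(triples)       # walks over (nodes, edges) feed SkipGram
```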
4

Zainullina, R. "Automatic Graph Generation for E-learning Systems". Bulletin of Science and Practice 7, no. 6 (June 15, 2021): 12–16. http://dx.doi.org/10.33619/2414-2948/67/01.

Full text
Abstract
The subject of the research is one of the ways of updating modern training systems for solving problems of graph theory, namely, automatic generation of graphs. This approach will reduce the load on the training system database and generate tasks for the user in real-time without updating the bank of tasks. In the course of the work, the advantages and disadvantages of this approach were identified. The most suitable method for the implementation of the research was chosen to represent graphs in electronic computers. The requirements for generated graphs and possible ways of implementing these requirements are identified and substantiated. Namely: in the implemented program, simple connected undirected graphs will be generated. We considered an important detail in working with graphs — graph traversal using the “Depth (width) search” algorithm, which in this task is used to check the graph for connectivity. The result of the work is presented — a software implementation of the graph generation algorithm in the C# programming language. In it, graphs are represented by an adjacency list, generated randomly, and checked for connectivity using the DFS (Depth First Search) function. DFS is a software implementation of the Depth First Search algorithm.
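The generation procedure described above can be illustrated with a short sketch. The article implements it in C#; the Python version below is only an approximation with illustrative names: it builds a random simple undirected graph as an adjacency list and resamples until a depth-first search confirms connectivity.

```python
# Hedged sketch: generate a random simple connected undirected graph, stored as
# an adjacency list, accepting a candidate only if DFS reaches every vertex.
import random

def random_graph(n: int, edge_prob: float) -> dict[int, set[int]]:
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):            # simple graph: no loops, no multi-edges
            if random.random() < edge_prob:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def is_connected(adj: dict[int, set[int]]) -> bool:
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:                             # iterative depth-first search
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(adj)

def generate_connected_graph(n: int = 8, edge_prob: float = 0.3) -> dict[int, set[int]]:
    while True:                              # resample until the DFS check passes (fine for small n)
        g = random_graph(n, edge_prob)
        if is_connected(g):
            return g
```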
5

Hu, Shengze, Weixin Zeng, Pengfei Zhang, and Jiuyang Tang. "Neural Graph Similarity Computation with Contrastive Learning". Applied Sciences 12, no. 15 (July 29, 2022): 7668. http://dx.doi.org/10.3390/app12157668.

Full text
Abstract
Computing the similarity between graphs is a longstanding and challenging problem with many real-world applications. Recent years have witnessed a rapid increase in neural-network-based methods, which project graphs into embedding space and devise end-to-end frameworks to learn to estimate graph similarity. Nevertheless, these solutions usually design complicated networks to capture the fine-grained interactions between graphs, and hence have low efficiency. Additionally, they rely on labeled data for training the neural networks and overlook the useful information hidden in the graphs themselves. To address the aforementioned issues, in this work, we put forward a contrastive neural graph similarity learning framework, Conga. Specifically, we utilize vanilla graph convolutional networks to generate the graph representations and capture the cross-graph interactions via a simple multilayer perceptron. We further devise an unsupervised contrastive loss to discriminate the graph embeddings and guide the training process by learning more expressive entity representations. Extensive experiment results on public datasets validate that our proposal has more robust performance and higher efficiency compared with state-of-the-art methods.
6

Xiang, Xintao, Tiancheng Huang, and Donglin Wang. "Learning to Evolve on Dynamic Graphs (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13091–92. http://dx.doi.org/10.1609/aaai.v36i11.21682.

Full text
Abstract
Representation learning in dynamic graphs is a challenging problem because the topology of graph and node features vary at different time. This requires the model to be able to effectively capture both graph topology information and temporal information. Most existing works are built on recurrent neural networks (RNNs), which are used to exact temporal information of dynamic graphs, and thus they inherit the same drawbacks of RNNs. In this paper, we propose Learning to Evolve on Dynamic Graphs (LEDG) - a novel algorithm that jointly learns graph information and time information. Specifically, our approach utilizes gradient-based meta-learning to learn updating strategies that have better generalization ability than RNN on snapshots. It is model-agnostic and thus can train any message passing based graph neural network (GNN) on dynamic graphs. To enhance the representation power, we disentangle the embeddings into time embeddings and graph intrinsic embeddings. We conduct experiments on various datasets and down-stream tasks, and the experimental results validate the effectiveness of our method.
7

Kim, Jaehyeon, Sejong Lee, Yushin Kim, Seyoung Ahn, and Sunghyun Cho. "Graph Learning-Based Blockchain Phishing Account Detection with a Heterogeneous Transaction Graph". Sensors 23, no. 1 (January 1, 2023): 463. http://dx.doi.org/10.3390/s23010463.

Full text
Abstract
Recently, cybercrimes that exploit the anonymity of blockchain are increasing. They steal blockchain users’ assets, threaten the network’s reliability, and destabilize the blockchain network. Therefore, it is necessary to detect blockchain cybercriminal accounts to protect users’ assets and sustain the blockchain ecosystem. Many studies have been conducted to detect cybercriminal accounts in the blockchain network. They represented blockchain transaction records as homogeneous transaction graphs that have a multi-edge. They also adopted graph learning algorithms to analyze transaction graphs. However, most graph learning algorithms are not efficient in multi-edge graphs, and homogeneous graphs ignore the heterogeneity of the blockchain network. In this paper, we propose a novel heterogeneous graph structure called an account-transaction graph, ATGraph. ATGraph represents a multi-edge as single edges by considering transactions as nodes. It allows graph learning more efficiently by eliminating multi-edges. Moreover, we compare the performance of ATGraph with homogeneous transaction graphs in various graph learning algorithms. The experimental results demonstrate that the detection performance using ATGraph as input outperforms that using homogeneous graphs as the input by up to 0.2 AUROC.
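A hedged sketch of the graph rewriting idea behind ATGraph as the abstract describes it: each transaction (a potential parallel edge between two accounts) becomes its own node connected to the sending and receiving accounts, so the resulting heterogeneous graph has no multi-edges. The field and relation names below are assumptions for illustration only.

```python
# Hedged sketch: rewrite a multi-edge account-to-account transaction graph into
# an account-transaction graph by promoting every transaction to a node.
from collections import defaultdict

transactions = [
    {"id": "t1", "from": "acc_a", "to": "acc_b", "value": 1.2},
    {"id": "t2", "from": "acc_a", "to": "acc_b", "value": 0.4},   # parallel edge in the original graph
    {"id": "t3", "from": "acc_b", "to": "acc_c", "value": 2.0},
]

def to_account_transaction_graph(txs):
    """Return heterogeneous edge lists: account -> transaction -> account."""
    edges = defaultdict(list)
    for tx in txs:
        edges[("account", "sends", "transaction")].append((tx["from"], tx["id"]))
        edges[("transaction", "receives", "account")].append((tx["id"], tx["to"]))
    return dict(edges)

hetero_edges = to_account_transaction_graph(transactions)
```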
8

Ma, Yunpu, and Volker Tresp. "Quantum Machine Learning Algorithm for Knowledge Graphs". ACM Transactions on Quantum Computing 2, no. 3 (September 30, 2021): 1–28. http://dx.doi.org/10.1145/3467982.

Full text
Abstract
Semantic knowledge graphs are large-scale triple-oriented databases for knowledge representation and reasoning. Implicit knowledge can be inferred by modeling the tensor representations generated from knowledge graphs. However, as the sizes of knowledge graphs continue to grow, classical modeling becomes increasingly computationally resource intensive. This article investigates how to capitalize on quantum resources to accelerate the modeling of knowledge graphs. In particular, we propose the first quantum machine learning algorithm for inference on tensorized data, i.e., on knowledge graphs. Since most tensor problems are NP-hard [18], it is challenging to devise quantum algorithms to support the inference task. We simplify the modeling task by making the plausible assumption that the tensor representation of a knowledge graph can be approximated by its low-rank tensor singular value decomposition, which is verified by our experiments. The proposed sampling-based quantum algorithm achieves speedup with a polylogarithmic runtime in the dimension of knowledge graph tensor.
9

Ma, Guixiang, Nesreen K. Ahmed, Theodore L. Willke, and Philip S. Yu. "Deep graph similarity learning: a survey". Data Mining and Knowledge Discovery 35, no. 3 (March 24, 2021): 688–725. http://dx.doi.org/10.1007/s10618-020-00733-5.

Full text
Abstract
In many domains where data are represented as graphs, learning a similarity metric among graphs is considered a key problem, which can further facilitate various learning tasks, such as classification, clustering, and similarity search. Recently, there has been an increasing interest in deep graph similarity learning, where the key idea is to learn a deep learning model that maps input graphs to a target space such that the distance in the target space approximates the structural distance in the input space. Here, we provide a comprehensive review of the existing literature of deep graph similarity learning. We propose a systematic taxonomy for the methods and applications. Finally, we discuss the challenges and future directions for this problem.
10

Lee, Namkyeong, Junseok Lee, and Chanyoung Park. "Augmentation-Free Self-Supervised Learning on Graphs". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7372–80. http://dx.doi.org/10.1609/aaai.v36i7.20700.

Full text
Abstract
Inspired by the recent success of self-supervised methods applied on images, self-supervised learning on graph structured data has seen rapid growth especially centered on augmentation-based contrastive methods. However, we argue that without carefully designed augmentation techniques, augmentations on graphs may behave arbitrarily in that the underlying semantics of graphs can drastically change. As a consequence, the performance of existing augmentation-based methods is highly dependent on the choice of augmentation scheme, i.e., augmentation hyperparameters and combinations of augmentation. In this paper, we propose a novel augmentation-free self-supervised learning framework for graphs, named AFGRL. Specifically, we generate an alternative view of a graph by discovering nodes that share the local structural information and the global semantics with the graph. Extensive experiments towards various node-level tasks, i.e., node classification, clustering, and similarity search on various real-world datasets demonstrate the superiority of AFGRL. The source code for AFGRL is available at https://github.com/Namkyeong/AFGRL.

Theses on the topic "Learning on graphs"

1

Vitale, F. "FAST LEARNING ON GRAPHS". Doctoral thesis, Università degli Studi di Milano, 2011. http://hdl.handle.net/2434/155500.

Full text
Abstract
We carry out a systematic study of classification problems on networked data, presenting novel techniques with good performance both in theory and in practice. We assess the power of node classification based on class-linkage information only. In particular, we propose four new algorithms that exploit the homophilic bias (linked entities tend to belong to the same class) in different ways. The set of the algorithms we present covers diverse practical needs: some of them operate in an active transductive setting and others in an on-line transductive setting. A third group works within an explorative protocol, in which the vertices of an unknown graph are progressively revealed to the learner in an on-line fashion. Within the mistake bound learning model, for each of our algorithms we provide a rigorous theoretical analysis, together with an interpretation of the obtained performance bounds. We also design adversarial strategies achieving matching lower bounds. In particular, we prove optimality for all input graphs and for all fixed regularity values of suitable labeling complexity measures. We also analyze the computational requirements of our methods, showing that our algorithms can handle very large data sets. In the case of the on-line protocol, for which we exhibit an optimal algorithm with constant amortized time per prediction, we validate our theoretical results by carrying out experiments on real-world datasets.
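The homophilic bias the thesis exploits can be illustrated with a toy online transductive protocol. The sketch below is not one of the four algorithms analysed in the thesis, only an intuition pump with illustrative names: each queried vertex is predicted by the majority label among its already-revealed neighbours, and its true label is disclosed after the prediction.

```python
# Hedged sketch of homophily-driven online node classification: predict by the
# majority label of already-labelled neighbours, then reveal the true label.
from collections import Counter

def online_predict(adj, query_order, true_labels, default=0):
    known, mistakes = {}, 0
    for v in query_order:
        votes = Counter(known[u] for u in adj[v] if u in known)
        guess = votes.most_common(1)[0][0] if votes else default
        mistakes += int(guess != true_labels[v])
        known[v] = true_labels[v]            # label revealed after the prediction
    return mistakes

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(online_predict(adj, [0, 1, 2, 3], {0: 0, 1: 0, 2: 0, 3: 1}))   # -> 1 mistake
```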
2

Irniger, Christophe-André. "Graph matching: filtering databases of graphs using machine learning techniques". Berlin: Aka, 2005. http://deposit.ddb.de/cgi-bin/dokserv?id=2677754&prov=M&dok_var=1&dok_ext=htm.

Full text
3

Simonovsky, Martin. "Deep learning on attributed graphs". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1133/document.

Full text
Abstract
Graph is a powerful concept for representation of relations between pairs of entities. Data with underlying graph structure can be found across many disciplines, describing chemical compounds, surfaces of three-dimensional models, social interactions, or knowledge bases, to name only a few. There is a natural desire for understanding such data better. Deep learning (DL) has achieved significant breakthroughs in a variety of machine learning tasks in recent years, especially where data is structured on a grid, such as in text, speech, or image understanding. However, surprisingly little has been done to explore the applicability of DL on graph-structured data directly. The goal of this thesis is to investigate architectures for DL on graphs and study how to transfer, adapt or generalize concepts working well on sequential and image data to this domain. We concentrate on two important primitives: embedding graphs or their nodes into a continuous vector space representation (encoding) and, conversely, generating graphs from such vectors back (decoding). To that end, we make the following contributions. First, we introduce Edge-Conditioned Convolutions (ECC), a convolution-like operation on graphs performed in the spatial domain where filters are dynamically generated based on edge attributes. The method is used to encode graphs with arbitrary and varying structure. Second, we propose SuperPoint Graph, an intermediate point cloud representation with rich edge attributes encoding the contextual relationship between object parts. Based on this representation, ECC is employed to segment large-scale point clouds without major sacrifice in fine details. Third, we present GraphVAE, a graph generator allowing to decode graphs with variable but upper-bounded number of nodes making use of approximate graph matching for aligning the predictions of an autoencoder with its inputs. The method is applied to the task of molecule generation.
4

Rosar Kós Lassance, Carlos Eduardo. "Graphs for deep learning representations". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0204.

Full text
Abstract
In recent years, Deep Learning methods have achieved state of the art performance in a vast range of machine learning tasks, including image classification and multilingual automatic text translation. These architectures are trained to solve machine learning tasks in an end-to-end fashion. In order to reach top-tier performance, these architectures often require a very large number of trainable parameters. There are multiple undesirable consequences, and in order to tackle these issues, it is desired to be able to open the black boxes of deep learning architectures. Problematically, doing so is difficult due to the high dimensionality of representations and the stochasticity of the training process. In this thesis, we investigate these architectures by introducing a graph formalism based on the recent advances in Graph Signal Processing (GSP). Namely, we use graphs to represent the latent spaces of deep neural networks. We showcase that this graph formalism allows us to answer various questions including: ensuring generalization abilities, reducing the amount of arbitrary choices in the design of the learning process, improving robustness to small perturbations added to the inputs, and reducing computational complexity
5

Ghiasnezhad, Omran Pouya. "Rule Learning in Knowledge Graphs". Thesis, Griffith University, 2018. http://hdl.handle.net/10072/382680.

Full text
Abstract
With recent advancements in knowledge extraction and knowledge management systems, an enormous number of knowledge bases have been constructed, such as YAGO and Wikidata. These automatically built knowledge bases, which contain millions of entities and their relations, have been stored in graph-based schemas, and thus are usually referred to as knowledge graphs (KGs). Since KGs have been built based on the limited available data, they are far from complete. However, learning frequent patterns in the form of logical rules from these incomplete KGs has two main advantages. First, by applying the learned rules, we can infer new facts, so we could complete the KGs. Second, the rules are stand-alone knowledge which express valuable insight about the data. However, learning rules from KGs in relation to the real-world scenarios imposes several challenges. First, due to the vast size of real-world KGs, developing a rule learning method is challenging. In fact, existing methods are not scalable for learning first-order rules, while various optimisation strategies are used such as sampling and language bias (i.e., restrictions on the form of rules). Second, applying the learned rules to the vast KG and inferring new facts is another difficult issue. Learned rules usually contain a lot of noise and adding new facts can cause inconsistency of KGs. Third, it is useful but non-trivial to extend an existing method of rule learning to the case of stream KGs. Fourth, in many data repositories, the facts are augmented with time stamps. In this case, we face a stream of data (KGs). Considering time as a new dimension of data imposes some challenges to the rule learning process. It would be useful to construct a time-sensitive model from the stream of data and apply the obtained model to stream KGs. Last, the density of information in a KG is varied. Although the size of a KG is vast, it contains a limited amount of information for some relations. Consequently, that part of KG is sparse. Learning a set of accurate and informative rules regarding the sparse part of a KG is challenging due to the lack of sufficient training data. In this thesis, we investigate these research problems and present our methods for rule learning in various scenarios. We have first developed a new approach, named Rule Learning via Learning Representation (RLvLR), to learning rules from KGs by using the technique of embedding in representation learning together with a new sampling method. RLvLR learns first-order rules from vast KGs by exploring the embedding space. It can handle some large KGs that cannot be handled by existing rule learners efficiently, due to a novel sampling method. To improve the performance of RLvLR for handling sparse data, we propose a transfer learning method, Transfer Rule Learner (TRL), for rule learning. Based on a similarity characterised by the embedding representation, our method is able to select the most relevant KGs and rules to transfer from a pool of KGs whose rules have been obtained. We have also adapted RLvLR to handle stream KGs instead of static KGs. Then a system called StreamLearner is developed for learning rules from stream KGs. These proposed methods can only learn so-called closed path rules, which is a proper subset of Horn rules. Thus, we have also developed a transfer rule learner (T-LPAD) that learns the structure of logic programs with annotated disjunctions. T-LPAD is created by employing transfer learning to explore the space of rules' structures more efficiently.
Various experiments have been conducted to test and validate the proposed methods. Our experimental results show that our methods outperform state-of-the-art methods in many ways.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
6

Fan, Shuangfei. "Deep Representation Learning on Labeled Graphs". Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96596.

Full text
Abstract
We introduce recurrent collective classification (RCC), a variant of ICA analogous to recurrent neural network prediction. RCC accommodates any differentiable local classifier and relational feature functions. We provide gradient-based strategies for optimizing over model parameters to more directly minimize the loss function. In our experiments, this direct loss minimization translates to improved accuracy and robustness on real network data. We demonstrate the robustness of RCC in settings where local classification is very noisy, settings that are particularly challenging for ICA. As a new way to train generative models, generative adversarial networks (GANs) have achieved considerable success in image generation, and this framework has also recently been applied to data with graph structures. We identify the drawbacks of existing deep frameworks for generating graphs, and we propose labeled-graph generative adversarial networks (LGGAN) to train deep generative models for graph-structured data with node labels. We test the approach on various types of graph datasets, such as collections of citation networks and protein graphs. Experiment results show that our model can generate diverse labeled graphs that match the structural characteristics of the training data and outperforms all baselines in terms of quality, generality, and scalability. To further evaluate the quality of the generated graphs, we apply it to a downstream task for graph classification, and the results show that LGGAN can better capture the important aspects of the graph structure.
Doctor of Philosophy
Graphs are one of the most important and powerful data structures for conveying the complex and correlated information among data points. In this research, we aim to provide more robust and accurate models for some graph specific tasks, such as collective classification and graph generation, by designing deep learning models to learn better task-specific representations for graphs. First, we studied the collective classification problem in graphs and proposed recurrent collective classification, a variant of the iterative classification algorithm that is more robust to situations where predictions are noisy or inaccurate. Then we studied the problem of graph generation using deep generative models. We first proposed a deep generative model using the GAN framework that generates labeled graphs. Then in order to support more applications and also get more control over the generated graphs, we extended the problem of graph generation to conditional graph generation which can then be applied to various applications for modeling graph evolution and transformation.
7

Rommedahl, David, and Martin Lindström. "Learning Sparse Graphs for Data Prediction". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-295623.

Full text
Abstract
Graph structures can often be used to describe complex data sets. In many applications, the graph structure is not known but must be inferred from data. Furthermore, real-world data is often naturally described by sparse graphs. In this project, we have aimed at recreating the results described in previous work, namely to learn a graph that can be used for prediction using an ℓ1-penalised LASSO approach. We also propose different methods for learning and evaluating the graph. We have evaluated the methods on synthetic data and real-world Swedish temperature data. The results show that we are unable to recreate the results of the previous research team, but we manage to learn sparse graphs that could be used for prediction. Further work is needed to verify our results.
Bachelor's thesis in electrical engineering, 2020, KTH, Stockholm
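The ℓ1-penalised approach mentioned in the abstract can be sketched with a neighbourhood-style LASSO regression. This is a simplified stand-in for the authors' pipeline, not their implementation; the alpha value and data below are illustrative: each node's signal is regressed on all other nodes, and the non-zero coefficients are read off as edge weights used for prediction.

```python
# Hedged sketch: learn a sparse graph by LASSO-regressing each node on the rest.
import numpy as np
from sklearn.linear_model import Lasso

def learn_sparse_graph(X: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """X: (samples, nodes) signal matrix. Returns a (nodes, nodes) weight matrix."""
    n_nodes = X.shape[1]
    W = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        others = np.delete(np.arange(n_nodes), i)
        model = Lasso(alpha=alpha).fit(X[:, others], X[:, i])
        W[i, others] = model.coef_            # sparse row of predictors for node i
    return W

X = np.random.default_rng(0).normal(size=(200, 6))   # toy signals, 200 samples over 6 nodes
W = learn_sparse_graph(X)
prediction_for_node_0 = X @ W[0]                      # predict node 0 from its learned neighbours
```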
8

Xu, Keyulu. "Graph structures, random walks, and all that : learning graphs with jumping knowledge networks". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121660.

Full text
Abstract
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 51-54).
Graph representation learning aims to extract high-level features from the graph structures and node features, in order to make predictions about the nodes and the graphs. Applications include predicting chemical properties of drugs, community detection in social networks, and modeling interactions in physical systems. Recent deep learning approaches for graph representation learning, namely Graph Neural Networks (GNNs), follow a neighborhood aggregation procedure, where the representation vector of a node is computed by recursively aggregating and transforming feature vectors of its neighboring nodes. We analyze some important properties of these models, and propose a strategy to overcome the limitations. In particular, the range of neighboring nodes that a node's representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. To adapt to local neighborhood properties and tasks, we explore an architecture - jumping knowledge (JK) networks that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance. Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models' performance.
by Keyulu Xu.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
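A small sketch of the jumping-knowledge aggregation summarised in the abstract above: representations from every GNN layer are kept and combined at the end, here by concatenation or element-wise max (the thesis also studies an LSTM-attention aggregator, omitted here). Tensor shapes and names are illustrative assumptions.

```python
# Hedged sketch of JK-style layer aggregation over per-layer node representations.
import torch

def jumping_knowledge(layer_outputs: list[torch.Tensor], mode: str = "max") -> torch.Tensor:
    """layer_outputs: list of (num_nodes, d) tensors, one per GNN layer."""
    if mode == "concat":
        return torch.cat(layer_outputs, dim=1)            # (nodes, layers * d)
    if mode == "max":
        return torch.stack(layer_outputs).max(dim=0).values   # element-wise max, (nodes, d)
    raise ValueError(f"unknown mode: {mode}")

# toy usage: three layers of 16-dim representations for 10 nodes
h = [torch.randn(10, 16) for _ in range(3)]
final = jumping_knowledge(h, mode="max")
```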
9

Freeman, Guy. "Learning and predicting with chain event graphs". Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/4529/.

Full text
Abstract
Graphical models provide a very promising avenue for making sense of large, complex datasets. The most popular graphical models in use at the moment are Bayesian networks (BNs). This thesis shows, however, they are not always ideal factorisations of a system. Instead, I advocate for the use of a relatively new graphical model, the chain event graph (CEG), that is based on event trees. Event trees directly represent graphically the event space of a system. Chain event graphs reduce their potentially huge dimensionality by taking into account identical probability distributions on some of the event tree’s subtrees, with the added benefits of showing the conditional independence relationships of the system — one of the advantages of the Bayesian network representation that event trees lack — and implementation of causal hypotheses that is just as easy, and arguably more natural, than is the case with Bayesian networks, with a larger domain of implementation using purely graphical means. The trade-off for this greater expressive power, however, is that model specification and selection are much more difficult to undertake with the larger set of possible models for a given set of variables. My thesis is the first exposition of how to learn CEGs. I demonstrate that not only is conjugate (and hence quick) learning of CEGs possible, but I characterise priors that imply conjugate updating based on very reasonable assumptions that also have direct Bayesian network analogues. By re-casting CEGs as partition models, I show how established partition learning algorithms can be adapted for the task of learning CEGs. I then develop a robust yet flexible prediction machine based on CEGs for any discrete multivariate time series — the dynamic CEG model — which combines the power of CEGs, multi-process and steady modelling, lattice theory and Occam’s razor. This is also an exact method that produces reliable predictions without requiring much a priori modelling. I then demonstrate how easily causal analysis can be implemented with this model class that can express a wide variety of causal hypotheses. I end with an application of these techniques to real educational data, drawing inferences that would not have been possible simply using BNs.
10

Pasteris, S. U. "Efficient algorithms for online learning over graphs". Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1516210/.

Full text
Abstract
In this thesis we consider the problem of online learning with labelled graphs, in particular designing algorithms that can perform this problem quickly and with low memory requirements. We consider the tasks of Classification (in which we are asked to predict the labels of vertices) and Similarity Prediction (in which we are asked to predict whether two given vertices have the same label). The first half of the thesis considers non-probabilistic online learning, where there is no probability distribution on the labelling and we bound the number of mistakes of an algorithm by a function of the labelling's complexity (i.e. its "naturalness"), often the cut-size. The second half of the thesis considers probabilistic machine learning in which we have a known probability distribution on the labelling. Before considering probabilistic online learning we first analyse the junction tree algorithm, on which we base our online algorithms, and design a new version of it, superior to the otherwise current state of the art. Explicitly, the novel contributions of this thesis are as follows:
• A new algorithm for online prediction of the labelling of a graph which has better performance than previous algorithms on certain graph and labelling families.
• Two algorithms for online similarity prediction on a graph (a novel problem solved in this thesis). One performs very well whilst the other not so well, but the latter runs exponentially faster.
• A new (better than before, in terms of time and space complexity) state-of-the-art junction tree algorithm, as well as an application of it to the problem of online learning in an Ising model.
• An algorithm that, in linear time, finds the optimal junction tree for online inference in tree-structured Ising models, the resulting online junction tree algorithm being far superior to the previous state of the art.
All claims in this thesis are supported by mathematical proofs.

Books on the topic "Learning on graphs"

1

Irniger, Christophe-André Mario. Graph matching: Filtering databases of graphs using machine learning techniques. Berlin: AKA, 2005.

Find full text
2

Let's eat lunch: Learning about picture graphs. New York: Rosen Classroom Books & Materials, 2004.

Find full text
3

Swan, Malcolm. Learning the language of functions and graphs. [Nottingham]: [Shell Centre for Mathematical Education, University of Nottingham], 1988.

Find full text
4

On the trail with Lewis and Clark: Learning to use line graphs. New York: Rosen Pub. Group, 2004.

Find full text
5

Fung, P. W. Designing an intelligent case-based learning environment in physics with conceptual graphs. Manchester: UMIST, 1996.

Find full text
6

Bayesian networks and decision graphs. New York: Springer, 2001.

Find full text
7

Sudre, Carole H., Hamid Fehri, Tal Arbel, Christian F. Baumgartner, Adrian Dalca, Ryutaro Tanno, Koen Van Leemput et al., eds. Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60365-6.

Full text
8

Hamilton, William L. Graph Representation Learning. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-01588-5.

Full text
9

Subramanya, Amarnag, and Partha Pratim Talukdar. Graph-Based Semi-Supervised Learning. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-031-01571-7.

Full text
10

Zhang, Daoqiang, Luping Zhou, Biao Jie, and Mingxia Liu, eds. Graph Learning in Medical Imaging. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-35817-4.

Full text

Book chapters on the topic "Learning on graphs"

1

Zhang, Xinhua, Novi Quadrianto, Kristian Kersting, Zhao Xu, Yaakov Engel, Claude Sammut, Mark Reid et al. "Graphs". In Encyclopedia of Machine Learning, 479–82. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_352.

Full text
2

Webb, Geoffrey I., Johannes Fürnkranz, Geoffrey Hinton, Claude Sammut, Joerg Sander et al. "Directed Graphs". In Encyclopedia of Machine Learning, 279. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_218.

Full text
3

Cesa-Bianchi, Nicolò, Claudio Gentile, and Fabio Vitale. "Learning Unknown Graphs". In Lecture Notes in Computer Science, 110–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04414-4_13.

Full text
4

Jensen, Tommy R. "Graphs". In Encyclopedia of Machine Learning and Data Mining, 592–96. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_352.

Full text
5

Aggarwal, Manasvi, and M. N. Murty. "Embedding Graphs". In Machine Learning in Social Networks, 89–104. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-33-4022-0_5.

Full text
6

Webb, Cerian Ruth, and Mirela Domijan. "Graphs and Plots". In Learning Materials in Biosciences, 85–104. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21337-4_7.

Full text
7

Michelucci, Umberto. "Computational Graphs and TensorFlow". In Applied Deep Learning, 1–29. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3790-8_1.

Full text
8

Liang, De-Ming, and Yu-Feng Li. "Learning Safe Graph Construction from Multiple Graphs". In Communications in Computer and Information Science, 41–54. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2122-1_4.

Full text
9

Baxter, Nancy, Ed Dubinsky, and Gary Levin. "Relations and Graphs". In Learning Discrete Mathematics with ISETL, 363–404. New York, NY: Springer New York, 1989. http://dx.doi.org/10.1007/978-1-4612-3592-7_8.

Full text
10

Jappy, Pascal, and Richard Nock. "PAC learning conceptual graphs". In Conceptual Structures: Theory, Tools and Applications, 303–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0054923.

Full text

Conference papers on the topic "Learning on graphs"

1

Zhang, Ziwei, Xin Wang, and Wenwu Zhu. "Automated Machine Learning on Graphs: A Survey". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/637.

Full text
Abstract
Machine learning on graphs has been extensively studied in both academic and industry. However, as the literature on graph learning booms with a vast number of emerging methods and techniques, it becomes increasingly difficult to manually design the optimal machine learning algorithm for different graph-related tasks. To solve this critical challenge, automated machine learning (AutoML) on graphs which combines the strength of graph machine learning and AutoML together, is gaining attention from the research community. Therefore, we comprehensively survey AutoML on graphs in this paper, primarily focusing on hyper-parameter optimization (HPO) and neural architecture search (NAS) for graph machine learning. We further overview libraries related to automated graph machine learning and in-depth discuss AutoGL, the first dedicated open-source library for AutoML on graphs. In the end, we share our insights on future research directions for automated graph machine learning. This paper is the first systematic and comprehensive review of automated machine learning on graphs to the best of our knowledge.
2

Kim, Seoyoon, Seongjun Yun, and Jaewoo Kang. "DyGRAIN: An Incremental Learning Framework for Dynamic Graphs". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/438.

Full text
Abstract
Graph-structured data provide a powerful representation of complex relations or interactions. Many variants of graph neural networks (GNNs) have emerged to learn graph-structured data where underlying graphs are static, although graphs in various real-world applications are dynamic (e.g., evolving structure). To consider the dynamic nature that a graph changes over time, the need for applying incremental learning (i.e., continual learning or lifelong learning) to the graph domain has been emphasized. However, unlike incremental learning on Euclidean data, graph-structured data contains dependency between the existing nodes and newly appeared nodes, resulting in the phenomenon that receptive fields of existing nodes vary by new inputs (e.g., nodes and edges). In this paper, we raise a crucial challenge of incremental learning for dynamic graphs as time-varying receptive fields, and propose a novel incremental learning framework, DyGRAIN, to mitigate time-varying receptive fields and catastrophic forgetting. Specifically, our proposed method incrementally learns dynamic graph representations by reflecting the influential change in receptive fields of existing nodes and maintaining previous knowledge of informational nodes prone to be forgotten. Our experiments on large-scale graph datasets demonstrate that our proposed method improves the performance by effectively capturing pivotal nodes and preventing catastrophic forgetting.
3

Das, Bishwadeep, and Elvin Isufi. "Graph Filtering Over Expanding Graphs". In 2022 IEEE Data Science and Learning Workshop (DSLW). IEEE, 2022. http://dx.doi.org/10.1109/dslw53931.2022.9820066.

Full text
4

Barros, Claudio D. T., Daniel N. R. da Silva, and Fabio A. M. Porto. "Machine Learning on Graph-Structured Data". In Anais Estendidos do Simpósio Brasileiro de Banco de Dados. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbbd_estendido.2021.18179.

Full text
Abstract
Several real-world complex systems have graph-structured data, including social networks, biological networks, and knowledge graphs. A continuous increase in the quantity and quality of these graphs demands learning models to unlock the potential of this data and execute tasks, including node classification, graph classification, and link prediction. This tutorial presents machine learning on graphs, focusing on how representation learning - from traditional approaches (e.g., matrix factorization and random walks) to deep neural architectures - fosters carrying out those tasks. We also introduce representation learning over dynamic and knowledge graphs. Lastly, we discuss open problems, such as scalability and distributed network embedding systems.
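One of the traditional random-walk approaches the tutorial mentions can be sketched in a few lines. This is a DeepWalk-style illustration rather than the tutorial's own material, and it assumes gensim ≥ 4 for the skip-gram model; the graph and hyperparameters are illustrative.

```python
# Hedged sketch: random-walk node embeddings (DeepWalk-style) via skip-gram.
import random
from gensim.models import Word2Vec

def random_walks(adj, walks_per_node=10, walk_length=8, seed=0):
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in adj:
            walk = [start]
            while len(walk) < walk_length and adj[walk[-1]]:
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append([str(v) for v in walk])     # string tokens for the skip-gram model
    return walks

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # toy undirected graph
model = Word2Vec(random_walks(adj), vector_size=32, window=3, min_count=1, sg=1)
embedding_of_node_0 = model.wv["0"]
```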
5

Zhang, Chuxu, Kaize Ding, Jundong Li, Xiangliang Zhang, Yanfang Ye, Nitesh V. Chawla, and Huan Liu. "Few-Shot Learning on Graphs". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/789.

Full text
Abstract
Graph representation learning has attracted tremendous attention due to its remarkable performance in many real-world applications. However, prevailing supervised graph representation learning models for specific tasks often suffer from label sparsity issue as data labeling is always time and resource consuming. In light of this, few-shot learning on graphs (FSLG), which combines the strengths of graph representation learning and few-shot learning together, has been proposed to tackle the performance degradation in face of limited annotated data challenge. There have been many studies working on FSLG recently. In this paper, we comprehensively survey these work in the form of a series of methods and applications. Specifically, we first introduce FSLG challenges and bases, then categorize and summarize existing work of FSLG in terms of three major graph mining tasks at different granularity levels, i.e., node, edge, and graph. Finally, we share our thoughts on some future research directions of FSLG. The authors of this survey have contributed significantly to the AI literature on FSLG over the last few years.
6

Sokolova, A. A., and A. Yu Syshchikov. "THE APPLICATION OF MACHINE LEARNING METHODS TO PROBLEMS ON GRAPHS". In Aerospace instrumentation and operational technologies. Saint Petersburg State University of Aerospace Instrumentation, 2021. http://dx.doi.org/10.31799/978-5-8088-1554-4-2021-2-321-324.

Full text
Abstract
This article provides an overview of machine learning methods applicable to graph tasks. It also provides a summary of how to represent graphs and a description of the machine learning application to graphs.
7

You, Jiaxuan, Tianyu Du, and Jure Leskovec. "ROLAND: Graph Learning Framework for Dynamic Graphs". In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3534678.3539300.

Full text
8

Yang, Peng, Peilin Zhao, and Xin Gao. "Bandit Online Learning on Graphs via Adaptive Optimization". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/415.

Full text
Abstract
Traditional online learning on graphs adapts graph Laplacian into ridge regression, which may not guarantee reasonable accuracy when the data are adversarially generated. To solve this issue, we exploit an adaptive optimization framework for online classification on graphs. The derived model can achieve a min-max regret under an adversarial mechanism of data generation. To take advantage of the informative labels, we propose an adaptive large-margin update rule, which enjoys a lower regret than the algorithms using error-driven update rules. However, this algorithm assumes that the full information label is provided for each node, which is violated in many practical applications where labeling is expensive and the oracle may only tell whether the prediction is correct or not. To address this issue, we propose a bandit online algorithm on graphs. It derives per-instance confidence region of the prediction, from which the model can be learned adaptively to minimize the online regret. Experiments on benchmark graph datasets show that the proposed bandit algorithm outperforms state-of-the-art competitors, even sometimes beats the algorithms using full information label feedback.
9

Islam, Md Kamrul, Sabeur Aridhi, and Malika Smail-Tabbone. "Appraisal Study of Similarity-Based and Embedding-Based Link Prediction Methods on Graphs". In 2nd International Conference on Machine Learning & Trends (MLT 2021). AIRCC Publishing Corporation, 2021. http://dx.doi.org/10.5121/csit.2021.111106.

Full text
Abstract
The task of inferring missing links or predicting future ones in a graph based on its current structure is referred to as link prediction. Link prediction methods that are based on pairwise node similarity are well-established approaches in the literature and show good prediction performance in many real-world graphs though they are heuristic. On the other hand, graph embedding approaches learn low-dimensional representation of nodes in graph and are capable of capturing inherent graph features, and thus support the subsequent link prediction task in graph. This appraisal paper studies a selection of methods from both categories on several benchmark (homogeneous) graphs with different properties from various domains. Beyond the intra and inter category comparison of the performances of the methods our aim is also to uncover interesting connections between Graph Neural Network(GNN)-based methods and heuristic ones as a means to alleviate the black-box well-known limitation.
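Two of the classic pairwise node-similarity heuristics that such appraisals typically include, common neighbours and Adamic-Adar, can be sketched directly from an adjacency structure. The toy graph below is illustrative and not taken from the paper's benchmarks.

```python
# Hedged sketch: two similarity-based link prediction scores for a node pair.
import math

def common_neighbours(adj, u, v):
    return len(adj[u] & adj[v])

def adamic_adar(adj, u, v):
    # weight each shared neighbour w by 1 / log(degree(w)); skip degree-1 nodes
    return sum(1.0 / math.log(len(adj[w])) for w in adj[u] & adj[v] if len(adj[w]) > 1)

adj = {                                   # toy undirected graph as neighbour sets
    "a": {"b", "c"}, "b": {"a", "c", "d"},
    "c": {"a", "b", "d"}, "d": {"b", "c"},
}
print(common_neighbours(adj, "a", "d"))   # 2 shared neighbours: b and c
print(adamic_adar(adj, "a", "d"))         # 1/log(3) + 1/log(3)
```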
10

Lyford, Alex, and Lonneke Boels. "Using Machine Learning to Understand Students’ Gaze Patterns on Graphing Tasks". In Bridging the Gap: Empowering and Educating Today’s Learners in Statistics. International Association for Statistical Education, 2022. http://dx.doi.org/10.52041/iase.icots11.t8d2.

Full text
Abstract
Graphs are ubiquitous. Many graphs, including histograms, bar charts, and stacked dotplots, have proven tricky to interpret. Students’ gaze data can indicate students’ interpretation strategies on these graphs. We therefore explore the question: In what way can machine learning quantify differences in students’ gaze data when interpreting two near-identical histograms with graph tasks in between? Our work provides evidence that using machine learning in conjunction with gaze data can provide insight into how students analyze and interpret graphs. This approach also sheds light on the ways in which students may better understand a graph after first being presented with other graph types, including dotplots. We conclude with a model that can accurately differentiate between the first and second time a student solved near-identical histogram tasks.

Reports on the topic "Learning on graphs"

1

Kriegel, Francesco. Learning description logic axioms from discrete probability distributions over description graphs (Extended Version). Technische Universität Dresden, 2018. http://dx.doi.org/10.25368/2022.247.

Abstract
Description logics in their standard setting only allow for representing and reasoning with crisp knowledge, without any degree of uncertainty. Of course, this is a serious shortcoming for use cases where it is impossible to perfectly determine the truth of a statement. To resolve this expressivity restriction, probabilistic variants of description logics have been introduced. Their model-theoretic semantics is built upon so-called probabilistic interpretations, that is, families of directed graphs whose vertices and edges are labeled and on which a probability measure is defined. Results of scientific experiments, e.g., in medicine, psychology, or biology, that are repeated several times can induce probabilistic interpretations in a natural way. In this document, we develop a suitable axiomatization technique for deducing terminological knowledge from the assertional data given in such probabilistic interpretations. More specifically, we consider a probabilistic variant of the description logic EL⊥, and provide a method for constructing a set of rules, so-called concept inclusions, from probabilistic interpretations in a sound and complete manner.
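The following toy sketch illustrates the underlying notion of a probabilistic interpretation: a finite family of labelled graphs (one per repeated experiment, here taken as equally likely) from which empirical probabilities of EL-style concept memberships can be read off. The worlds, role, and concept names are invented, and the report's sound-and-complete construction of concept inclusions is not reproduced.

```python
from dataclasses import dataclass, field

@dataclass
class World:
    labels: dict = field(default_factory=dict)   # vertex -> set of concept names
    edges: set = field(default_factory=set)      # (vertex, role, vertex) triples

def satisfies_exists(world, x, role, concept):
    """Does individual x satisfy the EL concept 'exists role.concept' in this world?"""
    return any(r == role and concept in world.labels.get(t, set())
               for (s, r, t) in world.edges if s == x)

# Three repeated "experiments" observing the same patient, invented for illustration.
worlds = [
    World({"p1": {"Patient"}, "f1": {"Inflammation"}}, {("p1", "hasFinding", "f1")}),
    World({"p1": {"Patient"}, "f1": {"Fracture"}},     {("p1", "hasFinding", "f1")}),
    World({"p1": {"Patient"}, "f1": {"Inflammation"}}, {("p1", "hasFinding", "f1")}),
]

p = sum(satisfies_exists(w, "p1", "hasFinding", "Inflammation") for w in worlds) / len(worlds)
print(p)   # 2/3: empirical probability across the repeated experiments
```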
2

Goldberg, Sean, and Daisy Zhe Wang. Graph Learning in Knowledge Bases. Office of Scientific and Technical Information (OSTI), September 2017. http://dx.doi.org/10.2172/1390764.

3

Holder, Lawrence B., and Diane J. Cook. Graph-Based Structural Pattern Learning. Fort Belvoir, VA: Defense Technical Information Center, July 2006. http://dx.doi.org/10.21236/ada456904.

4

Babkin, Vladyslav V., Viktor V. Sharavara, Volodymyr V. Sharavara, Vladyslav V. Bilous, Andrei V. Voznyak, and Serhiy Ya Kharchenko. Using augmented reality in university education for future IT specialists: educational process and student research work. CEUR Workshop Proceedings, July 2021. http://dx.doi.org/10.31812/123456789/4632.

Abstract
The article substantiates the use of augmented reality (AR) in the university training of future IT specialists, both in the learning process and in students' research work. A survey of university teachers identified the most popular AR applications for training future IT specialists (AR Ruler, AR Physics, Nicola Tesla, Arloon Geometry, AR Geometry, GeoGebra 3D Graphing Calculator, etc.) and discloses the main advantages of these applications. The methodological basis for engaging future IT specialists in research activities on the development and use of AR applications is substantiated. The content of the activities of the student scientific club "Informatics studios" at Borys Grinchenko Kyiv University is developed. As part of the club's activity, students updated the mobile application and the model bank corresponding to the topics "Polyhedrons" for the 11th grade and "Functions, their properties and graphs" for the 10th grade. The expediency of the software tools used to develop the mobile application (Android Studio, SDK, NDK, QR Generator, FTDS Dev, Google Sceneform, Poly) is substantiated, and the stages of the application's development are presented. A survey of students and pupils establishes the positive impact of AR on the learning process.
5

Qi, Fei, Zhaohui Xia, Gaoyang Tang, Hang Yang, Yu Song, Guangrui Qian, Xiong An, Chunhuan Lin, and Guangming Shi. A Graph-based Evolutionary Algorithm for Automated Machine Learning. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ser.v1i2.77.

Abstract
As an emerging field, Automated Machine Learning (AutoML) aims to reduce or eliminate manual operations that require expertise in machine learning. In this paper, a graph-based architecture is employed to represent flexible combinations of ML models, which provides a larger search space than tree-based and stacking-based architectures. Based on this representation, an evolutionary algorithm is proposed to search for the best architecture, where mutation and heredity operators are the key to architecture evolution. Combined with Bayesian hyper-parameter optimization, the proposed approach can automate the machine learning workflow. On the PMLB benchmark, the proposed approach shows state-of-the-art performance compared with TPOT, Autostacker, and auto-sklearn. Some of the optimized models have complex structures that would be difficult to obtain by manual design.
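To convey the evolutionary idea, here is a minimal sketch in which candidate scikit-learn pipelines are mutated and selected by cross-validated score; for brevity the pipeline "graph" is reduced to a linear chain, and the paper's DAG representation, heredity operator, and Bayesian hyper-parameter optimization are not reproduced.

```python
import random
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

def logreg():
    return LogisticRegression(max_iter=1000)

PREPROCESSORS = [StandardScaler, MinMaxScaler, PCA]
ESTIMATORS = [logreg, RandomForestClassifier]

X, y = load_breast_cancer(return_X_y=True)

def random_candidate():
    # A candidate "architecture": one preprocessing step feeding one estimator.
    return [random.choice(PREPROCESSORS), random.choice(ESTIMATORS)]

def mutate(candidate):
    # Mutation operator: replace one node of the chain with a random alternative.
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] = random.choice(PREPROCESSORS if i == 0 else ESTIMATORS)
    return child

def fitness(candidate):
    pipeline = make_pipeline(*(step() for step in candidate))
    return cross_val_score(pipeline, X, y, cv=3).mean()

random.seed(0)
population = [random_candidate() for _ in range(4)]
for _ in range(5):                                   # a few generations
    population += [mutate(random.choice(population)) for _ in range(4)]
    population = sorted(population, key=fitness, reverse=True)[:4]

print(make_pipeline(*(step() for step in population[0])))
```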
6

Michalenko, Joshua, Indu Manickam, and Stephen Heck. GraphAlign: Graph-Enabled Machine Learning for Seismic Event Filtering. Office of Scientific and Technical Information (OSTI), September 2022. http://dx.doi.org/10.2172/1887336.

7

Bilous, Vladyslav V., Volodymyr V. Proshkin, and Oksana S. Lytvyn. Development of AR-applications as a promising area of research for students. [б. в.], November 2020. http://dx.doi.org/10.31812/123456789/4409.

Abstract
The article substantiates the importance of using augmented reality in the educational process, in particular in the study of natural and mathematical disciplines. The essence of AR (augmented reality), the characteristics of AR hardware and software, and the directions and advantages of using AR in the educational process are outlined. It is argued that AR is a unique tool that allows educators to teach the new digital generation in a readable, comprehensible, and memorable format, which is the basis for developing a strong interest in learning. The article presents the results of the international study on the quality of education PISA (Programme for International Student Assessment), which stimulated work on the problem of using AR in mathematics teaching. As part of student research work at Borys Grinchenko Kyiv University, an AR application for mathematics was developed using the following tools: Android Studio, SDK, ARCore, QR Generator, Math pattern. A number of markers of mathematical objects have been developed that correspond to the school mathematics course (topics: "Polyhedra" and "Functions, their properties and graphs"). The developed AR tools were introduced into the teaching of students of the specialty "Mathematics". Prospects for research into developing a methodology for teaching particular mathematics topics with the use of AR have been defined.
8

Homel, M. A., C. S. Sherman, and J. P. Morris. Machine Learning for Constitutive Modeling on a Graphics Processing Unit. Office of Scientific and Technical Information (OSTI), November 2019. http://dx.doi.org/10.2172/1576907.

9

Gustafsson, Martin, and Nick Taylor. The Politics of Improving Learning Outcomes in South Africa. Research on Improving Systems of Education (RISE), October 2022. http://dx.doi.org/10.35489/bsg-rise-2022/pe03.

Abstract
This paper examines political economy and ideology, two important determinants of educational development, in the South African context, using an approach that is in part dialogical, while paying special attention to the acquisition of foundational skills in the early grades.
10

Li, Wenting. Robust Fault Location in Power Grids through Graph Learning at Low Label Rates. Office of Scientific and Technical Information (OSTI), February 2021. http://dx.doi.org/10.2172/1768426.
