Selected scientific literature on the topic "Plongements de graphes"
Create a precise reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings and other relevant scientific sources on the topic "Plongements de graphes".
Next to every source in the list of references there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, whenever it is available in the metadata.
Journal articles on the topic "Plongements de graphes"
Lewis, Stephen, and Nathaniel Thiem. "Nonzero coefficients in restrictions and tensor products of supercharacters of $U_n(q)$ (extended abstract)". Discrete Mathematics & Theoretical Computer Science DMTCS Proceedings vol. AN,..., Proceedings (January 1, 2010). http://dx.doi.org/10.46298/dmtcs.2840.
Fang, Wenjie. "A generalization of the quadrangulation relation to constellations and hypermaps". Discrete Mathematics & Theoretical Computer Science DMTCS Proceedings vol. AS,..., Proceedings (January 1, 2013). http://dx.doi.org/10.46298/dmtcs.12789.
Bernardi, Olivier, and Guillaume Chapuy. "Counting unicellular maps on non-orientable surfaces". Discrete Mathematics & Theoretical Computer Science DMTCS Proceedings vol. AN,..., Proceedings (January 1, 2010). http://dx.doi.org/10.46298/dmtcs.2859.
Fusy, Eric. "New bijective links on planar maps". Discrete Mathematics & Theoretical Computer Science DMTCS Proceedings vol. AJ,..., Proceedings (January 1, 2008). http://dx.doi.org/10.46298/dmtcs.3628.
Theses / dissertations on the topic "Plongements de graphes"
Beaudou, Laurent. "Autour de problèmes de plongements de graphes". PhD thesis, Grenoble 1, 2009. http://www.theses.fr/2009GRE10089.
This Ph.D. manuscript is built around the notion of graph embedding. An embedding of a graph G is a mapping from the vertices of G to elements of another structure that preserves some properties of G. There are two types of embeddings. Combinatorial embeddings map the vertices of a graph G to the vertices of a graph H; the property usually preserved is adjacency between vertices. In this thesis, we consider isometric embeddings, which in addition preserve the distances between vertices. We give structural characterizations for families of graphs isometrically embeddable in hypercubes or Hamming graphs. Topological embeddings aim at drawing a graph G on some surface: vertices are mapped to distinct points of the surface and edges are represented by continuous curves linking these points. Is it possible to draw a graph G so that the edges do not cross each other? If not, what is the minimum number of crossings of a drawing of G? We deal with these questions on different surfaces, and in relation with graph operations such as the direct product or the zip product.
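To make the isometric embeddings mentioned in this abstract concrete, here is a minimal sketch (not taken from the thesis; it assumes Python with the networkx library) that checks that the 6-cycle embeds isometrically into the 3-dimensional hypercube: every graph distance must equal the Hamming distance between the corresponding binary labels.

```python
# Minimal sketch, assuming networkx: verify an isometric embedding of the
# 6-cycle C6 into the hypercube Q3 (C6 is a partial cube).
import networkx as nx

def hamming(a: str, b: str) -> int:
    """Hamming distance between two binary labels of Q3 vertices."""
    return sum(x != y for x, y in zip(a, b))

cycle = nx.cycle_graph(6)
# Candidate labelling: vertex i of C6 is sent to this vertex of Q3.
labels = ["000", "001", "011", "111", "110", "100"]

# The embedding is isometric iff distances in C6 match Hamming distances.
dist = dict(nx.all_pairs_shortest_path_length(cycle))
isometric = all(dist[u][v] == hamming(labels[u], labels[v])
                for u in cycle for v in cycle)
print("C6 embeds isometrically into Q3:", isometric)  # expected: True
```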
Beaudou, Laurent. "Autour de problèmes de plongements de graphes". PhD thesis, Université Joseph Fourier (Grenoble), 2009. http://tel.archives-ouvertes.fr/tel-00401226.
Gaber, Jaafar. "Plongements et manipulations d'arbres dans les architectures distribuées". Lille 1, 1998. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/1998/50376-1998-447.pdf.
Maignant, Elodie. "Plongements barycentriques pour l'apprentissage géométrique de variétés : application aux formes et graphes". Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4096.
An MRI image has over 60,000 pixels. The largest known human protein consists of around 30,000 amino acids. We call such data high-dimensional. In practice, most high-dimensional data is high-dimensional only artificially. For example, of all the images that could be randomly generated by coloring 256 x 256 pixels, only a very small subset would resemble an MRI image of a human brain. This is known as the intrinsic dimension of such data. Therefore, learning high-dimensional data is often synonymous with dimensionality reduction. There are numerous methods for reducing the dimension of a dataset, the most recent of which can be classified according to two approaches. A first approach, known as manifold learning or non-linear dimensionality reduction, is based on the observation that some of the physical laws behind the data we observe are non-linear. In this case, trying to explain the intrinsic dimension of a dataset with a linear model is sometimes unrealistic. Instead, manifold learning methods assume a locally linear model. Moreover, with the emergence of statistical shape analysis, there has been a growing awareness that many types of data are naturally invariant to certain symmetries (rotations, reparametrizations, permutations...). Such properties are directly mirrored in the intrinsic dimension of such data. These invariances cannot be faithfully transcribed by Euclidean geometry, so there is a growing interest in modeling such data using finer structures such as Riemannian manifolds. A second recent approach to dimension reduction thus consists in generalizing existing methods to non-Euclidean data; this is known as geometric learning. In order to combine geometric learning and manifold learning, we investigated the method called locally linear embedding, which has the specificity of being based on the notion of barycenter, a notion a priori defined in Euclidean spaces but which generalizes to Riemannian manifolds. In fact, the method called barycentric subspace analysis, one of those generalizing principal component analysis to Riemannian manifolds, is based on this notion as well. Here we rephrase both methods under the new notion of barycentric embeddings. Essentially, barycentric embeddings inherit the structure of most linear and non-linear dimension reduction methods, but rely on a (locally) barycentric -- affine -- model rather than a linear one. The core of our work lies in the analysis of these methods, both on a theoretical and a practical level. In particular, we address the application of barycentric embeddings to two important examples in geometric learning: shapes and graphs. In addition to practical implementation issues, each of these examples raises its own theoretical questions, mostly related to the geometry of quotient spaces. In particular, we highlight that, compared to standard dimension reduction methods in graph analysis, barycentric embeddings stand out for their better interpretability. In parallel with these examples, we characterize the geometry of locally barycentric embeddings, which generalize the projection computed by locally linear embedding. Finally, algorithms for geometric manifold learning, novel in their approach, complete this work.
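As a rough illustration of the locally linear embedding method discussed in this abstract (a generic Euclidean sketch, not the barycentric or Riemannian algorithms of the thesis; it assumes scikit-learn), one can reduce a 3D point cloud of intrinsic dimension 2 to two dimensions:

```python
# Minimal sketch, assuming scikit-learn: locally linear embedding of a
# swiss roll, a 3D point cloud whose intrinsic dimension is 2.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Each point is reconstructed as an affine (barycentric) combination of its
# nearest neighbours; the 2D embedding preserves these reconstruction weights.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)
print(Y.shape)  # (1000, 2)
```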
Perin, Chloé. "Plongements élémentaires dans un groupe hyperbolique sans torsion". PhD thesis, Université de Caen, 2008. http://tel.archives-ouvertes.fr/tel-00460330.
Texto completo da fonteLe, coz Corentin. "Separation and Poincaré profiles Separation profiles, isoperimetry, growth and compression Poincaré profiles of lamplighter diagonal products". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM014.
The goal of this thesis report is to present my research concerning separation and Poincaré profiles. The separation profile first appeared in 2012 in a seminal article by Benjamini, Schramm and Timár. This definition was based on preceding research in the field of computer science, mainly the work of Lipton and Tarjan concerning planar graphs, and of Miller, Teng, Thurston and Vavasis concerning overlap graphs. The separation profile now plays a role in geometric group theory, where my personal interests lie, because of its monotonicity under coarse embeddings. It was generalized by Hume, Mackay and Tessera in 2019 to a spectrum of profiles, called the Poincaré profiles.
Marcus, Michel. "Cartes, hypercartes et diagrammes de cordes". Bordeaux 1, 1997. http://www.theses.fr/1997BOR10509.
Prouteau, Thibault. "Graphs, Words, and Communities: converging paths to interpretability with a frugal embedding framework". Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1006.
Representation learning with word and graph embedding models allows distributed representations of information that can in turn be used as input to machine learning algorithms. Over the last two decades, the tasks of embedding graph nodes and words have shifted from matrix factorization approaches that could be trained in a matter of minutes to large models requiring ever larger quantities of training data and sometimes weeks of training on large hardware architectures. However, in a context of global warming where sustainability is a critical concern, we ought to look back at previous approaches and consider their performance with regard to resource consumption. Furthermore, with the growing involvement of embeddings in sensitive machine learning applications (judiciary system, health), the need for more interpretable and explainable representations has become apparent. To foster efficient representation learning and interpretability, this thesis introduces the Lower Dimension Bipartite Graph Framework (LDBGF), a node embedding framework able to embed, with the same pipeline, graph data and text from large corpora represented as co-occurrence networks. Within this framework, we introduce two implementations (SINr-NR, SINr-MF) that leverage community detection in networks to uncover a latent embedding space where items (nodes/words) are represented according to their links to communities. We show that SINr-NR and SINr-MF can compete with similar embedding approaches on tasks such as predicting missing links in networks (link prediction) or node features (degree centrality, PageRank score). Regarding word embeddings, we show that SINr-NR is a good contender to represent words via word co-occurrence networks. Finally, we demonstrate the interpretability of SINr-NR on multiple aspects: first with a human evaluation that shows that SINr-NR's dimensions are interpretable to some extent; second, by investigating the sparsity of vectors, and how having fewer dimensions may allow interpreting how the dimensions combine and allow sense to emerge.
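As a hedged sketch of the general idea described in this abstract (this is not the SINr implementation; it assumes networkx and a toy graph), each node can be represented by the number of its links to each detected community, which makes every dimension of the vector directly interpretable:

```python
# Rough sketch, assuming networkx: community-based node vectors where each
# dimension counts a node's links into one detected community.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()
communities = list(greedy_modularity_communities(G))

# One dimension per community; dimensions are interpretable by construction.
embedding = {
    node: [sum(1 for nb in G[node] if nb in com) for com in communities]
    for node in G
}
print(len(communities), embedding[0])
```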
Islam, Md Kamrul. "Explainable link prediction in large complex graphs - application to drug repurposing". Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0203.
Many real-world complex systems can be well represented with graphs, where nodes represent objects or entities and links/relations represent interactions between pairs of nodes. Link prediction (LP) is one of the most interesting and long-standing problems in the field of graph mining; it predicts the probability of a link between two unconnected nodes based on available information in the current graph. This thesis studies the LP problem in graphs. It consists of two parts: LP in simple graphs and LP in knowledge graphs (KGs). In the first part, the LP problem is defined as predicting the probability of a link between a pair of nodes in a simple graph. In the first study, a few similarity-based and embedding-based LP approaches are evaluated and compared on simple graphs from various domains. The study also criticizes the traditional way of computing the precision metric of similarity-based approaches, as the computation faces the difficulty of tuning the threshold for deciding link existence based on the similarity score; we propose a new way of computing the precision metric. The results show the expected superiority of embedding-based approaches. Still, each of the similarity-based approaches is competitive on graphs with specific properties. We could check experimentally that similarity-based approaches are fully explainable but lack generalization due to their heuristic nature, whereas embedding-based approaches are general but not explainable. The second study tries to alleviate the unexplainability limitation of embedding-based approaches by uncovering interesting connections between them and similarity-based approaches, to get an idea of what is learned by embedding-based approaches. The third study demonstrates how similarity-based approaches can be ensembled to design an explainable supervised LP approach. Interestingly, the study shows high LP performance for the supervised approach across various graphs, competitive with embedding-based approaches. The second part of the thesis focuses on LP in KGs. A KG is represented as a collection of RDF triples (head, relation, tail), where the head and the tail are two entities connected by a specific relation. The LP problem in a KG is formulated as predicting missing head or tail entities in a triple. LP approaches based on the embeddings of entities and relations of a KG have become very popular in recent years, and generating negative triples is an important task in KG embedding methods. The first study in this part discusses a new method called SNS to generate high-quality negative triples during the training of embedding methods for learning embeddings of KGs. Our results show better LP performance when SNS is injected into an embedding approach than when state-of-the-art negative triple sampling methods are used. The second study in this part discusses a new neuro-symbolic method for mining rules and an abduction strategy to explain LP by an embedding-based approach using the learned rules. The third study applies explainable LP to a COVID-19 KG to develop a new drug repurposing approach for COVID-19. The approach learns "ensemble embeddings" of entities and relations in a COVID-19-centric KG, in order to get a better latent representation of the graph elements. For the first time to our knowledge, molecular docking is then used to evaluate the predictions obtained from drug repurposing using KG embedding.
Molecular evaluation and explanatory paths bring reliability to prediction results and constitute new complementary and reusable methods for assessing KG-based drug repurposing. The last study proposes a distributed architecture for learning KG embeddings in distributed and parallel settings. The results of this study show that the computational time of embedding methods improves remarkably, without affecting LP performance, when they are trained in the proposed distributed settings rather than in traditional centralized settings.
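To illustrate the similarity-based link prediction baseline discussed in this abstract (a generic sketch assuming networkx, not the thesis code), unconnected node pairs can be scored with the Adamic-Adar index and ranked; the highest-scoring pairs are the predicted missing links:

```python
# Minimal sketch, assuming networkx: similarity-based link prediction with
# the Adamic-Adar index on a small toy graph.
import networkx as nx

G = nx.karate_club_graph()

# Score every currently unconnected pair of nodes.
candidates = list(nx.non_edges(G))
scores = nx.adamic_adar_index(G, candidates)

# Rank by score: top pairs are predicted as the most likely missing links.
for u, v, s in sorted(scores, key=lambda t: t[2], reverse=True)[:5]:
    print(f"({u}, {v}) score = {s:.3f}")
```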
Kobeissi, Mohamed. "Plongement de graphes dans l'hypercube". PhD thesis, Grenoble 1, 2001. https://theses.hal.science/tel-00004683.
Books on the topic "Plongements de graphes"
Geometry of Semilinear Embeddings: Relations to Graphs and Codes. World Scientific Publishing Co Pte Ltd, 2015.
Encontre o texto completo da fonteRingel, Gerhard. Map Color Theorem. Brand: Springer, 2011.