Dissertations on the topic "Plongements de graphes"
Consult the top 26 dissertations for research on the topic "Plongements de graphes".
Beaudou, Laurent. "Autour de problèmes de plongements de graphes." PhD thesis, Grenoble 1, 2009. http://www.theses.fr/2009GRE10089.
This Ph.D. manuscript is built around the notion of graph embedding. An embedding of a graph G is a map sending the vertices of G to elements of another structure while preserving some properties of G. There are two types of embeddings. Combinatorial embeddings map the vertices of a graph G to the vertices of a graph H; the property usually preserved is adjacency between vertices. In this thesis, we consider isometric embeddings, which in addition preserve the distances between vertices. We give structural characterizations for families of graphs isometrically embeddable into hypercubes or Hamming graphs. Topological embeddings aim at drawing a graph G on some surface: vertices are mapped to distinct points of the surface and edges are represented by continuous curves linking these points. Is it possible to draw a graph G so that the edges do not cross each other? If not, what is the minimum number of crossings of a drawing of G? We deal with these questions on different surfaces, and in relation with graph operations such as the direct product or the zip product.
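To make the first notion concrete, here is a minimal sketch (not taken from the thesis) of what an isometric embedding into a hypercube amounts to: vertices receive binary labels, and the graph distance must coincide with the Hamming distance between labels. The 6-cycle and its labelling below are a textbook example chosen only for illustration.

```python
# Hedged sketch: checking that a labelling of vertices by binary tuples is an
# isometric embedding into a hypercube, i.e. graph distance == Hamming distance.
# The example graph and labels are illustrative, not taken from the thesis.
import networkx as nx

def is_isometric_hypercube_embedding(G, labels):
    """labels: dict mapping each vertex of G to a binary tuple of equal length."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for u in G:
        for v in G:
            hamming = sum(a != b for a, b in zip(labels[u], labels[v]))
            if hamming != dist[u][v]:
                return False
    return True

# The 6-cycle is a classical partial cube: it embeds isometrically into the 3-cube.
C6 = nx.cycle_graph(6)
labels = {0: (0, 0, 0), 1: (1, 0, 0), 2: (1, 1, 0), 3: (1, 1, 1), 4: (0, 1, 1), 5: (0, 0, 1)}
print(is_isometric_hypercube_embedding(C6, labels))  # True
```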
Beaudou, Laurent. "Autour de problèmes de plongements de graphes." PhD thesis, Université Joseph Fourier (Grenoble), 2009. http://tel.archives-ouvertes.fr/tel-00401226.
Gaber, Jaafar. "Plongements et manipulations d'arbres dans les architectures distribuées." Lille 1, 1998. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/1998/50376-1998-447.pdf.
Maignant, Elodie. "Plongements barycentriques pour l'apprentissage géométrique de variétés : application aux formes et graphes." Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4096.
An MRI image has over 60,000 pixels. The largest known human protein consists of around 30,000 amino acids. We call such data high-dimensional. In practice, most high-dimensional data is high-dimensional only artificially. For example, of all the images that could be randomly generated by coloring 256 x 256 pixels, only a very small subset would resemble an MRI image of a human brain. This is known as the intrinsic dimension of such data. Therefore, learning high-dimensional data is often synonymous with dimensionality reduction. There are numerous methods for reducing the dimension of a dataset, the most recent of which can be classified according to two approaches. A first approach, known as manifold learning or non-linear dimensionality reduction, is based on the observation that some of the physical laws behind the data we observe are non-linear. In this case, trying to explain the intrinsic dimension of a dataset with a linear model is sometimes unrealistic. Instead, manifold learning methods assume a locally linear model. Moreover, with the emergence of statistical shape analysis, there has been a growing awareness that many types of data are naturally invariant to certain symmetries (rotations, reparametrizations, permutations...). Such properties are directly mirrored in the intrinsic dimension of such data. These invariances cannot be faithfully transcribed by Euclidean geometry, so there is a growing interest in modeling such data using finer structures such as Riemannian manifolds. A second, more recent approach to dimension reduction then consists in generalizing existing methods to non-Euclidean data; this is known as geometric learning. In order to combine geometric learning and manifold learning, we investigated the method called locally linear embedding, which has the specificity of being based on the notion of barycenter, a notion a priori defined in Euclidean spaces but which generalizes to Riemannian manifolds. In fact, the method called barycentric subspace analysis, one of those generalizing principal component analysis to Riemannian manifolds, is based on this notion as well. Here we rephrase both methods under the new notion of barycentric embeddings. Essentially, barycentric embeddings inherit the structure of most linear and non-linear dimension reduction methods, but rely on a (locally) barycentric, i.e. affine, model rather than a linear one. The core of our work lies in the analysis of these methods, both on a theoretical and practical level. In particular, we address the application of barycentric embeddings to two important examples in geometric learning: shapes and graphs. In addition to practical implementation issues, each of these examples raises its own theoretical questions, mostly related to the geometry of quotient spaces. In particular, we highlight that, compared to standard dimension reduction methods in graph analysis, barycentric embeddings stand out for their better interpretability. In parallel with these examples, we characterize the geometry of locally barycentric embeddings, which generalize the projection computed by locally linear embedding. Finally, algorithms for geometric manifold learning, novel in their approach, complete this work.
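For readers who want to see the Euclidean starting point that barycentric embeddings generalize, here is a hedged sketch of classical locally linear embedding using scikit-learn; the synthetic swiss-roll dataset and the parameter values are placeholders, not the shape or graph data studied in the thesis.

```python
# Hedged sketch: Euclidean locally linear embedding (LLE), the method that
# barycentric embeddings generalize to Riemannian manifolds. Synthetic data only.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# Each point is reconstructed as a barycenter (affine combination) of its
# neighbors; LLE then searches for low-dimensional coordinates preserving
# those barycentric weights.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)
print(Y.shape)  # (1000, 2)
```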
Perin, Chloé. "Plongements élémentaires dans un groupe hyperbolique sans torsion." PhD thesis, Université de Caen, 2008. http://tel.archives-ouvertes.fr/tel-00460330.
Le Coz, Corentin. "Separation and Poincaré profiles Separation profiles, isoperimetry, growth and compression Poincaré profiles of lamplighter diagonal products." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM014.
The goal of this thesis report is to present my research concerning separation and Poincaré profiles. The separation profile first appeared in 2012 in a seminal article by Benjamini, Schramm and Timár. This definition was based on earlier research in the field of computer science, mainly the work of Lipton and Tarjan concerning planar graphs, and of Miller, Teng, Thurston and Vavasis concerning overlap graphs. The separation profile now plays a role in geometric group theory, where my personal interests lie, because of its monotonicity under coarse embeddings. It was generalized by Hume, Mackay and Tessera in 2019 to a spectrum of profiles, called the Poincaré profiles.
Marcus, Michel. "Cartes, hypercartes et diagrammes de cordes." Bordeaux 1, 1997. http://www.theses.fr/1997BOR10509.
Prouteau, Thibault. "Graphs, Words, and Communities : converging paths to interpretability with a frugal embedding framework." Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1006.
Representation learning with word and graph embedding models allows distributed representations of information that can in turn be used as input to machine learning algorithms. Over the last two decades, the tasks of embedding graph nodes and words have shifted from matrix factorization approaches that could be trained in a matter of minutes to large models requiring ever larger quantities of training data and sometimes weeks of training on large hardware architectures. However, in a context of global warming where sustainability is a critical concern, we ought to look back at previous approaches and consider their performance with regard to resource consumption. Furthermore, with the growing involvement of embeddings in sensitive machine learning applications (judiciary system, health), the need for more interpretable and explainable representations has become manifest. To foster efficient representation learning and interpretability, this thesis introduces the Lower Dimension Bipartite Graph Framework (LDBGF), a node embedding framework able to embed, with the same pipeline, graph data and text from large corpora represented as co-occurrence networks. Within this framework, we introduce two implementations (SINr-NR, SINr-MF) that leverage community detection in networks to uncover a latent embedding space where items (nodes/words) are represented according to their links to communities. We show that SINr-NR and SINr-MF can compete with similar embedding approaches on tasks such as predicting missing links in networks (link prediction) or node features (degree centrality, PageRank score). Regarding word embeddings, we show that SINr-NR is a good contender for representing words via word co-occurrence networks. Finally, we demonstrate the interpretability of SINr-NR in several respects: first, with a human evaluation showing that SINr-NR's dimensions are to some extent interpretable; second, by investigating the sparsity of the vectors and how having fewer dimensions makes it possible to interpret how dimensions combine and let meaning emerge.
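As a rough illustration of the community-based idea behind approaches such as SINr-NR (this is the general principle, not the authors' implementation), one can describe each node by the distribution of its links over detected communities, which is what makes each dimension nameable:

```python
# Hedged sketch of community-based node embeddings: detect communities, then
# represent each node by its normalized connections to those communities.
# Toy graph and Louvain detection stand in for the thesis pipeline.
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

G = nx.karate_club_graph()
communities = louvain_communities(G, seed=0)
membership = {v: c for c, nodes in enumerate(communities) for v in nodes}

def community_vector(G, v, n_comms):
    vec = np.zeros(n_comms)
    for u in G.neighbors(v):
        vec[membership[u]] += 1.0          # count links toward each community
    return vec / max(vec.sum(), 1.0)       # normalize; each dimension = one community

emb = np.vstack([community_vector(G, v, len(communities)) for v in G])
print(emb.shape)  # (34, number_of_communities)
```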
Islam, Md Kamrul. "Explainable link prediction in large complex graphs - application to drug repurposing." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0203.
Many real-world complex systems can be well represented with graphs, where nodes represent objects or entities and links/relations represent interactions between pairs of nodes. Link prediction (LP) is one of the most interesting and long-standing problems in the field of graph mining; it predicts the probability of a link between two unconnected nodes based on the available information in the current graph. This thesis studies the LP problem in graphs. It consists of two parts: LP in simple graphs and LP in knowledge graphs (KGs). In the first part, the LP problem is defined as predicting the probability of a link between a pair of nodes in a simple graph. In the first study, several similarity-based and embedding-based LP approaches are evaluated and compared on simple graphs from various domains. The study also criticizes the traditional way of computing the precision metric of similarity-based approaches, as the computation faces the difficulty of tuning the threshold for deciding link existence based on the similarity score; we proposed a new way of computing the precision metric. The results showed the expected superiority of embedding-based approaches. Still, each of the similarity-based approaches is competitive on graphs with specific properties. We could check experimentally that similarity-based approaches are fully explainable but lack generalization due to their heuristic nature, whereas embedding-based approaches are general but not explainable. The second study tries to alleviate the explainability limitation of embedding-based approaches by uncovering interesting connections between them and similarity-based approaches, to get an idea of what is learned in embedding-based approaches. The third study demonstrates how similarity-based approaches can be ensembled to design an explainable supervised LP approach. Interestingly, the study shows high LP performance for the supervised approach across various graphs, competitive with embedding-based approaches. The second part of the thesis focuses on LP in KGs. A KG is represented as a collection of RDF triples (head, relation, tail), where the head and the tail are two entities connected by a specific relation. The LP problem in a KG is formulated as predicting missing head or tail entities in a triple. LP approaches based on embeddings of the entities and relations of a KG have become very popular in recent years, and generating negative triples is an important task in KG embedding methods. The first study in this part discusses a new method, called SNS, to generate high-quality negative triples during the training of KG embedding methods. Our results show better LP performance when SNS is injected into an embedding approach than when injecting state-of-the-art negative triple sampling methods. The second study discusses a new neuro-symbolic method of mining rules and an abduction strategy to explain LP by an embedding-based approach utilizing the learned rules. The third study applies explainable LP to a COVID-19 KG to develop a new drug repurposing approach for COVID-19. The approach learns "ensemble embeddings" of entities and relations in a COVID-19-centric KG, in order to get a better latent representation of the graph elements. For the first time to our knowledge, molecular docking is then used to evaluate the predictions obtained from drug repurposing using KG embeddings.
Molecular evaluation and explanatory paths bring reliability to prediction results and constitute new complementary and reusable methods for assessing KG-based drug repurposing. The last study proposes a distributed architecture for learning KG embeddings in distributed and parallel settings. The results of the study show that the computational time of embedding methods improves remarkably, without affecting LP performance, when they are trained in the proposed distributed setting rather than in a traditional centralized setting.
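To illustrate the kind of similarity-based heuristic compared in the first part of this thesis, here is a hedged sketch of link prediction with the Adamic-Adar index on a toy graph; the graph and the way candidate links are ranked are illustrative only, not the thesis' evaluation protocol.

```python
# Hedged sketch: similarity-based link prediction with the Adamic-Adar heuristic.
# Higher scores suggest a missing link is more likely to exist.
import networkx as nx

G = nx.karate_club_graph()
non_edges = list(nx.non_edges(G))          # candidate (unconnected) node pairs

scores = sorted(nx.adamic_adar_index(G, non_edges), key=lambda x: -x[2])
for u, v, s in scores[:5]:
    print(f"candidate link ({u}, {v}) score={s:.3f}")
```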
Kobeissi, Mohamed. "Plongement de graphes dans l'hypercube." PhD thesis, Grenoble 1, 2001. https://theses.hal.science/tel-00004683.
Kobeissi, Mohamed. "Plongement de graphes dans l'hypercube." PhD thesis, Université Joseph Fourier (Grenoble), 2001. http://tel.archives-ouvertes.fr/tel-00004683.
Trouillon, Théo. "Modèles d'embeddings à valeurs complexes pour les graphes de connaissances." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM048/document.
The explosion of widely available relational data in the form of knowledge graphs enabled many applications, including automated personal agents, recommender systems and enhanced web search results. The very large size and notorious incompleteness of these databases call for automatic knowledge graph completion methods to make these applications viable. Knowledge graph completion, also known as link prediction, deals with automatically understanding the structure of large knowledge graphs (labeled directed graphs) to predict missing entries (labeled edges). An increasingly popular approach consists in representing knowledge graphs as third-order tensors and using tensor factorization methods to predict their missing entries. State-of-the-art factorization models propose different trade-offs between modeling expressiveness and time and space complexity. We introduce a new model, ComplEx (for Complex Embeddings), to reconcile expressiveness and complexity through the use of complex-valued factorization, and explore its link with unitary diagonalization. We corroborate our approach theoretically and show that all possible knowledge graphs can be exactly decomposed by the proposed model. Our approach based on complex embeddings is arguably simple, as it only involves a complex-valued trilinear product, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed ComplEx model is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link-prediction benchmarks. We also demonstrate its ability to learn useful vectorial representations for other tasks, by enhancing word embeddings that improve performance on the natural language problem of entailment recognition between pairs of sentences. In the last part of this thesis, we explore the ability of factorization models to learn relational patterns from observed data. By their vectorial nature, it is not only hard to interpret why this class of models works so well, but also to understand where they fail and how they might be improved. We conduct an experimental survey of state-of-the-art models, not towards a purely comparative end, but as a means to get insight into their inductive abilities. To assess the strengths and weaknesses of each model, we create simple tasks that exhibit, first, atomic properties of knowledge graph relations and, then, common inter-relational inference through synthetic genealogies. Based on these experimental results, we propose new research directions to improve on existing models, including ComplEx.
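The complex-valued trilinear product mentioned above is simple enough to sketch; the embeddings below are random stand-ins rather than trained parameters, but the scoring form is the one ComplEx is known for.

```python
# Hedged sketch of a ComplEx-style scoring function:
# score(h, r, t) = Re(<e_h, w_r, conj(e_t)>) with complex-valued embeddings.
# Embeddings here are random placeholders, not trained parameters.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
e_h = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # head entity
w_r = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # relation
e_t = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # tail entity

def complex_score(h, r, t):
    """Real part of the complex trilinear product; asymmetric in h and t."""
    return np.real(np.sum(h * r * np.conj(t)))

print(complex_score(e_h, w_r, e_t))   # plausibility of (h, r, t)
print(complex_score(e_t, w_r, e_h))   # generally different: antisymmetric relations are expressible
```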
Bloyet, Nicolas. "Caractérisation et plongement de sous-graphes colorés : application à la construction de modèles structures à activité (QSAR)." Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS546.
In the field of chemistry, it is interesting to be able to estimate the physicochemical properties of molecules, especially for industrial applications. These are difficult to estimate by physical simulations, as their implementation often presents prohibitive time complexity. However, the emergence of data (public or private) opens new perspectives for treating these problems with statistical methods and machine learning. The main difficulty lies in the characterization of molecules: these are more like a network of atoms (in other words, a colored graph) than a vector. Unfortunately, statistical modeling methods usually deal with observations encoded as vectors, hence the need for specific methods able to deal with graph-encoded observations, called structure-activity relationships. The aim of this thesis is to take advantage of public corpora to learn the best possible representations of these structures, and to transfer this global knowledge to smaller datasets. We adapted methods used in natural language processing to achieve this goal. To implement them, more theoretical work was needed, especially on the graph isomorphism problem. The results obtained on classification/regression tasks are at least competitive with the state of the art, and sometimes even better, in particular on restricted datasets, attesting to opportunities for transfer learning in this field.
Boschin, Armand. "Machine learning techniques for automatic knowledge graph completion." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT016.
A knowledge graph is a directed graph in which nodes are entities and edges, typed by a relation, represent known facts linking two entities. These graphs can encode a wide variety of information, but their construction and exploitation can be complex. Historically, symbolic methods have been used to extract rules about entities and relations, to correct anomalies or to predict missing facts. More recently, techniques of representation learning, or embeddings, have attempted to solve these same tasks. Initially purely algebraic or geometric, these methods have become more complex with deep neural networks and have sometimes been combined with pre-existing symbolic techniques. In this thesis, we first focus on the problem of implementation. Indeed, the diversity of libraries used makes the comparison of results obtained by different models a complex task. In this context, the Python library TorchKGE was developed to provide a unique setup for the implementation of embedding models and a highly efficient inference evaluation module. This library relies on the GPU-accelerated tensor computation provided by PyTorch, is compatible with widespread optimization libraries and is available as open source. We then consider the automatic enrichment of Wikidata by typing the hyperlinks linking Wikipedia pages. A preliminary study showed that the graph of Wikipedia articles is much denser than the corresponding knowledge graph in Wikidata. A new training method involving relations and an inference method using entity types were proposed, and experiments showed the relevance of the combined approach, including on a new dataset. Finally, we explore automatic entity typing as a hierarchical classification task. This led to the design of a new hierarchical loss used to train tensor-based models, along with a new type of encoder. Experiments on two datasets allowed a good understanding of the impact that prior knowledge of the class taxonomy can have on a classifier, but also reinforced the intuition that the hierarchy can be learned from the features if the dataset is large enough.
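For readers unfamiliar with the translational family of knowledge graph embedding models that libraries such as TorchKGE implement, here is a minimal sketch in plain PyTorch (deliberately not the TorchKGE API); entity counts, dimensions and indices are placeholders.

```python
# Hedged sketch (plain PyTorch, not TorchKGE): a TransE-style scoring function,
# representative of the kind of embedding model such libraries implement.
import torch

n_entities, n_relations, dim = 100, 10, 50
ent = torch.nn.Embedding(n_entities, dim)
rel = torch.nn.Embedding(n_relations, dim)

def transe_score(h_idx, r_idx, t_idx):
    """Negative L2 distance ||h + r - t||: higher means a more plausible triple."""
    h, r, t = ent(h_idx), rel(r_idx), ent(t_idx)
    return -torch.norm(h + r - t, p=2, dim=-1)

h = torch.tensor([0, 1]); r = torch.tensor([2, 3]); t = torch.tensor([4, 5])
print(transe_score(h, r, t))  # one plausibility score per (head, relation, tail) triple
```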
Dodane, Olivier. "Théorèmes de Petri pour les courbes stables et dégénérescence du système d'équations du plongement canonique." Strasbourg, 2009. https://publication-theses.unistra.fr/public/theses_doctorat/2009/DODANE_Olivier_2009.pdf.
Petri's theorem states that the canonical image of a non-hyperelliptic smooth curve of genus g >= 4 defined over an algebraically closed field is an intersection of quadrics and cubics. Moreover, one can exhibit a system of equations for this image. These results are due to Petri (1923) and were generalized and transcribed into modern language by Saint-Donat (1973). The moduli space of smooth curves is not proper and can be completed by adding stable curves. It is therefore natural to look for generalizations of Petri's theorem for stable curves and to examine questions of degeneracy. In this thesis, we consider on the one hand the case of a stable curve with one singular point whose normalization is hyperelliptic, and on the other hand the case of a stable curve whose graph is planar. Moreover, we study the canonical embedding of a stable curve defined over a discrete valuation ring. The general method consists in describing the canonical sheaf and constructing a well-adapted basis for the space of its global sections; constructing quadrics and cubics in the canonical ideal; and proving that these equations generate the canonical ideal. The text also contains new biographical information concerning the German mathematician Karl Petri.
Philibert, Manon. "Cubes partiels : complétion, compression, plongement." Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0403.
Partial cubes (i.e., isometric subgraphs of hypercubes) are a fundamental class in metric graph theory. They comprise many important graph classes (trees, median graphs, tope graphs of complexes of oriented matroids, etc.) arising from different areas of research such as discrete geometry, combinatorics or geometric group theory. First, we investigate the structure of partial cubes of VC-dimension 2. We show that those graphs can be obtained via amalgams from even cycles and full subdivisions of complete graphs. This decomposition allows us to obtain various characterizations. In particular, any such partial cube can be completed to an ample partial cube of VC-dimension 2. Then, we show that the tope graphs of oriented matroids and of complexes of uniform oriented matroids can also be completed to ample partial cubes of the same VC-dimension. Using a result of Moran and Warmuth, we establish that those classes satisfy the conjecture of Floyd and Warmuth, one of the oldest open problems in computational machine learning; in particular, they admit (improper labeled) compression schemes of size equal to their VC-dimension. Next, we describe a proper labeled compression scheme of size d for complexes of oriented matroids of VC-dimension d, generalizing the result of Moran and Warmuth for ample sets. Finally, we give a characterization, via excluded pc-minors and via forbidden isometric subgraphs, of the partial cubes isometrically embeddable into the grid \mathbb{Z}^2 and into the cylinder P_n \square C_{2k} for some n and k > 4.
Ben Salah, Fatma. "Modélisation et simulation à base de règles pour la simulation physique." Thesis, Poitiers, 2018. http://www.theses.fr/2018POIT2293.
The physical simulation of deformable objects is at the core of several computer graphics applications. In this context, we are interested in the creation of a framework that combines a topological model, namely generalized maps, with one or several mechanical models, for the physical animation of deformable meshed objects that can undergo topological modifications such as tearing or fracture. To obtain a general framework, we chose to rely on the graph manipulation and transformation rules provided by the JERBOA software. This environment gave us fast prototyping facilities for different mechanical models. It allowed us to precisely define how to store mechanical properties in the topological description of a mesh and to simulate its deformation in a topologically-based manner for interaction computation and force distribution. All mechanical properties are stored in the topological model without any external structure. This framework is general: it allows for the simulation of 2D or 3D objects with different types of meshes, including non-homogeneous ones. It also allowed for the simulation of several mechanical models, continuous or discrete, with various properties of homogeneity and isotropy. Furthermore, different methods to simulate topological modifications have been implemented in the framework; they include both the selection of a criterion to trigger topological modifications and a transformation type. Our approach also managed to reduce the number of updates of the mechanical model after tearing or fracture.
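As a very rough illustration of the simplest discrete mechanical model that such a framework can host, here is a hedged mass-spring integration step; the tiny mesh, the constants and the explicit Euler scheme are illustrative and not the thesis' rule-based, topologically grounded formulation.

```python
# Hedged sketch of one mass-spring time step on a 3-vertex mesh fragment.
# Mesh, constants and the integration scheme are placeholders for illustration.
import numpy as np

pos = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])      # vertex positions
vel = np.zeros_like(pos)                                    # vertex velocities
springs = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2 ** 0.5)]      # (i, j, rest length)
k, mass, dt, gravity = 50.0, 1.0, 0.01, np.array([0.0, -9.81])

def step(pos, vel):
    force = np.tile(gravity * mass, (len(pos), 1))          # gravity on every vertex
    for i, j, rest in springs:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest) * d / length                # Hooke's law along the edge
        force[i] += f
        force[j] -= f
    vel = vel + dt * force / mass                           # explicit Euler integration
    return pos + dt * vel, vel

pos, vel = step(pos, vel)
print(pos)
```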
Simonovsky, Martin. "Deep learning on attributed graphs." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1133/document.
The graph is a powerful concept for representing relations between pairs of entities. Data with an underlying graph structure can be found across many disciplines, describing chemical compounds, surfaces of three-dimensional models, social interactions, or knowledge bases, to name only a few. There is a natural desire to understand such data better. Deep learning (DL) has achieved significant breakthroughs in a variety of machine learning tasks in recent years, especially where data is structured on a grid, such as in text, speech, or image understanding. However, surprisingly little has been done to explore the applicability of DL to graph-structured data directly. The goal of this thesis is to investigate architectures for DL on graphs and to study how to transfer, adapt or generalize to this domain concepts that work well on sequential and image data. We concentrate on two important primitives: embedding graphs or their nodes into a continuous vector space representation (encoding) and, conversely, generating graphs from such vectors back (decoding). To that end, we make the following contributions. First, we introduce Edge-Conditioned Convolutions (ECC), a convolution-like operation on graphs performed in the spatial domain where filters are dynamically generated based on edge attributes. The method is used to encode graphs with arbitrary and varying structure. Second, we propose SuperPoint Graph, an intermediate point cloud representation with rich edge attributes encoding the contextual relationship between object parts. Based on this representation, ECC is employed to segment large-scale point clouds without major sacrifice in fine details. Third, we present GraphVAE, a graph generator able to decode graphs with a variable but upper-bounded number of nodes, making use of approximate graph matching to align the predictions of an autoencoder with its inputs. The method is applied to the task of molecule generation.
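A hedged sketch of an ECC-style layer follows: a small network turns each edge attribute into a filter matrix used to aggregate the neighbor's features. The shapes, the toy graph and the averaging choice are illustrative, not the exact formulation used in the thesis.

```python
# Hedged sketch of an edge-conditioned convolution (ECC)-style layer:
# a filter-generating network maps each edge attribute to a weight matrix.
import torch

in_dim, out_dim, edge_dim = 4, 8, 3
filter_net = torch.nn.Sequential(                       # F: edge attribute -> filter matrix
    torch.nn.Linear(edge_dim, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, out_dim * in_dim))

def ecc_layer(x, edge_index, edge_attr):
    """x: (N, in_dim) node features; edge_index: (E, 2) pairs (i, j); edge_attr: (E, edge_dim)."""
    out = torch.zeros(x.size(0), out_dim)
    deg = torch.zeros(x.size(0))
    theta = filter_net(edge_attr).view(-1, out_dim, in_dim)   # one filter per edge
    for k, (i, j) in enumerate(edge_index):
        out[i] += theta[k] @ x[j]                              # message from j to i
        deg[i] += 1
    return out / deg.clamp(min=1).unsqueeze(1)                 # average over neighbors

x = torch.randn(5, in_dim)
edge_index = torch.tensor([[0, 1], [1, 0], [2, 3], [3, 2], [4, 0]])
edge_attr = torch.randn(edge_index.size(0), edge_dim)
print(ecc_layer(x, edge_index, edge_attr).shape)  # torch.Size([5, 8])
```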
Cassagnes, Cyril. "Architecture autonome et distribuée d’adressage et de routage pour la flexibilité des communications dans l’internet." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14600/document.
Local routing schemes based on virtual coordinates taken from the hyperbolic plane have attracted considerable interest in recent years. However, these solutions have been applied to ad-hoc and sensor networks with a random topology and a limited number of nodes. On the other hand, some research has focused on the creation of network topology models based on hyperbolic geometric laws. In this case, it has been shown that these graphs have an Internet-like topology and that local hyperbolic routing achieves near-perfect efficiency. However, with these graphs, routing success is not guaranteed even if no failures happen. In this thesis, we aim at building a scalable system for creating overlay networks on top of the Internet that would provide a reliable addressing and routing service to its members in a dynamic environment. Next, we investigate how well P2PTV networks would support a growing number of users. We address this question by studying scalability and efficiency factors in a typical P2P-based live streaming network. Using the data provided by Zattoo, a production P2PTV network, we carry out simulations whose results show that there are still hurdles to overcome before P2P-based live streaming can rely solely on its users.
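To illustrate why greedy routing on hyperbolic virtual coordinates may fail even without failures, here is a hedged sketch in which each node forwards to the neighbor closest to the destination in Poincaré-disk distance; the topology and coordinates are made up and are not a proper hyperbolic embedding.

```python
# Hedged sketch of greedy routing with hyperbolic virtual coordinates:
# forward to the neighbor closest to the destination; report failure if stuck.
import math
import networkx as nx

def poincare_dist(p, q):
    (x1, y1), (x2, y2) = p, q
    num = (x1 - x2) ** 2 + (y1 - y2) ** 2
    den = (1 - x1**2 - y1**2) * (1 - x2**2 - y2**2)
    return math.acosh(1 + 2 * num / den)

G = nx.balanced_tree(r=2, h=3)
coords = {v: (0.8 * math.cos(k), 0.8 * math.sin(k))
          for k, v in enumerate(G)}     # placeholder coordinates, not a real embedding

def greedy_route(G, coords, src, dst):
    path = [src]
    while path[-1] != dst:
        cur = path[-1]
        nxt = min(G.neighbors(cur), key=lambda u: poincare_dist(coords[u], coords[dst]))
        if poincare_dist(coords[nxt], coords[dst]) >= poincare_dist(coords[cur], coords[dst]):
            return path, False          # greedy routing got stuck: success is not guaranteed
        path.append(nxt)
    return path, True

print(greedy_route(G, coords, 7, 12))
```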
Marlin, Nausica. "Communications structurées dans les réseaux." PhD thesis, Université de Nice Sophia-Antipolis, 2000. http://tel.archives-ouvertes.fr/tel-00505300.
Monnin, Pierre. "Matching and mining in knowledge graphs of the Web of data : Applications in pharmacogenomics." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0212.
In the Web of data, an increasing number of knowledge graphs are concurrently published, edited, and accessed by human and software agents. Their wide adoption makes the two tasks of matching and mining key. First, matching consists in identifying equivalent, more specific, or somewhat similar units within and across knowledge graphs. This task is crucial since concurrent publication and edition may result in coexisting and complementary knowledge graphs. However, it is challenging because of the inherent heterogeneity of knowledge graphs, e.g., in terms of granularity, vocabulary, and completeness. Motivated by an application in pharmacogenomics, we propose two approaches to match n-ary relationships represented in knowledge graphs: a symbolic rule-based approach and a numeric approach using graph embedding. We experiment on PGxLOD, a knowledge graph that we semi-automatically built by integrating pharmacogenomic relationships from three distinct sources of this domain. Second, mining consists in discovering new and useful knowledge units from knowledge graphs. Their increasing size and combinatorial nature entail scalability issues, which we address in the mining of path patterns. We also propose Concept Annotation, a refinement approach extending Formal Concept Analysis, a mathematical framework that groups entities based on their common attributes. Throughout all our work, we particularly focus on taking advantage of domain knowledge in the form of ontologies that can be associated with knowledge graphs. We show that, when considered, such domain knowledge alleviates heterogeneity and scalability issues in matching and mining approaches.
Giorgetti, Alain. "Combinatoire bijective et énumérative des cartes pointées sur une surface." PhD thesis, Université de Marne la Vallée, 1998. http://tel.archives-ouvertes.fr/tel-00724977.
Colin de Verdière, Eric. "Raccourcissement de courbes et décomposition de surfaces." Paris 7, 2003. http://www.theses.fr/2003PA077147.
Liu, Jixiong. "Semantic Annotations for Tabular Data Using Embeddings : Application to Datasets Indexing and Table Augmentation." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS529.
With the development of Open Data, a large number of data sources are made available to communities (including data scientists and data analysts). This data is a treasure for digital services, provided it is cleaned, unbiased, and combined with explicit and machine-processable semantics in order to foster exploitation. In particular, structured data sources (CSV, JSON, XML, etc.) are the raw material for many data science processes. However, this data comes from different domains with which consumers are not always familiar (a knowledge gap), which complicates its appropriation, even though this is a critical step in creating machine learning models. Semantic models (in particular, ontologies) make it possible to explicitly represent the implicit meaning of data by specifying the concepts and relationships present in the data. Providing semantic labels on datasets facilitates the understanding and reuse of data by supplying documentation that can be easily used by a non-expert. Moreover, semantic annotation opens the way to search modes that go beyond simple keywords and allow queries at a high conceptual level over both the content and the structure of the datasets, while overcoming the problems of syntactic heterogeneity encountered in tabular data. This thesis introduces a complete pipeline for the extraction, interpretation, and application of tables in the wild with the help of knowledge graphs. We first revisit the existing definition of tables from the perspective of table interpretation and develop systems for collecting and extracting tables from the Web and from local files. Three table interpretation systems are then proposed, based on either heuristic rules or graph representation models, addressing the challenges observed in the literature. Finally, we introduce and evaluate two table augmentation applications based on semantic annotations, namely data imputation and schema augmentation.
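As a hedged illustration of the naive baseline underlying semantic table annotation, the sketch below links the cells of one column to entities of an invented mini knowledge graph by label lookup and annotates the column with the majority type; real systems such as those in this thesis replace the lookup with heuristic or embedding-based matching.

```python
# Hedged sketch of naive table annotation: per-cell entity linking by label
# lookup, then column typing by majority vote. The mini "knowledge graph",
# its identifiers and the table column are invented for illustration.
from collections import Counter

kg = {  # label -> (entity id, type); toy stand-in for a real knowledge graph
    "Paris":  ("Q90",   "City"),
    "Berlin": ("Q64",   "City"),
    "Seine":  ("Q1471", "River"),
}

column = ["Paris", "Berlin", "Lyon"]            # one column of a web table
cell_annotations = {c: kg.get(c) for c in column}
types = Counter(ann[1] for ann in cell_annotations.values() if ann)

print(cell_annotations)        # per-cell entity links (None if unmatched)
print(types.most_common(1))    # column type annotation: [('City', 2)]
```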
Leroy, Vincent. "Distributing Social Applications." PhD thesis, INSA de Rennes, 2010. http://tel.archives-ouvertes.fr/tel-00545639.
Choplin, Sébastien. "Dimensionnement de réseaux virtuels de télécommunications." PhD thesis, Université de Nice Sophia-Antipolis, 2002. http://tel.archives-ouvertes.fr/tel-00505397.