Academic literature on the topic "Liage des données"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of relevant articles, books, theses, conference proceedings, and other scholarly sources on the topic "Liage des données".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Liage des données"
Ben Gharbia, Abdeljabbar. "Les complétives en arabe classique: entre parataxe et hypotaxe". Arabica 57, no. 5 (2010): 517–35. http://dx.doi.org/10.1163/157005810x519080.
Bawin-Legros, B., and M. Sommer. "Famille / Familles : Difficiles et mouvantes typologies". II. La famille ou les familles : objet complexe, insaisissable ?, no. 18 (December 15, 2015): 47–55. http://dx.doi.org/10.7202/1034265ar.
Fontenelle, Thierry. "Towards the Construction of a Collocational Database for Translation Students". Meta 39, no. 1 (September 30, 2002): 47–56. http://dx.doi.org/10.7202/002756ar.
Derivery, François. "Magritte : les données du problème". Ligeia N° 153-156, no. 1 (2017): 42. http://dx.doi.org/10.3917/lige.153.0042.
Sghaier, Tahar, Salah Garchi, and Thouraya Azizi. "Modélisation de la croissance et la production du liège en Tunisie". Bois & Forêts des Tropiques 346 (January 11, 2021): 3–20. http://dx.doi.org/10.19182/bft2020.346.a31805.
Kounellis, Jannis, and Giovanni Lista. "L’intensité dramatique comme donnée positive". Ligeia N° 69-72, no. 2 (2006): 6. http://dx.doi.org/10.3917/lige.069.0006.
Daniel, Sharon, and Karen O'Rourke. "[Mapping the Database] Trajectoires et perspectives des bases de données". Ligeia N° 45-48, no. 1 (2003): 105. http://dx.doi.org/10.3917/lige.045.0105.
Acharki, Siham, Mina Amharref, Pierre-Louis Frison, and Abdes Samed Bernoussi. "Cartographie des cultures dans le périmètre du Loukkos (Maroc) : apport de la télédétection radar et optique". Revue Française de Photogrammétrie et de Télédétection, no. 222 (November 26, 2020): 15–29. http://dx.doi.org/10.52638/rfpt.2020.481.
Baumann, Pierre. "Étant Donnés, la Réplique et Richard Baquié : morphogénèse de la reproductibilité". Ligeia N° 65-68, no. 1 (2006): 54. http://dx.doi.org/10.3917/lige.065.0054.
Vierset, Viviane. "Vers un modèle d’apprentissage réflexif. Recueil de traces d’apprentissage formulées dans les log books des stagiaires en médecine". Approches inductives 3, no. 1 (February 17, 2016): 157–88. http://dx.doi.org/10.7202/1035198ar.
Theses on the topic "Liage des données"
Lesnikova, Tatiana. "Liage de données RDF : évaluation d'approches interlingues". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM011/document.
The Semantic Web extends the Web by publishing structured and interlinked data using RDF. An RDF dataset is a graph whose resources are nodes labelled in natural languages. One of the key challenges of linked data is discovering links across RDF datasets: given two datasets, equivalent resources should be identified and connected by owl:sameAs links. This problem is particularly difficult when resources are described in different natural languages. This thesis investigates the effectiveness of linguistic resources for interlinking RDF datasets. For this purpose, we introduce a general framework in which each RDF resource is represented as a virtual document containing the text of its neighboring nodes; the context of a resource is the set of labels of those neighbors. Once virtual documents are created, they are projected into the same space in order to be compared, which can be achieved using machine translation or multilingual lexical resources. Similarity measures are then applied in that space, and similarity between its elements is taken as similarity between the corresponding RDF resources. We evaluated cross-lingual techniques within the proposed framework, experimentally comparing different methods for linking RDF data. In particular, two strategies are explored: applying machine translation or using references to multilingual resources. Overall, the evaluation shows the effectiveness of cross-lingual string-based approaches for linking RDF resources expressed in different languages. The methods were evaluated on resources in English, Chinese, French and German. The best performance (over 0.90 F-measure) was obtained by the machine translation approach, showing that the similarity-based method can be applied successfully to RDF resources independently of their type (named entities or thesauri concepts). The best experimental results, involving just a pair of languages, demonstrated the usefulness of such techniques for interlinking RDF resources cross-lingually.
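The virtual-document idea in this abstract can be sketched in a few lines of Python: each resource becomes a bag of tokens drawn from its own label and its neighbors' labels, and resources whose documents are similar enough become owl:sameAs candidates. The labels, the threshold, and the plain cosine measure here are illustrative assumptions, not the thesis's exact setup.

```python
from collections import Counter
import math

def virtual_document(labels):
    """Bag-of-words "virtual document" for an RDF resource, built from its
    label and the labels of its neighboring nodes (all assumed already
    projected into one language, e.g. by machine translation)."""
    tokens = []
    for label in labels:
        tokens.extend(label.lower().split())
    return Counter(tokens)

def cosine(d1, d2):
    """Cosine similarity between two token-frequency vectors."""
    dot = sum(d1[t] * d2[t] for t in set(d1) & set(d2))
    n1 = math.sqrt(sum(v * v for v in d1.values()))
    n2 = math.sqrt(sum(v * v for v in d2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

# Labels of two resources from different datasets, after translation
# into English (illustrative data):
doc_en = virtual_document(["Mont Blanc", "mountain", "Alps", "France Italy border"])
doc_zh = virtual_document(["Mont Blanc", "mountain", "Alps", "western Europe"])

# Above some (tunable) threshold, emit a candidate owl:sameAs link.
if cosine(doc_en, doc_zh) > 0.5:
    print("candidate owl:sameAs link")
```

In practice the projection step (translation or multilingual lexicons) dominates the quality of the result; the vector comparison itself is straightforward.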
Ben Ellefi, Mohamed. "La recommandation des jeux de données basée sur le profilage pour le liage des données RDF". Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT276/document.
With the emergence of the Web of Data, most notably Linked Open Data (LOD), an abundance of data has become available on the web. However, LOD datasets and their inherent subgraphs vary heavily with respect to their size, topic and domain coverage, their schemas, and the dynamicity of their data (respectively, schemas and metadata) over time. Identifying suitable datasets that meet specific criteria has thus become an increasingly important, yet challenging, task for supporting issues such as entity retrieval, semantic search, and data linking. Particularly with respect to interlinking, the current topology of the LOD cloud underlines the need for practical and efficient means to recommend suitable datasets: currently, only well-known reference graphs such as DBpedia (the most obvious target), YAGO or Freebase show a high number of in-links, while there exists a long tail of potentially suitable yet under-recognized datasets. This problem stems from the Semantic Web tradition of "finding candidate datasets to link to", in which data publishers themselves are expected to identify target datasets for interlinking. Since an understanding of the content of specific datasets is a crucial prerequisite for these tasks, we adopt in this dissertation the notion of a "dataset profile": a set of features that describe a dataset and allow the comparison of different datasets with regard to their represented characteristics. Our first research direction was to implement a collaborative-filtering-like dataset recommendation approach, which exploits both existing dataset topic profiles and traditional dataset connectivity measures in order to link LOD datasets into a global dataset-topic graph. This approach relies on the LOD graph in order to learn the connectivity behaviour between LOD datasets.
However, experiments have shown that the current topology of the LOD cloud is far from complete enough to be considered a ground truth, and consequently to serve as learning data. Facing these limits of the current LOD topology (as learning data), our research led us to break away from the topic-profile, learning-to-rank representation and to adopt a new approach to candidate dataset identification, in which the recommendation is based on the overlap between the intensional profiles of different datasets. By intensional profile, we mean the formal representation of a set of schema concept labels that best describe a dataset, potentially enriched by retrieving the corresponding textual descriptions. This representation provides richer contextual and semantic information and allows similarities between profiles to be computed efficiently and inexpensively. We identify schema overlap with the help of a semantico-frequential concept similarity measure and a ranking criterion based on tf*idf cosine similarity. The experiments, conducted over all available linked datasets on the LOD cloud, show that our method achieves an average precision of up to 53% for a recall of 100%. Furthermore, our method returns the mappings between schema concepts across datasets, a particularly useful input for the data linking step. In order to ensure high-quality, representative dataset schema profiles, we introduce Datavore, a tool oriented towards metadata designers that provides ranked lists of vocabulary terms to reuse in the data modeling process, together with additional metadata and cross-term relations. The tool relies on the Linked Open Vocabulary (LOV) ecosystem for acquiring vocabularies and metadata, and is made available to the community.
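The tf*idf cosine ranking over intensional profiles mentioned above can be illustrated with a toy sketch: each dataset is reduced to a list of schema-concept labels, labels are weighted by tf*idf over the profile corpus, and candidate target datasets are ranked by cosine overlap. The dataset names and labels are invented for illustration, and the simple token-level weighting stands in for the thesis's semantico-frequential measure.

```python
import math
from collections import Counter

# Intensional profiles: dataset -> schema-concept labels (toy data).
profiles = {
    "geo": ["City", "Country", "River"],
    "pub": ["Book", "Author", "City"],
    "bio": ["Protein", "Gene"],
}

def tfidf_vectors(profiles):
    """Weight each concept label by tf * idf across the profile corpus."""
    n = len(profiles)
    df = Counter()
    for labels in profiles.values():
        df.update(set(labels))
    return {
        name: {t: tf * math.log(n / df[t]) for t, tf in Counter(labels).items()}
        for name, labels in profiles.items()
    }

def cosine(u, v):
    """Cosine similarity between two sparse weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = tfidf_vectors(profiles)
# Rank candidate link targets for "geo" by profile overlap; the shared
# "City" concept makes "pub" a better candidate than "bio".
ranking = sorted(((cosine(vecs["geo"], vecs[d]), d) for d in ("pub", "bio")),
                 reverse=True)
```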
Abbas, Nacira. "Formal Concept Analysis for Discovering Link Keys in the Web of Data". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0202.
The Web of data is a global data space that can be seen as an additional layer interconnected with the Web of documents. Data interlinking is the task of discovering identity links across RDF (Resource Description Framework) datasets over the Web of data. We focus on a specific approach to data interlinking that relies on "link keys". A link key has the form of two sets of pairs of properties associated with a pair of classes. For example, the link key ({(designation, title)}, {(designation, title), (creator, author)}, (Book, Novel)) states that whenever an instance "a" of the class "Book" and an instance "b" of the class "Novel" share at least one value for the properties "creator" and "author", and "a" and "b" have the same values for the properties "designation" and "title", then "a" and "b" denote the same entity, and (a, owl:sameAs, b) is an identity link over the two datasets. However, link keys are not always provided, and various algorithms have been developed to discover them automatically. These algorithms first look for "link key candidates"; the quality of these candidates is then evaluated using appropriate measures, and valid link keys are selected accordingly. Formal Concept Analysis (FCA) has been closely associated with the discovery of link key candidates, leading to the proposal of an FCA-based algorithm for this purpose. Nevertheless, existing algorithms for link key discovery have certain limitations. First, they do not explicitly specify the associated pairs of classes for the discovered link key candidates, which can lead to inaccurate evaluations. The selection strategies employed by these algorithms may also produce less accurate results. Furthermore, redundancy is observed among the sets of discovered candidates, which complicates their visualization, evaluation, and analysis. To address these limitations, we propose to extend the existing algorithms in several respects.
First, we introduce a method based on Pattern Structures, a generalization of FCA that can handle non-binary data. This approach allows the associated pair of classes to be specified explicitly for each link key candidate. Second, building on the proposed Pattern Structure, we present two methods for link key selection: the first is guided by the pairs of classes associated with the link keys, while the second uses the lattice generated by the Pattern Structure. Both improve selection compared to the existing strategy. Finally, to address redundancy, we introduce two methods. The first relies on Partition Pattern Structures, which identify and merge link key candidates that generate the same partitions. The second is based on hierarchical clustering, which groups candidates producing similar link sets into clusters and selects a representative for each cluster. This approach effectively minimizes redundancy among the link key candidates.
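The Book/Novel link key from the abstract above can be made concrete in a short sketch: an equality condition on one set of property pairs and a shared-value condition on another. The instance data is invented for illustration, and for brevity the "share at least one value" set below lists only the (creator, author) pair.

```python
def satisfies_link_key(a, b, eq_pairs, in_pairs):
    """Test a link key on two instance descriptions, each given as a
    dict mapping a property to its set of values.
    eq_pairs: property pairs whose full value sets must coincide;
    in_pairs: property pairs that must share at least one value."""
    for p, q in eq_pairs:
        if a.get(p, set()) != b.get(q, set()):
            return False
    for p, q in in_pairs:
        if not a.get(p, set()) & b.get(q, set()):
            return False
    return True

# A Book instance and a Novel instance (illustrative values):
book = {"designation": {"Madame Bovary"},
        "creator": {"G. Flaubert", "Gustave Flaubert"}}
novel = {"title": {"Madame Bovary"},
         "author": {"Gustave Flaubert"}}

if satisfies_link_key(book, novel,
                      eq_pairs=[("designation", "title")],
                      in_pairs=[("creator", "author")]):
    print("(book, owl:sameAs, novel)")
```

Candidate discovery then amounts to searching for the property-pair sets for which such a test links the right instances, which is where the FCA-based machinery of the thesis comes in.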
Symeonidou, Danai. "Automatic key discovery for Data Linking". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112265/document.
In recent years, the Web of Data has grown significantly and now contains a huge number of RDF triples. Integrating data described in different RDF datasets and creating semantic links among them has become one of the most important goals of RDF applications. These links express semantic correspondences between ontology entities or data. Among the different kinds of semantic links that can be established, identity links express that different resources refer to the same real-world entity. Comparing the number of resources published on the Web with the number of identity links shows that the goal of building a Web of Data is still not accomplished. Several data linking approaches infer identity links using keys. Nevertheless, in most datasets published on the Web the keys are not available, and it can be difficult, even for an expert, to declare them. The aim of this thesis is to study the problem of automatic key discovery in RDF data and to propose new efficient approaches to tackle it. Data published on the Web are usually created automatically and may therefore contain erroneous information or duplicates, or be incomplete. We therefore focus on developing key discovery approaches that can handle datasets with numerous, incomplete or erroneous pieces of information. Our objective is to discover as many keys as possible, even ones that are valid only in subparts of the data. We first introduce KD2R, an approach that allows the automatic discovery of composite keys in RDF datasets that may conform to different schemas. KD2R is able to treat datasets that may be incomplete and for which the Unique Name Assumption holds. To deal with the incompleteness of the data, KD2R proposes two heuristics that offer different interpretations of the absence of data. KD2R uses pruning techniques to reduce the search space. However, this approach is overwhelmed by the huge amount of data found on the Web.
Thus, we present our second approach, SAKey, which scales to very large datasets by using effective filtering and pruning techniques. Moreover, SAKey is capable of discovering keys in datasets where erroneous data or duplicates may exist. More precisely, the notion of almost keys is proposed to describe sets of properties that fail to be keys because of a few exceptions.
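The notion of almost keys tolerating a few exceptions can be sketched with a naive enumerator: a property set is accepted if at most a given number of instances collide on its values, and only minimal such sets are kept. The instance data and the exact exception count are illustrative simplifications of SAKey's formal definitions, and the exhaustive search below omits the filtering and pruning that make the real algorithm scale.

```python
from collections import Counter
from itertools import combinations

def is_almost_key(instances, props, n_exceptions=1):
    """props is an "almost key" if at most n_exceptions instances share
    all their values for props with some other instance."""
    sigs = Counter(tuple(inst.get(p) for p in props) for inst in instances)
    exceptions = sum(c for c in sigs.values() if c > 1)
    return exceptions <= n_exceptions

def discover_almost_keys(instances, properties, n_exceptions=1, max_size=2):
    """Naive enumeration of minimal almost keys up to max_size properties."""
    found = []
    for size in range(1, max_size + 1):
        for props in combinations(properties, size):
            if any(set(key) <= set(props) for key in found):
                continue  # a subset is already a key: not minimal
            if is_almost_key(instances, props, n_exceptions):
                found.append(props)
    return found

# Toy instances with a duplicated name value:
people = [
    {"name": "Ann", "city": "Lyon", "email": "ann@x.org"},
    {"name": "Ann", "city": "Paris", "email": "ann@y.org"},
    {"name": "Bob", "city": "Lyon", "email": "bob@x.org"},
]
# With no exceptions allowed, "email" alone and {"name", "city"} together
# come out as the minimal keys for this toy data.
keys = discover_almost_keys(people, ["name", "city", "email"], n_exceptions=0)
```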
Fan, Zhengjie. "Apprentissage de Motifs Concis pour le Liage de Donnees RDF". PhD thesis, Université de Grenoble, 2014. http://tel.archives-ouvertes.fr/tel-00986104.
Books on the topic "Liage des données"
Parret, Herman. La voix et son temps: Éléments pour une esthétique de la communication : sept leçons données dans le cadre de la Chaire Francqui au titre belge 1997-1998 à l'Université de Liège. Liège: Editions du C.I.L., Université de Liège, 1998.
Book chapters on the topic "Liage des données"
Quinaux, Nicole, Martine Evraud, and Françoise Noël. "Formation a L’Utilisation Des Bases de Donnees Sur CD-ROM: Experience de La Bibliotheque de La Faculte de Medecine de L’Universite de Liege". In Information Transfer: New Age — New Ways, 145–47. Dordrecht: Springer Netherlands, 1993. http://dx.doi.org/10.1007/978-94-011-1668-8_33.
Zribi-Hertz, Anne. "Chapitre 7. La théorie standard du liage face aux données de l’anglais". In L’anaphore et les pronoms, 129–51. Presses universitaires du Septentrion, 1996. http://dx.doi.org/10.4000/books.septentrion.116180.
Conference papers on the topic "Liage des données"
Dubois, Marc. "Le Corbusier et la Belgique / Son Héritage". In LC2015 - Le Corbusier, 50 years later. Valencia: Universitat Politècnica València, 2015. http://dx.doi.org/10.4995/lc2015.2015.896.