Theses on the topic "Ontology Alignment"
Below are the top 22 dissertations (master's and doctoral theses) on the research topic "Ontology Alignment".
Ziani, Mina. "Conception d'une ontologie hybride à partir d'ontologies métier évolutives : intégration et alignement d'ontologies". Thesis, Lyon 3, 2012. http://www.theses.fr/2012LYO30081.
This thesis concerns the scope of knowledge management using ontological models. To represent domain knowledge, we design a hybrid ontology on two levels: at the local level, each group of experts designs its own ontology; at the global level, a consensual ontology containing all the shared knowledge is automatically created. We design a computer-aided system to help experts in the process of mapping creation. It allows experts to choose similarity measures according to the ontology characteristics, to reuse the calculated similarities, and to verify the consistency of the created mappings. In addition, local ontologies can be updated, which entails modifications to the global ontology and to the created mappings. An approach relevant to our domain was developed. In particular, ontology versioning is used to keep a record of all the modifications that occur in the ontologies; it makes it possible to return at any time to a previous version of the hybrid ontology. The application domain is geotechnics, which gathers various business experts. A prototype is in progress and does not yet capture ontology evolution.
Abbas, Muhammad Aun. "A Unified Approach for Dealing with Ontology Mappings and their Defects". Thesis, Lorient, 2016. http://www.theses.fr/2016LORIS423/document.
An ontology mapping is a set of correspondences. Each correspondence relates artifacts, such as concepts and properties, of one ontology to artifacts of another ontology. In the last few years, a lot of attention has been paid to establishing mappings between source ontologies. Ontology mappings are widely and effectively used for interoperability and integration tasks (data transformation, query answering, or web-service composition, to name a few) and in the creation of new ontologies. On the one side, checking the (logical) correctness of ontology mappings has become a fundamental prerequisite of their use. On the other side, given two ontologies, several mappings between them can be obtained by using different ontology matching methods or simply stated manually. Using several mappings between two ontologies within a single application, or synthesizing one mapping that takes advantage of two original mappings, may cause errors in the application or in the synthesized mapping, because the original mappings may be contradictory (conflicting). In both situations, correctness is usually formalized and verified in the context of fully formalized ontologies (e.g. in logics), even if some "weak" notions of correctness have been proposed for ontologies that are informally represented or represented in formalisms preventing a formalization of correctness (such as UML). Verifying correctness is usually performed within one single formalism, requiring on the one side that ontologies be represented in this unique formalism and, on the other side, that a formal representation of the mapping be provided, equipped with notions related to correctness (such as consistency). In practice, there exist several heterogeneous formalisms for expressing ontologies, ranging from informal (text, UML and others) to formal (logical and algebraic). This implies that, to apply existing approaches, heterogeneous ontologies should be translated (or just transformed, when the original ontology is informally represented or when a full, equivalence-preserving translation is not possible) into one common formalism, mappings need to be reformulated each time, and only then can correctness be established. This is possible, but it may lead to a mapping being correct under one translation and incorrect under another. Indeed, correctness (e.g. consistency) depends on the underlying formalism in which ontologies and mappings are expressed. Different interpretations of correctness coexist within formal and even informal approaches, raising the question of what correctness actually is. In this dissertation, correctness is reformulated in the context of heterogeneous ontologies by using the theory of Galois connections. Specifically, ontologies are represented as lattices and mappings as functions between those lattices. Lattices are natural structures for directly representing ontologies, without changing the original formalisms in which the ontologies are expressed. As a consequence, the (unified) notion of correctness is reformulated using the Galois connection condition, leading to the new notions of compatible and incompatible mappings. It is formally shown that the new notion covers the reviewed correctness notions provided in distinct state-of-the-art formalisms and, at the same time, naturally covers heterogeneous ontologies. The usage of the proposed unified approach is demonstrated by applying it to upper ontology mappings.
The notion of compatible and incompatible ontology mappings is also applied to domain ontologies to highlight that incompatible ontology mappings give incorrect results when used for ontology merging.
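To make the compatibility criterion above more concrete, here is one standard (monotone) way to write the Galois connection condition for a pair of ontologies represented as lattices; the notation is ours and is only a sketch of the idea, not the dissertation's exact definition.

```latex
% Ontologies as lattices $(L_1,\leq_1)$ and $(L_2,\leq_2)$; a bidirectional mapping
% as a pair of monotone functions $f\colon L_1 \to L_2$ and $g\colon L_2 \to L_1$.
% The pair $(f,g)$ forms a Galois connection -- the mapping is "compatible" -- iff
\[
  \forall x \in L_1,\ \forall y \in L_2:\qquad
  f(x) \leq_2 y \;\Longleftrightarrow\; x \leq_1 g(y).
\]
% If no such adjoint pair exists, the mapping is "incompatible": merging or
% composing along it can contradict the orders $\leq_1$ and $\leq_2$.
```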
Menad, Safaa. "Enrichissement et alignement sémantique d'ontologies biomédicales par modèles de langue". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMR104.
The first part of this thesis addresses the design of siamese neural models trained for semantic similarity between biomedical texts and their application to NLP tasks on biomedical documents. The training of these models was performed by embedding the titles and abstracts from the PubMed corpus along with the MeSH thesaurus into a common space. In the second part, we use these models to align and enrich the terminologies of UMLS (Unified Medical Language System) and automate the integration of new relationships between similar concepts, particularly from diseases (DOID), drugs (DRON), and symptoms. These enriched relationships enhance the usability of these ontologies, thereby facilitating their application in various clinical and scientific domains. Additionally, we propose validation approaches using resources such as LLMs, OpenFDA, the UMLS Metathesaurus, and the UMLS semantic network, supplemented by manual validation from domain experts.
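As a rough illustration of the kind of embedding-based alignment described above, the sketch below scores candidate concept pairs by cosine similarity in a shared embedding space. It uses an off-the-shelf sentence-transformers model as a stand-in for the siamese biomedical models trained in the thesis; the model name, labels and threshold are assumptions, not the actual setup.

```python
# Sketch: relate concept labels from two terminologies by cosine similarity in a
# shared embedding space (stand-in for the thesis's siamese biomedical models).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder general-purpose model

disease_labels = ["myocardial infarction", "type 2 diabetes mellitus"]
symptom_labels = ["chest pain", "increased thirst", "skin rash"]

d = model.encode(disease_labels, normalize_embeddings=True)
s = model.encode(symptom_labels, normalize_embeddings=True)

scores = d @ s.T                                  # cosine similarities
for i, disease in enumerate(disease_labels):
    j = int(np.argmax(scores[i]))
    if scores[i, j] > 0.4:                        # assumed acceptance threshold
        print(f"{disease} --candidate-relation--> {symptom_labels[j]} ({scores[i, j]:.2f})")
```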
Song, Fuqi. "Contribution à l'interopérabilité des entreprises par alignement d'ontologies". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00909637.
Fan, Zhengjie. "Concise Pattern Learning for RDF Data Sets Interlinking". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM013/document.
There are many data sets published on the web with Semantic Web technology. These data sets usually contain analogous data that represent similar resources in the world. If the data sets are linked together by correctly identifying the similar instances, users can conveniently query data through a uniform interface, as if they were querying a single database. However, finding correct links is very challenging because web data sources usually have heterogeneous ontologies maintained by different organizations. Many solutions have been proposed for this problem. (1) One straightforward idea is to compare the attribute values of instances to identify links, yet it is impossible to compare all possible pairs of attribute values. (2) Another common strategy is to compare instances with correspondences found by instance-based ontology matching, which can generate attribute correspondences based on overlapping ranges between two attributes; however, this easily produces incomparable attribute correspondences or misses comparable ones. (3) Many existing solutions leverage Genetic Programming to construct interlinking patterns for comparing instances, but the running times of these methods are usually long. In this thesis, an interlinking method is proposed that links instances across different data sets, based on both statistical learning and symbolic learning. On the one hand, the method discovers potential comparable attribute correspondences of each class correspondence via a K-medoids clustering algorithm applied to instance value statistics. We adopt K-medoids because of its efficiency and its tolerance of irregular and even incorrect data. K-medoids groups the attributes of each class according to their statistical value features; groups from different classes are mapped when they have similar statistical value features, which determines potential comparable attribute correspondences. The clustering procedure effectively narrows the range of candidate attribute correspondences. On the other hand, our solution also leverages a symbolic learning method called Version Space. Version Space is an iterative learning model that searches for the interlinking pattern from two directions. Our design can solve interlinking tasks for which no single compatible conjunctive interlinking pattern covers all assessed correct links in a concise format. The interlinking solution is evaluated with large-scale real-world data from IM@OAEI and CKAN. Experiments confirm that, with only 1% of sample links, the solution already reaches a high accuracy (up to 0.94-0.99 F-measure) and converges quickly, improving on other state-of-the-art approaches by nearly 10 percentage points of F-measure.
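To make the clustering step more tangible, here is a small self-contained sketch: each attribute is summarized by simple value statistics and a plain K-medoids loop groups the attributes of a class, so that groups with similar statistics across two classes can be paired as candidate comparable correspondences. The feature choice, data and distance are illustrative assumptions, not the thesis's configuration.

```python
# Sketch: group attributes by value statistics with a tiny K-medoids loop,
# as a stand-in for the candidate-correspondence step described above.
import numpy as np

def stats(values):
    """Illustrative statistical profile of one attribute's values."""
    lengths = [len(str(v)) for v in values]
    numeric = sum(str(v).replace(".", "", 1).isdigit() for v in values)
    return np.array([np.mean(lengths), np.std(lengths), numeric / len(values)])

def k_medoids(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(points), size=k, replace=False)
    for _ in range(iters):
        dist = np.linalg.norm(points[:, None] - points[medoids][None], axis=2)
        labels = dist.argmin(axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:
                intra = np.abs(points[members][:, None] - points[members][None]).sum(axis=(1, 2))
                new_medoids[c] = members[intra.argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    dist = np.linalg.norm(points[:, None] - points[medoids][None], axis=2)
    return dist.argmin(axis=1)

attributes = {                       # toy attributes of one class with sample values
    "name":      ["Alice", "Bob", "Charlotte"],
    "label":     ["Smith", "Jones", "Brown"],
    "birthYear": [1980, 1975, 1969],
    "age":       [44, 49, 55],
}
X = np.array([stats(v) for v in attributes.values()])
groups = k_medoids(X, k=2)
print(dict(zip(attributes, groups)))  # e.g. string-like vs numeric attribute groups
```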
Annane, Amina. "Using Background Knowledge to Enhance Biomedical Ontology Matching". Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS032/document.
Life sciences produce a huge amount of data (e.g., clinical trials, scientific articles), so that integrating and analyzing all the datasets related to a given research question, such as the correlation between phenotypes and genotypes, is a key element for knowledge discovery. The life sciences community adopted Semantic Web technologies to achieve data integration and interoperability, especially ontologies, which are the key technology for representing and sharing the increasing amount of data on the Web. Indeed, ontologies provide a common domain vocabulary for humans and formal entity definitions for machines. A large number of biomedical ontologies and terminologies have been developed to represent and annotate various datasets. However, datasets represented with different overlapping ontologies are not interoperable. It is therefore crucial to establish correspondences between the ontologies used; this is an active area of research known as ontology matching. Classical ontology matching methods usually exploit the lexical and structural content of the ontologies to align. These methods are less effective when the ontologies to align are lexically heterogeneous, i.e., when equivalent concepts are described with different labels. To overcome this issue, the ontology matching community has turned to external knowledge resources as a semantic bridge between the ontologies to align. This approach raises several new issues, mainly: (1) the selection of these background resources, and (2) the exploitation of the selected resources to enhance the matching results. Several works have dealt with these issues jointly or separately. In our thesis, we conducted a systematic review and a historical comparative evaluation of state-of-the-art approaches. Ontologies other than the ones to align are the most commonly used background knowledge resources. Related works often select a set of complete ontologies as background knowledge, even though only fragments of the selected ontologies are actually effective for discovering new mappings. We propose a novel background-knowledge-based ontology matching approach that selects and builds a knowledge resource with just the right concepts chosen from a set of ontologies. The conducted experiments showed that our selection approach improves efficiency without loss of effectiveness. Exploiting background knowledge resources in ontology matching is a double-edged sword: while it may increase recall (i.e., retrieve more correct mappings), it may lower precision (i.e., produce more incorrect mappings). We propose two methods to select the most relevant mappings from the candidates: (1) one based on a set of rules and (2) one based on supervised machine learning. We experiment with and evaluate our approach in the biomedical domain, thanks to the profusion of knowledge resources in biomedicine (ontologies, terminologies and existing alignments). We evaluated our approach with extensive experiments on two Ontology Alignment Evaluation Initiative (OAEI) benchmarks. Our results confirm the effectiveness and efficiency of our approach, which outperforms or competes with state-of-the-art matchers exploiting background knowledge resources.
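The "just the right concepts" selection can be pictured with a toy example: keep only the background concepts that anchor lexically to the source or target ontology, then use them as bridges between the two. The exact-match anchoring and the data below are simplifying assumptions.

```python
# Sketch: select from background resources only the concepts whose labels anchor
# to the source or target ontology, instead of loading the resources in full.
source = {"heart attack", "kidney stone"}
target = {"myocardial infarction", "renal calculus"}

background = {                      # toy background resources: concept -> synonyms
    "MI":  {"myocardial infarction", "heart attack"},
    "RC":  {"renal calculus", "kidney stone"},
    "FLU": {"influenza", "flu"},
}

anchored = {c: syns for c, syns in background.items()
            if syns & (source | target)}
print(anchored)                     # FLU is dropped: it anchors to neither ontology

# Derived candidate mappings: source and target labels bridged by one BK concept.
for c, syns in anchored.items():
    for s in syns & source:
        for t in syns & target:
            print(f"{s}  <->  {t}   (via background concept {c})")
```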
Tounsi, Dhouib Molka. "Ingénierie des connaissances dans le domaine du sourcing pour la recommandation de prestataires". Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4024.
This CIFRE doctoral thesis is part of a collaborative research project between the I3S laboratory of Université Côte d'Azur and the Silex company, and addresses the field of recommendation systems. Silex is a start-up that develops a Software-as-a-Service sourcing tool that allows companies to describe their professional activities, their offers and/or the services they are looking for in natural language (currently French). In this context, the objective of this thesis is to propose a decision support system that exploits the semantic knowledge extracted from the textual descriptions of service requests and providers, in order to recommend relevant providers for a service request. The contributions of this thesis are the following. First, we proposed a vocabulary for the sourcing field, built by reusing and integrating existing vocabularies, in order to semantically annotate the textual descriptions of providers and service requests. Second, we proposed an automatic alignment method to establish correspondences between the concepts of the considered vocabularies; this approach is based on rules exploiting an embedding space and measures over groups of labels to discover relationships between concepts. Third, we proposed an algorithm for extracting named entities from the textual descriptions of service requests and providers, and an algorithm for the semantic annotation of these descriptions, based on linking the extracted entities with the concepts of the defined vocabulary. Fourth, we proposed a provider recommendation algorithm that exploits the extracted knowledge. Finally, we studied the contribution of ontological knowledge to improving our decision support system for the sourcing domain, in order to recommend relevant providers for a service request.
Hamdi, Fayçal. "Améliorer l'interopérabilité sémantique : applicabilité et utilité de l'alignement d'ontologies". Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00662523.
Ngo, Duy Hoa. "Enhancing Ontology Matching by Using Machine Learning, Graph Matching and Information Retrieval Techniques". Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20096/document.
In recent years, ontologies have attracted a lot of attention in the Computer Science community, especially in the Semantic Web field. They serve as explicit conceptual knowledge models and provide the semantic vocabularies that make domain knowledge available for exchange and interpretation among information systems. However, due to the decentralized nature of the Semantic Web, ontologies are highly heterogeneous. This heterogeneity mainly causes variation in meaning or ambiguity in entity interpretation and, consequently, prevents domain knowledge sharing. Therefore, ontology matching, which discovers correspondences between semantically related entities of ontologies, becomes a crucial task in Semantic Web applications. Several challenges for the field of ontology matching have been outlined in recent research. Among them, the selection of appropriate similarity measures and the configuration tuning of their combination are known as fundamental issues the community should deal with. In addition, verifying the semantic coherence of the discovered alignment is also a crucial task, and the difficulty of the problem grows with the size of the ontologies. To deal with these challenges, in this thesis we propose a novel matching approach that combines techniques from machine learning, graph matching and information retrieval in order to enhance ontology matching quality. We use information retrieval techniques to design new, effective similarity measures for comparing labels and context profiles of entities at the element level. We also apply a graph matching method, named similarity propagation, at the structure level, which effectively discovers mappings by exploring structural information of entities in the input ontologies. To combine similarity measures at the element level, we transform the ontology matching task into a classification task in machine learning. Besides, we propose a dynamic weighted sum method to automatically combine the matching results obtained from the element-level and structure-level matchers. In order to remove inconsistent mappings, we design a new fast semantic filtering method. Finally, to deal with large-scale ontology matching tasks, we propose two candidate selection methods to reduce the computational space. All these contributions have been implemented in a prototype named YAM++. To evaluate our approach, we use various tracks from the OAEI campaign, namely Benchmark, Conference, Multifarm, Anatomy, Library and Large Biomedical Ontologies. The experimental results show that the proposed matching methods work effectively. Moreover, in comparison with other participants in OAEI campaigns, YAM++ proved to be highly competitive and achieved a high ranking.
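The "dynamic weighted sum" of element-level and structure-level scores can be pictured in a few lines. The weighting rule below (weights proportional to how decisive each matcher is on the pair) is only an illustrative guess at what a dynamic combination might look like, not YAM++'s actual formula.

```python
# Sketch: combine element-level and structure-level matcher scores with
# pair-specific (dynamic) weights. The weighting scheme is an assumption.
def combine(element_score: float, structure_score: float) -> float:
    scores = [element_score, structure_score]
    # Confidence of each matcher on this pair: distance from the "undecided" value 0.5.
    confidences = [abs(s - 0.5) + 1e-9 for s in scores]
    weights = [c / sum(confidences) for c in confidences]
    return sum(w * s for w, s in zip(weights, scores))

candidates = {("Person", "Human"): (0.92, 0.55),
              ("Paper", "Review"): (0.48, 0.12)}
for pair, (e, s) in candidates.items():
    print(pair, round(combine(e, s), 3))
```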
Laadhar, Amir. "Local matching learning of large scale biomedical ontologies". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30126.
Although a considerable body of research work has addressed the problem of ontology matching, few studies have tackled the large ontologies used in the biomedical domain. We introduce a fully automated local matching learning approach that breaks down a large ontology matching task into a set of independent local sub-matching tasks. This approach integrates a novel partitioning algorithm as well as a set of matching learning techniques. The partitioning method is based on hierarchical clustering and does not generate isolated partitions. The matching learning approach employs different techniques: (i) local matching tasks are independently and automatically aligned using their local classifiers, which are based on local training sets built from element-level and structure-level features, (ii) resampling techniques are used to balance each local training set, and (iii) feature selection techniques are used to automatically select the appropriate tuning parameters for each local matching context. Our local matching learning approach generates a set of combined alignments from each local matching task, and experiments show that a multiple local classifier approach outperforms conventional, state-of-the-art approaches, which use a single classifier for the whole ontology matching task. In addition, focusing on context-aware local training sets based on local feature selection and resampling techniques significantly enhances the obtained results.
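A toy rendition of the "one classifier per local sub-matching task" idea, assuming scikit-learn and invented element-level/structure-level features; the partitioning, resampling and feature-selection steps described above are omitted for brevity.

```python
# Sketch: train one classifier per local sub-matching task instead of a single
# global one. Features and data are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each local partition has its own training set of candidate pairs:
# features = [label_similarity, shared_parents, shared_children], label = match?
local_training = {
    "anatomy_partition": ([[0.9, 1, 2], [0.2, 0, 0], [0.8, 2, 1]], [1, 0, 1]),
    "disease_partition": ([[0.7, 0, 1], [0.1, 0, 0], [0.95, 1, 3]], [1, 0, 1]),
}

local_classifiers = {}
for name, (X, y) in local_training.items():
    local_classifiers[name] = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# A new candidate pair is scored only by the classifier of its own partition.
print(local_classifiers["anatomy_partition"].predict([[0.85, 1, 1]]))
```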
Idoudi, Rihab. "Fouille de connaissances en diagnostic mammographique par ontologie et règles d'association". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0005/document.
Facing the significant complexity of the mammography area and the massive changes in its data, the need to contextualize knowledge in a formal and comprehensive model is becoming increasingly urgent for experts. It is within this framework that our thesis work focuses on unifying different sources of knowledge related to the domain within a target ontological model. On the one hand, several mammographic ontological models exist today, each resource having a distinct perspective and area of interest. On the other hand, the deployment of mammography acquisition systems makes available a large volume of information providing decisive knowledge. However, these fragments of knowledge are not interoperable, and knowledge management methodologies are needed to make them comprehensive. In this context, we are interested in enriching an existing domain ontology through the extraction and management of new knowledge (concepts and relations) derived from two sources: ontological resources and databases holding past experiences. Our approach integrates two knowledge mining levels. The first module concerns the conceptual enrichment of the target mammographic ontology with new concepts extracted from source ontologies. This step includes three main stages. First, the pre-alignment stage consists of building, for each input ontology, a hierarchy of fuzzy conceptual clusters; the goal is to reduce the alignment task from two full ontologies to two reduced conceptual clusters. Second, the two hierarchical structures of the source and target ontologies are aligned. Third, the validated alignments are used to enrich the reference ontology with new concepts in order to increase the granularity of the knowledge base. The second level of management concerns the relational enrichment of the target mammographic ontology with novel relations deduced from a domain database, which contains medical records of mammograms collected from radiology services. This part includes four main steps: (i) pre-processing of the textual data, (ii) application of data mining (knowledge extraction) techniques to extract new associations from past experience in the form of rules, (iii) post-processing of the generated rules, which filters and classifies them in order to facilitate their interpretation and validation by experts, and (iv) enrichment of the ontology with new associations between concepts. This approach has been implemented and validated on real mammographic ontologies and patient data provided by the Taher Sfar and Ben Arous hospitals. The research work presented in this manuscript relates to using and merging knowledge from heterogeneous sources in order to improve the knowledge management process.
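The relational-enrichment step rests on classic support/confidence association rules. The snippet below mines such rules from a toy set of mammography records to suggest candidate relations between concepts; the thresholds and data are illustrative assumptions.

```python
# Sketch: mine concept co-occurrence rules (support/confidence) from toy
# mammography records, as candidate relations for ontology enrichment.
from itertools import combinations

records = [
    {"mass", "spiculated_margin", "malignant"},
    {"mass", "spiculated_margin", "malignant"},
    {"calcification", "benign"},
    {"mass", "circumscribed_margin", "benign"},
]
MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.8

def support(itemset):
    return sum(itemset <= r for r in records) / len(records)

items = sorted(set().union(*records))
for a, b in combinations(items, 2):
    for lhs, rhs in [({a}, b), ({b}, a)]:
        sup = support(lhs | {rhs})
        if sup >= MIN_SUPPORT and support(lhs) > 0:
            conf = sup / support(lhs)
            if conf >= MIN_CONFIDENCE:
                print(f"{set(lhs)} -> {rhs}   support={sup:.2f} confidence={conf:.2f}")
```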
Inants, Armen. "Qualitative calculi with heterogeneous universes". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAMO10/document.
Qualitative representation and reasoning operate with non-numerical relations holding between objects of some universe. The general formalisms developed in this field are based on various kinds of algebras of relations, such as Tarskian relation algebras. All these formalisms, called qualitative calculi, share an implicit assumption that the universe is homogeneous, i.e., consists of objects of the same kind. However, objects of different kinds may also entertain relations. The state of the art in qualitative reasoning does not offer an operation for combining qualitative calculi over different kinds of objects into a single calculus. Many applications discriminate between different kinds of objects. For example, some spatial models discriminate between regions, lines and points, and different relations are used for each kind of object. In ontology matching, qualitative calculi have been shown useful for expressing alignments between only one kind of entity, such as concepts or individuals; relations between individuals and concepts, which impose additional constraints, are not exploited. This dissertation introduces modularity in qualitative calculi and provides a methodology for modeling qualitative calculi with heterogeneous universes. Our central contribution is a framework based on a special class of partition schemes which we call modular. For a qualitative calculus generated by a modular partition scheme, we define a structure that associates each relation symbol with an abstract domain and codomain from a Boolean lattice of sorts. A module of such a qualitative calculus is a sub-calculus restricted to a given sort, obtained through an operation called relativization to a sort. Of greater practical interest is the opposite operation, which allows several qualitative calculi to be combined into a single calculus. We define an operation called combination modulo glue, which combines two or more qualitative calculi over different universes, provided some glue relations between these universes. The framework is general enough to support most known qualitative spatio-temporal calculi.
Elbyed, Abdeltif. "ROMIE, une approche d'alignement d'ontologies à base d'instances". Phd thesis, Institut National des Télécommunications, 2009. http://tel.archives-ouvertes.fr/tel-00541874.
Koutraki, Maria. "Approches vers des modèles unifiés pour l'intégration de bases de connaissances". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLV082/document.
My thesis aims at the automatic integration of new Web services into a knowledge base. For each method of a Web service, a view is automatically computed. The view is represented as a query over the knowledge base. Our algorithm also computes an XSLT transformation function, associated with the method, that is able to transform call results into a fragment conforming to the schema of the knowledge base. The novelty of our approach is that the alignment is based only on instances: it does not depend on the names of concepts or on constraints defined by the schema. This makes it particularly relevant for the Web services currently available on the Web, because these services use the REST protocol, which does not provide for the publication of schemas. In addition, JSON seems to be establishing itself as the standard representation of call results.
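The per-method view plus the transformation of JSON call results into knowledge-base facts can be sketched as follows. The web-service payload, predicate names and schema are invented, and the rewriting is done directly in Python rather than with the generated XSLT.

```python
# Sketch: a Web-service method seen as a view over the KB, plus a function that
# turns its JSON call result into triples following the KB schema. Names invented.
import json

# View associated with getAlbumsByArtist(x): each result y satisfies createdAlbum(x, y)
def transform_get_albums(artist: str, json_payload: str):
    result = json.loads(json_payload)
    triples = []
    for album in result.get("albums", []):
        triples.append((artist, "createdAlbum", album["title"]))
        if "year" in album:
            triples.append((album["title"], "releaseYear", album["year"]))
    return triples

payload = '{"albums": [{"title": "Abbey Road", "year": 1969}]}'
for triple in transform_get_albums("The Beatles", payload):
    print(triple)
```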
Tournaire, Rémi. "Découverte automatique de correspondances entre ontologies". Grenoble, 2010. http://www.theses.fr/2010GRENM072.
In this thesis, we investigate a principled approach for defining and discovering probabilistic inclusion mappings between two taxonomies, with a clear semantics, for the purpose of collaborative exchange of documents. First, we compare two ways of modeling probabilistic mappings that are compatible with the logical constraints declared in each taxonomy according to a monotony property, and we show that they are complementary for distinguishing relevant mappings. We provide a way to estimate the probabilities associated with a mapping through a Bayesian estimation technique based on the extensions of the classes involved in the mapping, using classifiers to merge the instances of both taxonomies when they are disjoint. We then describe a generate-and-test algorithm, called ProbaMap, which minimizes the number of calls to the probability estimator for determining those mappings whose probability exceeds a chosen threshold. A thorough experimental analysis of ProbaMap is conducted. We introduce a generator that produces controlled data, allowing the quality and complexity of ProbaMap to be analyzed over a large and generic panel of situations. We also present two series of experiments on real-world data: an alignment of the Directory dataset of the Ontology Alignment Evaluation Initiative (OAEI), and a comparative experiment on Web directories, on which ProbaMap outperforms the state-of-the-art contribution SBI (IJCAI'03). The perspectives of this work are the reuse of probabilistic mappings in a probabilistic query answering setting and a way to convert the similarity coefficients of existing matching methods into probabilities.
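A bare-bones rendition of the estimation step: the probability of an inclusion mapping is estimated from the class extensions, and only mappings above a threshold are kept. The simple ratio estimator, threshold and data below are illustrative; the thesis uses a Bayesian estimator and monotony-based pruning on top of this idea.

```python
# Sketch: estimate P(A included in B) from class extensions and keep mappings
# whose estimate exceeds a threshold. Estimator and data are illustrative.
taxonomy1 = {"JazzAlbum": {"d1", "d2", "d3", "d4"}, "LiveAlbum": {"d3", "d5"}}
taxonomy2 = {"Music": {"d1", "d2", "d3", "d4", "d5", "d6"}, "Bootleg": {"d5"}}
THRESHOLD = 0.8

def p_inclusion(ext_a, ext_b):
    """Naive estimate of P(A included in B) as |ext(A) & ext(B)| / |ext(A)|."""
    return len(ext_a & ext_b) / len(ext_a) if ext_a else 0.0

for a, ext_a in taxonomy1.items():
    for b, ext_b in taxonomy2.items():
        p = p_inclusion(ext_a, ext_b)
        if p >= THRESHOLD:
            print(f"{a} is included in {b}   (estimated probability {p:.2f})")
```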
Thiéblin, Elodie. "Génération automatique d'alignements complexes d'ontologies". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30135.
The Linked Open Data (LOD) cloud is composed of data repositories. The data in the repositories are described by vocabularies, also called ontologies. Each ontology has its own terminology and model, which leads to heterogeneity between them. To make the ontologies and the data they describe interoperable, ontology alignments establish correspondences, or links, between their entities. There are many ontology matching systems that generate simple alignments, i.e., they link one entity to another. However, to overcome ontology heterogeneity, more expressive correspondences are sometimes needed. Finding this kind of correspondence is a tedious task that can be automated. In this thesis, an automatic complex matching approach based on a user's knowledge needs and common instances is proposed. The complex alignment field is still growing and little work addresses the evaluation of such alignments. To fill this gap, we propose an automatic, instance-based complex alignment evaluation system, and a well-known alignment evaluation dataset has been extended for this evaluation.
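One way to picture the instance-based evaluation: compare the instances retrieved by the source member of a correspondence with those of the target member, and score the pair with set-based precision and recall. The data are invented and the actual system handles complex, query-like expressions.

```python
# Sketch: score one correspondence by comparing the instance sets its two
# member expressions retrieve. Data are invented for illustration.
def precision_recall(source_instances: set, target_instances: set):
    tp = len(source_instances & target_instances)
    precision = tp / len(source_instances) if source_instances else 0.0
    recall = tp / len(target_instances) if target_instances else 0.0
    return precision, recall

# e.g. "AcceptedPaper" (simple class) vs "Paper that hasDecision value accept" (complex expression)
source = {"paper12", "paper37", "paper58"}
target = {"paper12", "paper37", "paper58", "paper90"}
print(precision_recall(source, target))   # (1.0, 0.75)
```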
Korenchuk, Yuliya. "Méthode d'enrichissement et d'élargissement d'une ontologie à partir de corpus de spécialité multilingues". Thesis, Strasbourg, 2017. http://www.theses.fr/2017STRAC014/document.
This thesis proposes a method for the enrichment and population of an ontology, a structure of concepts linked by semantic relations, with terms in French, English and German from comparable domain-specific corpora. Our main contribution is the development of extraction methods based on endogenous resources, learned from the corpus and the ontology being analyzed. Built from character n-grams, these resources are readily available and independent of any particular language or domain. The first contribution concerns the use of endogenous morphological and morphosyntactic resources for the extraction of mono- and polylexical terms from the corpus. The second contribution aims to use endogenous resources to identify translations of these terms. The third contribution concerns the construction of endogenous morphological families designed to enrich and populate the ontology.
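Character n-grams are the endogenous resource everything builds on; the snippet below shows how they can give a language-independent signal for relating term variants across languages via a Dice overlap. The trigram size and the example pairs are arbitrary illustrative choices.

```python
# Sketch: character trigrams and Dice overlap as a language-independent signal
# for relating term variants across languages. Parameters are illustrative.
def char_ngrams(term: str, n: int = 3) -> set:
    padded = f"#{term.lower()}#"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def dice(a: set, b: set) -> float:
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

pairs = [("ontologie", "ontology"), ("alignement", "alignment"), ("Wissen", "knowledge")]
for fr_or_de, en in pairs:
    print(fr_or_de, en, round(dice(char_ngrams(fr_or_de), char_ngrams(en)), 2))
```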
Bach, Thanh Lê. "Construction d'un Web sémantique multi-points de vue". Phd thesis, École Nationale Supérieure des Mines de Paris, 2006. http://pastel.archives-ouvertes.fr/pastel-00001989.
Abou Assali, Amjad. "Acquisition des connaissances d'adaptation et traitement de l'hétérogénéité dans un système de RàPC basé sur une ontologie". Compiègne, 2010. http://www.theses.fr/2010COMP1876.
This thesis concerns the design of a case-based reasoning (CBR) system for classification problems. Our work is currently applied to the diagnosis of the failure of gas sensors set up at industrial sites. We are mainly interested in two aspects of CBR. The first concerns adaptation, a key phase of the CBR cycle that aims at producing solutions to new problems by reusing solutions to problems already solved. Adaptation is considered the bottleneck of CBR systems because it requires domain-specific knowledge that is generally difficult to acquire. The second aspect concerns the treatment of case heterogeneity, which leads to problems at different levels, especially during the acquisition of adaptation knowledge and the retrieval phase. In this thesis, we present our semi-automatic approach to acquiring adaptation knowledge from a case base. This approach relies on the techniques of Formal Concept Analysis (FCA). The acquired knowledge can then be refined by users during problem-solving sessions. We also present our case alignment approach for treating the problems related to heterogeneity. Case alignment aims to identify the mappings between the attributes of compared cases. We distinguish an alignment based on the similarity between attributes and an alignment based on the roles of attributes. Our work has led to the development of COBRA, a platform for constructing ontology-based CBR systems.
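The adaptation-knowledge acquisition relies on Formal Concept Analysis; the snippet below enumerates the formal concepts of a tiny case/attribute context by brute force, purely to illustrate what FCA produces. The context is invented and the enumeration is exponential, fine only for toy data.

```python
# Sketch: brute-force enumeration of formal concepts (extent, intent) from a
# tiny binary context of cases and attributes. Context is invented.
from itertools import combinations

context = {                      # case -> attributes observed when solving it
    "case1": {"CO_sensor", "humid_site", "recalibrate"},
    "case2": {"CO_sensor", "dusty_site", "replace_filter"},
    "case3": {"O2_sensor", "humid_site", "recalibrate"},
}
attributes = set().union(*context.values())

def intent(cases):               # attributes shared by all cases in the set
    return set.intersection(*(context[c] for c in cases)) if cases else set(attributes)

def extent(attrs):               # cases having all attributes in the set
    return {c for c, a in context.items() if attrs <= a}

concepts = set()
for r in range(len(context) + 1):
    for cases in combinations(context, r):
        i = frozenset(intent(set(cases)))
        e = frozenset(extent(i))
        concepts.add((e, i))

for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(set(e) or "{}", "<->", set(i) or "{}")
```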
David, Jérôme. "AROMA : une méthode pour la découverte d'alignements orientés entre ontologies à partir de règles d'association". Phd thesis, Université de Nantes, 2007. http://tel.archives-ouvertes.fr/tel-00200040.
In the literature, most work on ontology or schema alignment methods relies on an intensional definition of the schemas and uses relations based on similarity measures, which have the particularity of being symmetric (equivalences). In order to improve alignment methods, and drawing on work on association rule discovery, the associated quality measures, and statistical implicative analysis, we propose to discover asymmetric matches (implications) between ontologies. The main contribution of this thesis is thus the design of an extensional, oriented alignment method based on the discovery of significant implications between two hierarchies planted in a textual corpus.
Our alignment method consists of three successive phases. The preprocessing phase prepares the ontologies for alignment by redefining them over a common set of terms extracted from the texts and selected statistically. The mining phase extracts an implicative alignment between the hierarchies. The final post-processing phase produces consistent and minimal alignments (according to a redundancy criterion).
The main contributions of this thesis are: (1) an extended alignment model that takes implication into account; we define the notions of closure and cover of an alignment, which formalize the redundancy and consistency of an alignment, and we also study the symmetry and cardinalities of an alignment. (2) The implementation of the AROMA method and of an interface supporting alignment validation. (3) An extension of a semantic evaluation model to account for the presence of implications in an alignment. (4) A study of the behaviour and performance of AROMA on different types of test sets (Web directories, catalogues and ontologies in OWL format) with a selection of six quality measures.
The results obtained are promising, as they show the complementarity of our method with existing approaches.
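The asymmetric flavour of the approach can be illustrated with the classical implication index and intensity from statistical implicative analysis, computed here for a single candidate rule "class A is more specific than class B" from toy corpus counts; the counts and the acceptance threshold are invented.

```python
# Sketch: statistical implication between classes of two hierarchies planted in
# a common corpus, using the classical implication index and a normal
# approximation for the intensity. Counts and threshold are invented.
import math

N_DOCS = 200                       # documents in the common corpus
n_a, n_b = 40, 120                 # docs indexed under class A, resp. class B
n_a_not_b = 2                      # counter-examples: docs under A but not B

expected = n_a * (N_DOCS - n_b) / N_DOCS                 # expected counter-examples
q = (n_a_not_b - expected) / math.sqrt(expected)         # implication index
intensity = 1 - 0.5 * (1 + math.erf(q / math.sqrt(2)))   # P(Normal > q)

print(f"q = {q:.2f}, implication intensity A -> B = {intensity:.3f}")
if intensity >= 0.95:
    print("keep oriented correspondence: A is more specific than B")
```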
Touzani, Mohamed. "Alignement des ontologies OWL-Lite". Thèse, 2005. http://hdl.handle.net/1866/16677.
Jridi, Jamel Eddine. "L'ingénierie des documents d'affaires dans le cadre du web sémantique". Thèse, 2014. http://hdl.handle.net/1866/11934.
Testo completoIn this thesis, we present the problems of business document exchanges. We propose a methodology to adapt the XML-based business standards for the Semantic Web technologies by mapping documents defined on DTD or XML Schema to an ontological representation in OWL 2. Next, we propose an approach based on formal concept analysis techniques to regroup the ontology classes sharing some semantics to improve the quality, readability and the representation of the ontology. Finally, we propose ontology alignment to determine the semantic links between heterogeneous business ontologies generated by the transformation process to help entreprises to communicate fruitfully.