Dissertations / Theses on the topic 'Ontology mapping'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Ontology mapping.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Ghawi, Raji. "Ontology-based cooperation of information systems : contributions to database-to-ontology mapping and XML-to-ontology mapping." PhD thesis, Université de Bourgogne, 2010. http://tel.archives-ouvertes.fr/tel-00559089.

Full text
Abstract:
This thesis addresses ontology-based cooperation of information systems. We propose a global architecture called OWSCIS, based on ontologies and web services, for the cooperation of distributed heterogeneous information systems. In this thesis, we focus on the problem of connecting local information sources to local ontologies within the OWSCIS architecture. This problem is organized around three main axes: 1) the creation of the local ontology from the local information sources, 2) the mapping of local information sources to an existing local ontology, and 3) the translation of queries over the local ontologies into queries over the local information sources.
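The third axis, translating ontology-level queries into source-level queries, can be sketched minimally as a mapping-driven rewrite. Everything here (the `CONCEPT_MAP` table and `translate` function) is a hypothetical illustration, not code from the thesis.

```python
# Hypothetical sketch: rewriting a simple ontology-level query into SQL
# using a concept-to-table mapping. Names are invented for illustration.

CONCEPT_MAP = {
    # ontology concept -> (table, {ontology property -> column})
    "Student": ("students", {"hasName": "name", "hasAge": "age"}),
}

def translate(concept, properties):
    """Translate 'select these properties of a concept' into a SQL query."""
    table, columns = CONCEPT_MAP[concept]
    cols = ", ".join(columns[p] for p in properties)
    return f"SELECT {cols} FROM {table}"

print(translate("Student", ["hasName", "hasAge"]))
# SELECT name, age FROM students
```

Real systems must also handle joins, filters, and object properties spanning several tables; this sketch only shows the core idea of consulting a declarative mapping at query-rewrite time.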
APA, Harvard, Vancouver, ISO, and other styles
2

Corsar, David. "Developing knowledge-based systems through ontology mapping and ontology guided knowledge acquisition." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=25800.

Full text
3

Wang, Ying. "Developing Ontology Mapping approaches for Semantic Interoperability." Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527911.

Full text
4

Sengupta, Kunal. "A Language for Inconsistency-Tolerant Ontology Mapping." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1441044183.

Full text
5

Huve, Cristiane Aparecida Gonçalves. "An architecture for mapping relational database to ontology." Repositório Institucional da UFPR, 2017. http://hdl.handle.net/1884/47423.

Full text
Abstract:
Advisor: Profa. Leticia Mara Peres
Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defended: Curitiba, 01/02/2017
Includes references: f. 80-85
Abstract: In recent years, a number of studies have addressed the mapping of databases to ontologies. This dissertation proposes and builds an architecture that enables an automatic mapping process from a relational database to an OWL ontology. To this end, it uses new and existing rules, and contributes the naming of elements and the elimination of duplicated elements, improving the readability of the generated ontology. We highlight the element-mapping structure, which maintains source-to-target traceability for verification. The proposed architecture and rules are validated in a case study using a dental care database. Keywords: Relational database. Ontology. Mapping.
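As a rough illustration of the kind of rules such an architecture applies, here is a minimal sketch of the common table-to-class and column-to-datatype-property mapping, emitting Turtle-style OWL axioms. The rule set and all names are assumptions for illustration, not the dissertation's actual rule catalogue.

```python
# Hypothetical sketch of two classic RDB-to-OWL mapping rules:
# each table becomes an OWL class, each column a datatype property
# whose domain is that class. Names and output shape are invented.

def table_to_owl(table, columns, base="http://example.org/onto#"):
    """Map one table and its columns to Turtle-style OWL axioms."""
    lines = [f"<{base}{table}> a owl:Class ."]
    for col in columns:
        lines.append(f"<{base}{col}> a owl:DatatypeProperty ;")
        lines.append(f"    rdfs:domain <{base}{table}> .")
    return "\n".join(lines)

print(table_to_owl("Patient", ["name", "birthDate"]))
```

A full mapper would also translate primary keys to identifiers and foreign keys to object properties, and, as the abstract notes, deduplicate and sensibly name the generated elements.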
6

Arnold, Patrick. "Semantic Enrichment of Ontology Mappings." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-192438.

Full text
Abstract:
Schema and ontology matching play an important part in the field of data integration and semantic web. Given two heterogeneous data sources, meta data matching usually constitutes the first step in the data integration workflow, which refers to the analysis and comparison of two input resources like schemas or ontologies. The result is a list of correspondences between the two schemas or ontologies, which is often called mapping or alignment. Many tools and research approaches have been proposed to automatically determine those correspondences. However, most match tools do not provide any information about the relation type that holds between matching concepts, for the simple but important reason that most common match strategies are too simple and heuristic to allow any sophisticated relation type determination. Knowing the specific type holding between two concepts, e.g., whether they are in an equality, subsumption (is-a) or part-of relation, is very important for advanced data integration tasks, such as ontology merging or ontology evolution. It is also very important for mappings in the biological or biomedical domain, where is-a and part-of relations may exceed the number of equality correspondences by far. Such more expressive mappings allow much better integration results and have scarcely been in the focus of research so far. In this doctoral thesis, the determination of the correspondence types in a given mapping is the focus of interest, which is referred to as semantic mapping enrichment. We introduce and present the mapping enrichment tool STROMA, which obtains a pre-calculated schema or ontology mapping and for each correspondence determines a semantic relation type. In contrast to previous approaches, we will strongly focus on linguistic laws and linguistic insights. By and large, linguistics is the key for precise matching and for the determination of relation types. 
We will introduce various strategies that make use of these linguistic laws and are able to calculate the semantic type between two matching concepts. The observations and insights gained from this research go far beyond the field of mapping enrichment and can be also applied to schema and ontology matching in general. Since generic strategies have certain limits and may not be able to determine the relation type between more complex concepts, like a laptop and a personal computer, background knowledge plays an important role in this research as well. For example, a thesaurus can help to recognize that these two concepts are in an is-a relation. We will show how background knowledge can be effectively used in this instance, how it is possible to draw conclusions even if a concept is not contained in it, how the relation types in complex paths can be resolved and how time complexity can be reduced by a so-called bidirectional search. The developed techniques go far beyond the background knowledge exploitation of previous approaches, and are now part of the semantic repository SemRep, a flexible and extendable system that combines different lexicographic resources. Further on, we will show how additional lexicographic resources can be developed automatically by parsing Wikipedia articles. The proposed Wikipedia relation extraction approach yields some millions of additional relations, which constitute significant additional knowledge for mapping enrichment. The extracted relations were also added to SemRep, which thus became a comprehensive background knowledge resource. To augment the quality of the repository, different techniques were used to discover and delete irrelevant semantic relations. We could show in several experiments that STROMA obtains very good results w.r.t. relation type detection. In a comparative evaluation, it was able to achieve considerably better results than related applications. 
This corroborates the overall usefulness and strengths of the implemented strategies, which were developed with particular emphasis on the principles and laws of linguistics.
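A toy sketch of two of the linguistic strategies described above: a background-knowledge lookup (a tiny dictionary standing in for a resource like SemRep) plus the compound-head rule, by which an "X Y" compound is usually a kind of "Y". The `THESAURUS` and `relation_type` names are invented for illustration; this is not STROMA's actual code.

```python
# Illustrative only: guessing the semantic relation type between two
# concept labels using a thesaurus lookup and the compound-head rule.

THESAURUS = {("laptop", "personal computer"): "is-a"}

def relation_type(a, b):
    """Return a relation type: equal, is-a, inverse is-a, or related."""
    if a == b:
        return "equal"
    if (a, b) in THESAURUS:          # background knowledge wins
        return THESAURUS[(a, b)]
    if a.endswith(" " + b):          # compound-head: "apple juice" is-a "juice"
        return "is-a"
    if b.endswith(" " + a):
        return "inverse is-a"
    return "related"

print(relation_type("apple juice", "juice"))         # is-a
print(relation_type("laptop", "personal computer"))  # is-a
```

The thesis's point is precisely that generic rules like the compound-head heuristic have limits (laptop vs. personal computer), which is where a large, curated background-knowledge repository becomes essential.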
7

Lian, Zonghui. "A Tool to Support Ontology Creation Based on Incremental Mini-Ontology Merging." BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1663.

Full text
Abstract:
This thesis addresses the problem of tool support for semi-automatic ontology mapping and merging. Solving this problem contributes to ontology creation and evolution by relieving users from tedious and time-consuming work. This thesis shows that a tool can be built that will take a “mini-ontology” and a “growing ontology” as input and make it possible to produce manually, semi-automatically, or automatically an extended growing ontology as output. Characteristics of this tool include: (1) a graphical, interactive user interface with features that will allow users to map and merge ontologies, and (2) a framework supporting pluggable, semi-automatic, and automatic mapping and merging algorithms.
8

Groß, Anika. "Evolution von ontologiebasierten Mappings in den Lebenswissenschaften." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-136766.

Full text
Abstract:
In the life sciences, there is an increasing number of heterogeneous data sources that need to be integrated and combined in comprehensive analysis tasks. Often, ontologies and other structured vocabularies are used to provide a formal representation of knowledge and to facilitate data exchange between different applications. Ontologies are used in different domains like molecular biology or chemistry. One of their most important applications is the annotation of real-world objects like genes or publications. Since different ontologies can contain overlapping knowledge, it is necessary to determine mappings between them (ontology mappings). Manual mapping creation can be very time-consuming or even infeasible, so (semi-)automatic ontology matching methods are typically applied. Ontologies are not static but undergo continuous modification due to new research insights and changing user requirements. The evolution of ontologies can have an impact on dependent data like annotation or ontology mappings. This thesis presents novel methods and algorithms to deal with the evolution of ontology-based mappings. The generic infrastructure GOMMA is used and extended to manage and analyze the evolution of ontologies and mappings. First, a comparative evolution analysis for ontologies and mappings from three life science domains shows heavy changes in ontologies and mappings as well as an impact of ontology changes on the mappings. Hence, existing ontology mappings can become invalid and need to be migrated to current ontology versions; an expensive redetermination of the mappings should be avoided. This thesis introduces two generic algorithms to (semi-)automatically adapt ontology mappings: (1) a composition-based adaptation relies on the principle of mapping composition, and (2) a diff-based adaptation algorithm allows for individually handling change operations to update mappings.
Both approaches reuse unaffected mapping parts and adapt only the affected parts of the mappings. An evaluation for very large biomedical ontologies and mappings shows that both approaches produce ontology mappings of high quality. Similarly, ontology changes may also affect ontology-based annotation mappings. The thesis introduces a generic evaluation approach to assess the quality of annotation mappings based on their evolution. Different quality measures allow for the identification of reliable annotations, e.g., based on their stability or provenance information. A comprehensive analysis of large annotation data sources shows numerous instabilities, e.g., due to the temporary absence of annotations. Such modifications may influence the results of dependent applications such as functional enrichment analyses, which describe experimental data in terms of ontological groupings. The question arises to what degree ontology and annotation changes may affect such analyses. Based on different stability measures, the evaluation assesses the change intensity of application results and gives insight into whether users need to expect significant changes of their analysis results. Moreover, GOMMA is extended by large-scale ontology matching techniques. Such techniques are useful, among other things, to match new concepts during ontology mapping adaptation. Many existing match systems do not scale for aligning very large ontologies, e.g., from the life science domain. An efficient composition-based approach indirectly computes ontology mappings by reusing and combining existing mappings to intermediate ontologies. Intermediate ontologies can contain useful background knowledge, such that the mapping quality can be improved compared to a direct match approach. Moreover, the thesis introduces general strategies for matching ontologies in parallel using several computing nodes.
A size-based partitioning of the input ontologies enables good load balancing and scalability, since smaller match tasks can be processed in parallel. The evaluation within the Ontology Alignment Evaluation Initiative (OAEI) compares GOMMA and other systems in matching ontologies from different domains. Using parallel and composition-based matching, GOMMA achieves very good results w.r.t. efficiency and effectiveness, especially for ontologies from the life science domain.
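The composition-based adaptation idea described above can be sketched in a few lines: compose a confirmed mapping from O1 to the old version of O2 with a version mapping from old O2 to new O2, so unaffected correspondences survive without re-matching. The data and names are invented for illustration, not GOMMA's actual API.

```python
# Illustrative sketch of composition-based mapping adaptation:
# (a, b_old) composed with (b_old -> b_new) yields (a, b_new).

def compose(m_old, version_map):
    """Migrate correspondences through a version mapping; drop those
    whose target concept no longer has a counterpart."""
    return {(a, version_map[b]) for (a, b) in m_old if b in version_map}

m_old = {("O1:Heart", "O2:Heart"), ("O1:Liver", "O2:Liver_obsolete")}
version_map = {"O2:Heart": "O2:Heart", "O2:Liver_obsolete": "O2:Liver_v2"}

print(compose(m_old, version_map))
```

The diff-based alternative mentioned in the abstract would instead inspect each change operation (split, merge, delete) and adapt only the correspondences it touches.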
9

Kong, Choi-yu. "Effective partial ontology mapping in a pervasive computing environment." Click to view the E-thesis via HKUTO, 2004. http://sunzi.lib.hku.hk/hkuto/record/B32002737.

Full text
10

Kong, Choi-yu, and 江采如. "Effective partial ontology mapping in a pervasive computing environment." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B32002737.

Full text
11

Wong, Alfred Ka Yiu, Computer Science & Engineering, Faculty of Engineering, UNSW. "Ontology mapping: a logic-based approach with applications in selected domains." Publisher: University of New South Wales. Computer Science & Engineering, 2008. http://handle.unsw.edu.au/1959.4/41103.

Full text
Abstract:
With the advent of the Semantic Web and recent standardization efforts, Ontology has quickly become a popular and core semantic technology. Ontology is seen as a solution provider for knowledge-based systems: it facilitates tasks such as knowledge sharing, reuse and intelligent processing by computer agents. A key problem addressed by Ontology is semantic interoperability. Interoperability in general is a common problem across domain applications, and semantic interoperability is the hardest and an ongoing research problem. It requires systems to exchange knowledge and to have the meaning of that knowledge accurately and automatically interpreted by the receiving systems. The innovation is to allow knowledge to be consumed and used accurately in ways not foreseen by the original creator. While Ontology promotes semantic interoperability across systems by unifying their knowledge bases through consensual understanding and common engineering and processing practices, it does not solve the semantic interoperability problem at the global level. As individuals are increasingly empowered with tools, ontologies will eventually be created more easily and rapidly at a near-individual scale. Global semantic interoperability between heterogeneous ontologies created by small groups of individuals will then be required. Ontology mapping is a mechanism for providing semantic bridges between ontologies. Because ontology mapping promotes semantic interoperability across ontologies, it is seen as a solution provider for the global semantic interoperability problem. However, no single ontology mapping solution caters for all problem scenarios; different applications require different mapping techniques. In this thesis, we analyze the relations between ontology, semantic interoperability and ontology mapping, and promote an ontology-based semantic interoperability solution. We propose a novel ontology mapping approach, namely OntoMogic.
It is based on first-order logic and model theory. OntoMogic supports approximate mapping and produces structures (approximate entity correspondences) that represent alignment results between concepts. OntoMogic has been implemented as a coherent system and is applied in different application scenarios. We present case studies in the network configuration, security intrusion detection, and IT governance & compliance management domains. The full process from ontology engineering to mapping has been demonstrated to promote ontology-based semantic interoperability.
12

Abbas, Muhammad Aun. "A Unified Approach for Dealing with Ontology Mappings and their Defects." Thesis, Lorient, 2016. http://www.theses.fr/2016LORIS423/document.

Full text
Abstract:
An ontology mapping is a set of correspondences. Each correspondence relates artifacts, such as concepts and properties, of one ontology to artifacts of another ontology. In the last few years, a lot of attention has been paid to establishing mappings between source ontologies. Ontology mapping is widely and effectively used for interoperability and integration tasks (data transformation, query answering, or web-service composition, to name a few), and in the creation of new ontologies. On the one side, checking the (logical) correctness of ontology mappings has become a fundamental prerequisite of their use. On the other side, given two ontologies, there are several ontology mappings between them that can be obtained by using different ontology matching methods or just stated manually. Using several ontology mappings between two ontologies within a single application, or synthesizing one mapping that takes advantage of two original mappings, may cause errors in the application or in the synthesized mapping, because those original mappings may be contradictory (conflicting). In both situations, correctness is usually formalized and verified in the context of fully formalized ontologies (e.g. in logics), even if some "weak" notions of correctness have been proposed when ontologies are informally represented or represented in formalisms preventing a formalization of correctness (such as UML). Verifying correctness is usually performed within one single formalism, requiring on the one side that ontologies be represented in this unique formalism and, on the other side, that a formal representation of mappings be provided, equipped with notions related to correctness (such as consistency). In practice, there exist several heterogeneous formalisms for expressing ontologies, ranging from informal (text, UML and others) to formal (logical and algebraic).
This implies that, to apply existing approaches, heterogeneous ontologies must be translated (or just transformed, if the original ontology is informally represented or when a full translation keeping equivalence is not possible) into one common formalism, mappings must each time be reformulated, and only then can correctness be established. This is possible, but it may lead to mappings that are correct under one translation and incorrect under another. Indeed, correctness (e.g. consistency) depends on the underlying formalism in which ontologies and mappings are expressed. Different interpretations of correctness exist within formal and even informal approaches, raising the question of what correctness indeed is. In the dissertation, correctness has been reformulated in the context of heterogeneous ontologies by using the theory of Galois connections. Specifically, ontologies are represented as lattices and mappings as functions between those lattices. Lattices are natural structures for directly representing ontologies without changing the original formalisms in which the ontologies are expressed. As a consequence, the (unified) notion of correctness has been reformulated using the Galois connection condition, leading to the new notion of compatible and incompatible mappings. It is formally shown that the new notion covers the reviewed correctness notions, provided in distinct state-of-the-art formalisms, and, at the same time, can naturally cover heterogeneous ontologies. The usage of the proposed unified approach is demonstrated by applying it to upper ontology mappings. The notion of compatible and incompatible ontology mappings is also applied to domain ontologies to highlight that incompatible ontology mappings give incorrect results when used for ontology merging.
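The Galois connection condition underlying compatible mappings can be sketched on toy examples: given two ordered sets and a pair of monotone functions (f, g), check that f(x) ≤ y holds exactly when x ≤ g(y). The integer "lattices" and all names below are purely illustrative stand-ins, not the dissertation's formalization.

```python
# Illustrative check of the Galois connection condition
#   f(x) <= y   iff   x <= g(y)
# over two small ordered sets, with f and g given as dictionaries.

from itertools import product

def is_galois(X, Y, f, g, leq_X, leq_Y):
    """Return True iff (f, g) satisfies the Galois connection condition."""
    return all(leq_Y(f[x], y) == leq_X(x, g[y]) for x, y in product(X, Y))

X, Y = range(4), range(8)
f = {x: 2 * x for x in X}    # "embed": multiply by 2
g = {y: y // 2 for y in Y}   # adjoint: floor-divide by 2

le = lambda a, b: a <= b
print(is_galois(X, Y, f, g, le, le))  # True
```

In the dissertation's terms, a mapping pair failing this condition would be flagged as incompatible; for instance, replacing `g` with the identity here breaks the condition (2·1 ≤ 1 is false while 1 ≤ 1 is true).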
13

Esmaeily, Kaveh. "Ontological mapping between different higher educational systems : The mapping of academic educational system on an international level." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-876.

Full text
Abstract:

This Master's thesis sets its goals in researching and understanding the structure of different educational systems. Its main aim is to develop a middleware for translating courses between different educational systems.

The procedure is to find the meaning of objects and courses from each educational system's point of view. This is mainly done by identifying the context, semantics and state of the objects involved, possibly across different activities. The middleware could be applied, with small changes, to any structured system of education.

This thesis introduces a framework for using ontologies in the translation and integration of course aspects in different processes. It suggests using ontologies when adopting and structuring different educational systems at an international level. Through an understanding of ontologies, the thesis constructs a middleware for the translation process between courses in the different educational systems. As an example, courses in Sweden, Germany and Tajikistan have been used for the mapping and for constructing learning goals and qualifications.

APA, Harvard, Vancouver, ISO, and other styles
14

Muthaiyah, Saravanan. "A framework and methodology for ontology mediation through semantic and syntactic mapping." Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3070.

Full text
Abstract:
Thesis (Ph. D.)--George Mason University, 2008.
Vita: p. 177. Thesis director: Larry Kerschberg. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technology. Title from PDF t.p. (viewed July 3, 2008). Includes bibliographical references (p. 169-176). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
15

Gould, Nicholas Mark. "Formalising cartographic generalisation knowledge in an ontology to support on-demand mapping." Thesis, Manchester Metropolitan University, 2014. http://e-space.mmu.ac.uk/344342/.

Full text
Abstract:
This thesis proposes that on-demand mapping - where the user can choose the geographic features to map and the scale at which to map them - can be supported by formalising, and making explicit, cartographic generalisation knowledge in an ontology. The aim was to capture the semantics of generalisation, in the form of declarative knowledge, in an ontology so that it could be used by an on-demand mapping system to make decisions about which generalisation algorithms are required to resolve a given map condition, such as feature congestion, caused by a change in scale. The lack of a suitable methodology for designing an application ontology was identified and remedied by the development of a new methodology that was a hybrid of existing domain ontology design methodologies. Using this methodology, an ontology was built that described not only the geographic features but also the concepts of generalisation such as geometric conditions, operators and algorithms. A key part of the evaluation phase of the methodology was the implementation of the ontology in a prototype on-demand mapping system. The prototype system was used successfully to map road accidents and the underlying road network at three different scales. A major barrier to on-demand mapping is the need to automatically provide parameter values for generalisation algorithms. A set of measure algorithms was developed to identify the geometric conditions in the features caused by a change in scale. From these, a Degree of Generalisation (DoG) is calculated, which represents the “amount” of generalisation required. The DoG is used as an input to a number of bespoke generalisation algorithms. In particular, a road network pruning algorithm was developed that respected the relationship between accidents and road segments. The development of bespoke algorithms is not a sustainable solution, and a method for employing the DoG concept with existing generalisation algorithms is required.
Consideration was given to how the ontology-driven prototype on-demand mapping system could be extended to use cases other than mapping road accidents, and a need for collaboration with domain experts on an ontology for generalisation was identified. Although further testing with different use cases is required, this work has demonstrated that an ontological approach to on-demand mapping has promise.
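The DoG idea can be illustrated with a toy computation. The formula below is hypothetical (the thesis listing does not publish it); it treats the DoG as the degree to which a measured geometric condition, here congestion between two road symbols, exceeds what the target scale can legibly display.

```python
# Hypothetical Degree-of-Generalisation (DoG) sketch, not the thesis's formula.
def map_separation_mm(ground_separation_m, scale_denominator):
    """Separation between two features on the map, in mm, at scale 1:denominator."""
    return ground_separation_m * 1000.0 / scale_denominator

def degree_of_generalisation(symbol_width_mm, ground_separation_m, scale_denominator):
    """0 = no generalisation needed; approaches 1 as symbols collide."""
    sep = map_separation_mm(ground_separation_m, scale_denominator)
    if sep >= symbol_width_mm:
        return 0.0
    return min(1.0, (symbol_width_mm - sep) / symbol_width_mm)

# Two parallel roads 20 m apart, drawn with 0.5 mm symbols:
print(degree_of_generalisation(0.5, 20, 25_000))   # 0.0 at 1:25,000 (0.8 mm apart)
print(degree_of_generalisation(0.5, 20, 100_000))  # 0.6 at 1:100,000 (0.2 mm apart)
```

Such a normalised value could then parameterise an existing generalisation algorithm, which is the open problem the abstract ends on.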
APA, Harvard, Vancouver, ISO, and other styles
16

Saleem, Arshad. "Semantic Web Vision : survey of ontology mapping systems and evaluation of progress." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3211.

Full text
Abstract:
The ever-increasing complexity of software systems, and the distributed and dynamic nature of today's enterprise-level computing, have created a demand for more self-aware, flexible and robust systems, in which human beings can delegate much of their work to software agents. The Semantic Web presents new opportunities for enabling, modeling, sharing and reasoning with knowledge available on the web. These opportunities are made possible through the formal representation of knowledge domains with ontologies. The Semantic Web is a vision of a World Wide Web (WWW)-level knowledge representation system in which each piece of information is equipped with well-defined meaning, enabling software agents to understand and process that information. This, in turn, enables people and software agents to work together more smoothly and collaboratively. In this thesis we first present a detailed overview of the Semantic Web vision by describing its fundamental building blocks, which constitute the famous layered architecture of the Semantic Web. We discuss the milestones the Semantic Web vision has achieved so far in research, education and industry, and, on the other hand, present some of the social, business and technological barriers in the way of this vision becoming reality. We also evaluate how the Semantic Web vision is affecting current technological and research areas such as Web Services, Software Agents, Knowledge Engineering and Grid Computing. In the latter part of the thesis we focus on the problem of ontology mapping for agents on the Semantic Web. We define the problem precisely and categorise it on the basis of syntactic and semantic aspects. Finally, we produce a survey of the current state of the art in ontology mapping research.
In the survey we present a selection of ontology mapping systems and describe their functionality in terms of how they approach the problem, their efficiency, their effectiveness and the part of the problem space they cover. We consider that this survey of the current state of the art in ontology mapping will provide a solid basis for further research in this field.
APA, Harvard, Vancouver, ISO, and other styles
17

Bheemireddy, Shruthi. "MACHINE LEARNING-BASED ONTOLOGY MAPPING TOOL TO ENABLE INTEROPERABILITY IN COASTAL SENSOR NETWORKS." MSSTATE, 2009. http://sun.library.msstate.edu/ETD-db/theses/available/etd-09222009-200303/.

Full text
Abstract:
In today's world, ontologies are widely used for data integration tasks and for solving information heterogeneity problems on the web because of their capability to provide explicit meaning to information. The growing need to resolve the heterogeneities between different information systems within a domain of interest has led to the rapid development of individual ontologies by different organizations. These ontologies, each designed for a particular task, can be a unique representation of their project's needs. Thus, integrating distributed and heterogeneous ontologies by finding semantic correspondences between their concepts has become the key to achieving interoperability among different representations. In this thesis, an advanced instance-based ontology matching algorithm is proposed to enable data integration tasks in ocean sensor networks, whose data are highly heterogeneous in syntax, structure, and semantics. This provides a solution to the ontology mapping problem in such systems based on machine-learning and string-based methods.
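A minimal sketch of the string-based side of such a matcher, with made-up concept labels: candidate correspondences are scored by the Jaccard overlap of label tokens, the kind of cheap first pass that machine-learning matchers typically refine.

```python
# Editorial illustration of a string-based matcher; labels are invented.
import re

def tokens(label):
    """Lower-cased word set of a camel-case label like 'SeaSurfaceTemperature'."""
    return {p.lower() for p in re.findall(r"[A-Za-z][a-z]*", label)}

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def match(onto1, onto2, threshold=0.5):
    """Propose concept correspondences whose label similarity meets the threshold."""
    return sorted(
        (c1, c2, round(jaccard(c1, c2), 2))
        for c1 in onto1 for c2 in onto2
        if jaccard(c1, c2) >= threshold
    )

buoy = ["SeaSurfaceTemperature", "WindSpeed", "WaveHeight"]
station = ["SurfaceTemperatureOfSea", "SpeedOfWind", "AirPressure"]
print(match(buoy, station))
```

In an instance-based matcher, the same scoring would be applied to the observed data values of each concept, not only to the labels.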
APA, Harvard, Vancouver, ISO, and other styles
18

Reis, Julio Cesar Dos. "Mapping Adaptation between Biomedical Knowledge Organization Systems." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112231/document.

Full text
Abstract:
Modern biomedical information systems require exchanging and retrieving data between them, owing to the overwhelming amount of data generated in this domain. Knowledge Organization Systems (KOSs) offer means to make the semantics of data explicit which, in turn, facilitates their exploitation and management. The evolution of semantic technologies has led to the development and publication of an ever-increasing number of large KOSs for specific sub-domains like genomics, biology, anatomy, diseases, etc. The size of the biomedical field demands the combined use of several KOSs, which is only possible through the definition of mappings. Mappings interconnect entities of domain-related KOSs via semantic relations. They play a key role as references enabling advanced interoperability tasks between systems, allowing software applications to interpret data annotated with different KOSs. However, to remain useful and reflect the most up-to-date knowledge of the domain, KOSs evolve and new versions are periodically released. This potentially impacts established mappings, demanding methods to ensure their semantic consistency over time as automatically as possible. Manual maintenance of mappings is an alternative only if a restricted number of mappings is involved; otherwise, supporting methods are required for very large and highly dynamic KOSs. To address this problem, this PhD thesis proposes an original approach to adapt mappings based on changes detected in KOS evolution. The proposal consists in interpreting the established correspondences to identify the relevant KOS entities on which their definition relies and, based on the evolution of these entities, proposing suitable actions for modifying the mappings. 
Through this investigation, (i) we conduct in-depth experiments to understand the evolution of KOS mappings; we propose automatic methods (ii) to analyze mappings affected by KOS evolution, and (iii) to recognize the evolution of concepts involved in mappings via change patterns; finally (iv) we design mapping adaptation techniques relying on heuristics explored by novel algorithms. This research produced a complete framework for mapping adaptation, named DyKOSMap, and an implementation of a software prototype. We thoroughly evaluated the proposed methods and the framework with real-world datasets containing several releases of mappings between biomedical KOSs. The results obtained from the experimental validations demonstrate the overall effectiveness of the underlying principles of the proposed approach. The scientific contributions of this thesis make it possible to maintain mappings largely automatically and with reasonable quality, which improves support for mapping maintenance and consequently ensures better interoperability over time.
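The change-pattern idea can be sketched as a rule table from concept-change types to mapping actions. This is an editorial illustration, not DyKOSMap's actual rule set, and the concept names and target identifiers are invented:

```python
# Hypothetical sketch: deriving mapping-adaptation actions from KOS changes.
def adapt(mapping, changes):
    """Return the mapping actions implied by concept changes.

    mapping: dict source_concept -> target_concept
    changes: dict concept -> change record, e.g. {"type": "split", "into": [...]}
    """
    actions = []
    for src, tgt in mapping.items():
        change = changes.get(src)
        if change is None:
            actions.append(("keep", src, tgt))            # concept unchanged
        elif change["type"] == "obsoleted":
            actions.append(("remove", src, tgt))          # mapping no longer valid
        elif change["type"] == "renamed":
            actions.append(("move", change["to"], tgt))   # follow the new label
        elif change["type"] == "split":
            for part in change["into"]:                   # ambiguous: flag for review
                actions.append(("review", part, tgt))
    return actions

mapping = {"Heart disease": "C0018799", "Grippe": "C0021400", "Tumour": "C0027651"}
changes = {
    "Grippe": {"type": "renamed", "to": "Influenza"},
    "Tumour": {"type": "split", "into": ["Benign tumour", "Malignant tumour"]},
}
for action in adapt(mapping, changes):
    print(action)
```

A real system would additionally exploit the semantics of the mapping relation (equivalence vs. narrower/broader) when choosing among such actions.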
APA, Harvard, Vancouver, ISO, and other styles
19

Polowinski, Jan. "Ontology-Driven, Guided Visualisation Supporting Explicit and Composable Mappings." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-229908.

Full text
Abstract:
Data masses on the World Wide Web can hardly be managed by humans or machines. One option is the formal description and linking of data sources using Semantic Web and Linked Data technologies. Ontologies written in standardised languages foster the sharing and linking of data as they provide a means to formally define concepts and relations between these concepts. A second option is visualisation. The visual representation allows humans to perceive information more directly, using the highly developed visual sense. Relatively few efforts have been made on combining both options, although the formality and rich semantics of ontological data make it an ideal candidate for visualisation. Advanced visualisation design systems support the visualisation of tabular, typically statistical data. However, visualisations of ontological data still have to be created manually, since automated solutions are often limited to generic lists or node-link diagrams. Also, the semantics of ontological data are not exploited for guiding users through visualisation tasks. Finally, once a good visualisation setting has been created, it cannot easily be reused and shared. Trying to tackle these problems, we had to answer how to define composable and shareable mappings from ontological data to visual means and how to guide the visual mapping of ontological data. We present an approach that allows for the guided visualisation of ontological data, the creation of effective graphics and the reuse of visualisation settings. Instead of generic graphics, we aim at tailor-made graphics, produced using the whole palette of visual means in a flexible, bottom-up approach. 
It not only allows for visualising ontologies, but uses ontologies to guide users when visualising data and to drive the visualisation process at various places: First, as a rich source of information on data characteristics, second, as a means to formally describe the vocabulary for building abstract graphics, and third, as a knowledge base of facts on visualisation. This is why we call our approach ontology-driven. We suggest generating an Abstract Visual Model (AVM) to represent and »synthesise« a graphic following a role-based approach, inspired by the one used by J. v. Engelhardt for the analysis of graphics. It consists of graphic objects and relations formalised in the Visualisation Ontology (VISO). A mappings model, based on the declarative RDFS/OWL Visualisation Language (RVL), determines a set of transformations from the domain data to the AVM. RVL allows for composable visual mappings that can be shared and reused across platforms. To guide the user, for example, we discourage the construction of mappings that are suboptimal according to an effectiveness ranking formalised in the fact base and suggest more effective mappings instead. The guidance process is flexible, since it is based on exchangeable rules. VISO, RVL and the AVM are additional contributions of this thesis. Further, we initially analysed the state of the art in visualisation and RDF-presentation comparing 10 approaches by 29 criteria. Our approach is unique because it combines ontology-driven guidance with composable visual mappings. Finally, we compare three prototypes covering the essential parts of our approach to show its feasibility. We show how the mapping process can be supported by tools displaying warning messages for non-optimal visual mappings, e.g., by considering relation characteristics such as »symmetry«. In a constructive evaluation, we challenge both the RVL language and the latest prototype trying to regenerate sketches of graphics we created manually during analysis. 
We demonstrate how graphics can be varied and how complex mappings can be composed from simple ones. Two thirds of the sketches can be almost or completely specified, and half of them can be almost or completely implemented.
APA, Harvard, Vancouver, ISO, and other styles
20

Werlang, Ricardo. "Ontology-based approach for standard formats integration in reservoir modeling." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/115196.

Full text
Abstract:
The integration of data issued from autonomous and heterogeneous sources is still a significant problem for a large number of applications. In the oil and gas industry, a large amount of data is generated every day from multiple sources such as seismic data, well data, drilling data, transportation data, and marketing data. However, these data are acquired by the application of different techniques and represented in different standards and formats. Thus, these data exist in structured form in databases, and in semi-structured forms in spreadsheets and documents such as reports and multimedia collections. To deal with this large amount of information, as well as with the heterogeneous formats of the data, the information needs to be standardized and integrated across systems, disciplines and organizational boundaries. As a result, this information integration will enable better decision making within collaborations, since high-quality data will be accessible in a timely manner. The petroleum industry depends on the efficient use of these data for the construction of computer models that simplify the geological reality and help in understanding it. Such a model, which contains geological objects analyzed by different professionals – geologists, geophysicists and engineers – does not represent reality itself, but the expert's conceptualization. As a result, the modeled geological objects assume distinct and complementary semantic representations in supporting decision-making. To keep the originally intended meanings, ontologies were used to make the semantics of the models explicit and to integrate the data and files generated in the various stages of the exploration chain. The major claim of this work is that interoperability among earth models built and manipulated by different professionals and systems can be achieved by making apparent the meaning of the geological objects represented in the models. 
We show that domain ontologies developed with the support of the theoretical background of foundational ontologies prove to be an adequate tool for clarifying the semantics of geological concepts. We exemplify this capability by analyzing the communication standard formats most used in the modeling chain (LAS, WITSML, and RESQML), searching for entities semantically related to the geological concepts described in ontologies for the Geosciences. We show how the notions of identity, rigidity, essentiality and unity, applied to ontological concepts, lead the modeler to define the geological objects in the model more precisely. By making explicit the identity properties of the modeled objects, the modeler who applies data standards can overcome the ambiguities of geological terminology. In doing so, we clarify which relevant objects and properties can be mapped from one model to another, even when they are represented under different names and formats.
APA, Harvard, Vancouver, ISO, and other styles
21

Ducrou, Amanda Joanne. "Complete interoperability in healthcare technical, semantic and process interoperability through ontology mapping and distributed enterprise integration techniques /." Access electronically, 2009. http://ro.uow.edu.au/theses/3048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Polowinski, Jan. "Semi-Automatic Mapping of Structured Data to Visual Variables." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-108497.

Full text
Abstract:
While semantic web data is machine-understandable and well suited for advanced filtering, in its raw representation it is not conveniently understandable to humans. Therefore, visualization is needed. A core challenge when visualizing the structured but heterogeneous data turned out to be a flexible mapping to Visual Variables. This work deals with a highly flexible, semi-automatic solution with a maximum support of the visualization process, reducing the mapping possibilities to a useful subset. The basis for this is knowledge, concerning metrics and structure of the data on the one hand and available visualization structures, platforms and common graphical facts on the other hand — provided by a novel basic visualization ontology. A declarative, platform-independent mapping vocabulary and a framework was developed, utilizing current standards from the semantic web and the Model-Driven Architecture (MDA).
APA, Harvard, Vancouver, ISO, and other styles
23

Lera, Castro Isaac. "Ontology Matching based On Class Context: to solve interoperability problem at Semantic Web." Doctoral thesis, Universitat de les Illes Balears, 2012. http://hdl.handle.net/10803/84074.

Full text
Abstract:
When we look at the amount of resources spent converting one format into another, that is, spent just making information systems useful, we realise that our communication model is inefficient. The transformation of information, like the transformation of energy, remains limited by the efficiency of the converters. In this work, we propose a new way to "convert" information: a mapping algorithm for semantic information based on the context of that information, in order to redefine the framework where this paradigm merges with multiple techniques. Our main goal is to offer a new view from which we can make further progress and, ultimately, streamline and minimize the communication chain in the integration process.
APA, Harvard, Vancouver, ISO, and other styles
24

Junior, Esdras Lins Bispo. "Métricas de avaliação de alinhamento de ontologias." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-21022014-162402/.

Full text
Abstract:
In the ontology matching field, different metrics are used to evaluate the resulting alignments. Alignment-based metrics adopt the basic principle of verifying a proposed alignment against a reference alignment. Some of these metrics do not achieve good results because (i) they cannot always distinguish between a totally wrong alignment and one which is almost correct; and (ii) they cannot estimate the user's effort to refine the resulting alignment. This work aims to present a new approach to evaluating ontology alignments. Our approach presents a metric that uses the queries normally already posed to the original ontologies to assess the quality of the proposed alignment. We also present some satisfactory results of our approach with regard to widely used existing metrics.
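For reference, the alignment-based evaluation this abstract contrasts its metric with is typically computed as precision, recall and F-measure against the reference alignment; a minimal sketch (all entity names are illustrative):

```python
# Classic alignment-based evaluation: compare a proposed alignment
# against a reference alignment, both given as sets of
# (source_entity, target_entity) correspondence pairs.

def evaluate_alignment(proposed, reference):
    """Return (precision, recall, F-measure) of an alignment."""
    proposed, reference = set(proposed), set(reference)
    correct = proposed & reference
    precision = len(correct) / len(proposed) if proposed else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

reference = {("o1:Paper", "o2:Article"), ("o1:Author", "o2:Writer")}
proposed = {("o1:Paper", "o2:Article"), ("o1:Author", "o2:Publisher")}
print(evaluate_alignment(proposed, reference))  # (0.5, 0.5, 0.5)
```

Note how the measure illustrates complaint (i) above: any alignment sharing no pair with the reference scores exactly zero, whether it is wildly wrong or off by a near-miss on every correspondence.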
APA, Harvard, Vancouver, ISO, and other styles
25

Hamdi, Fayçal. "Améliorer l'interopérabilité sémantique : applicabilité et utilité de l'alignement d'ontologies." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00662523.

Full text
Abstract:
In this thesis, we present approaches for adapting an alignment process to the characteristics of the ontologies being aligned, whether quantitative characteristics such as their size, or particular characteristics linked, for example, to the way concept labels are constructed. Regarding quantitative characteristics, we propose two ontology-partitioning methods that make it possible to align very large ontologies. Both methods generate, as input to the alignment process, reasonably sized subsets of the two ontologies to be aligned, taking the alignment objective into account from the start of the partitioning process. Regarding the particular characteristics of the aligned ontologies, we present the TaxoMap Framework environment, which allows refinement treatments to be specified from predefined primitives. We propose a pattern language, MPL (the Mapping Pattern Language), which we use to specify these refinement treatments. Beyond the approaches for adapting to the characteristics of the aligned ontologies, we present approaches for reusing alignment results in ontology engineering, focusing in particular on the use of alignment for ontology enrichment. We study the contribution of alignment techniques to enrichment and the impact of the characteristics of the external resource used as the source of enrichment. Finally, we describe how the TaxoMap Framework environment was implemented and the experiments carried out: tests on the TaxoMap alignment module, on the mapping refinement approach, on the partitioning methods for very large ontologies, and on the ontology enrichment approach.
APA, Harvard, Vancouver, ISO, and other styles
26

Cavaco, Francisco António Gonçalves. "Ontologies learn by searching." Master's thesis, FCT-UNL, 2011. http://hdl.handle.net/10362/7086.

Full text
Abstract:
Dissertation to obtain the Master degree in Electrical Engineering and Computer Science
Due to the worldwide diversity of communities, a high number of ontologies have appeared that represent the same segment of reality yet are not semantically coincident. A possible solution to this problem is to use a reference ontology as the intermediary in communications between the community's enterprises and the outside. Once semantic mappings between the enterprises' ontologies are established, this solution allows each enterprise to keep its own internal ontology and semantics unchanged. However, information systems are not static, so the established mappings become obsolete over time. The objective of this dissertation is to identify a suitable method that combines semantic mappings with user feedback, providing automatic learning to ontologies and enabling auto-adaptability and dynamism in information systems.
APA, Harvard, Vancouver, ISO, and other styles
27

Saunders, Garret. "Family-Wise Error Rate Control in Quantitative Trait Loci (QTL) Mapping and Gene Ontology Graphs with Remarks on Family Selection." DigitalCommons@USU, 2014. https://digitalcommons.usu.edu/etd/2164.

Full text
Abstract:
The main aim of this dissertation is to meet real needs of practitioners in multiple hypothesis testing. The issue of multiplicity has become a significant concern in most fields of research as computational abilities have increased, allowing for the simultaneous testing of many (thousands or millions of) statistical hypothesis tests. While many error rates have been defined to address this issue of multiplicity, this work considers only the most natural generalization of the Type I Error rate to multiple tests, the family-wise error rate (FWER). Much work has already been done to establish powerful yet general methods which control the FWER under arbitrary dependencies among tests. This work both introduces these methods and expands upon them, as detailed through its four main chapters. Chapter 1 contains general introductions and preliminaries important to the remainder of the work, particularly a previously published graphical weighted Bonferroni multiplicity adjustment. Chapter 2 then applies the principles introduced in Chapter 1 to achieve a substantial computational improvement to an existing FWER-controlling multiplicity approach (the Focus Level method) for gene set testing in high-throughput microarray and next-generation sequencing studies using Gene Ontology graphs. This improvement to the Focus Level procedure, which we call the Short Focus Level procedure, is achieved by extending the reach of graphical weighted Bonferroni testing to closed testing situations where restricted hypotheses are present. This is accomplished through Theorem 1 of Chapter 2. As a result of the improvement, the full top-down approach to the Focus Level procedure can now be performed, overcoming a significant disadvantage of the otherwise powerful approach to multiple testing. Chapter 3 presents a solution to a multiple testing difficulty within quantitative trait loci (QTL) mapping in natural populations for QTL LD (linkage disequilibrium) mapping models. Such models apply a two-hypothesis framework to the testing of thousands of genetic markers across the genome in search of QTL underlying a quantitative trait of interest. Inherent to the model is an unidentifiability issue where a parameter of interest is identifiable only under the alternative hypothesis. Through a second application of graphical weighted Bonferroni methods we show how the multiplicity can be accounted for while simultaneously accounting for the required logical structuring of the testing such that identifiability is preserved. Finally, Chapter 4 details some of the difficulties associated with the distributional assumptions for the test statistics of the two hypotheses of the LD-based QTL mapping framework. A novel bivariate testing strategy is proposed for these test statistics in order to overcome these distributional difficulties while preserving power in the multiplicity correction by reducing the number of tests performed. Chapter 5 concludes the work with a summary of the main contributions and future research goals aimed at continual improvement to the multiple testing issues inherent to both the fields of genetics and genomics.
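The weighted Bonferroni adjustment that the graphical procedure above builds on spends the global level alpha across the m tests according to weights; a minimal sketch of the plain weighted Bonferroni rule (not the graphical weight-propagation step, and all numbers illustrative):

```python
# Weighted Bonferroni: with weights w_1..w_m summing to 1, reject
# H_i when p_i <= alpha * w_i.  This controls the family-wise error
# rate (FWER) at level alpha under arbitrary dependence among tests;
# equal weights w_i = 1/m recover the ordinary Bonferroni correction.

def weighted_bonferroni(pvalues, weights, alpha=0.05):
    """Return a list of reject/accept decisions (True = reject)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return [p <= alpha * w for p, w in zip(pvalues, weights)]

pvals = [0.001, 0.04, 0.3]
weights = [0.5, 0.25, 0.25]   # spend more alpha on the first test
print(weighted_bonferroni(pvals, weights))  # [True, False, False]
```

The graphical version extends this by redistributing the alpha of rejected hypotheses to the remaining ones along the edges of a weight graph, which is what makes the top-down traversal of a Gene Ontology graph tractable.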
APA, Harvard, Vancouver, ISO, and other styles
28

Saunders, Garrett. "Family-Wise Error Rate Control in Quantitative Trait Loci (QTL) Mapping and Gene Ontology Graphs with Remarks on Family Selection." DigitalCommons@USU, 2014. https://digitalcommons.usu.edu/etd/7021.

Full text
Abstract:
One of the great aims of statistics, the science of collecting, analyzing, and interpreting data, is to protect against the probability of falsely rejecting an accepted claim, or hypothesis, given observed data stemming from some experiment. This is generally known as protecting against a Type I Error, or controlling the Type I Error rate. The extension of this protection against Type I Errors to the situation where thousands upon thousands of hypotheses are examined simultaneously is known as multiple hypothesis testing. This dissertation presents an improvement to an existing multiple hypothesis testing approach, the Focus Level method, specific to gene set testing (a branch of genomics) on Gene Ontology graphs. This improvement resolves a long-standing computational difficulty of the Focus Level method, providing more than a 15,000-fold increase in computational efficiency. This dissertation also presents a solution to a multiple testing problem in genetics where a specific approach to mapping genes underlying quantitative traits of interest requires a multiplicity adjustment approach that both corrects for the number of tests while also ensuring logical consistency. The power advantage of the solution is demonstrated over the current standard approach to the problem. A side issue of this model framework led to the development of a new bivariate approach to quantitative trait marker detection, which is presented herein. The overall contribution of this dissertation to the statistics literature is that it provides novel solutions that meet real needs of practitioners in genetics and genomics with the aim of ensuring both that truth is discovered and that discoveries are actually true.
APA, Harvard, Vancouver, ISO, and other styles
29

Bedeschi, Luca. "Ricerca, elaborazione e mapping su standard ontologici moderni." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/12841/.

Full text
Abstract:
This thesis, in the context of the Semantic Web applied to the Cultural Heritage domain, aims to define in a new ontological format the current system for recording and storing the information describing a cultural artefact, such as identification, bibliographic and excavation data, currently recorded and stored without any Semantic Web technology. Specifically, among the many record types describing the terms for cataloguing any cultural artefact, this thesis project focuses on the Archaeological Finds (RA) record. To this end, a mapping is defined between the current system for recording the information of an archaeological find and a new ontological domain, in the standard RDF and OWL formats, following the directives on the information required for cataloguing issued by the competent bodies. The result is a new ontology, the Central Institute for Cataloguing and Documentation Ontology (CICDO), which in turn imports several ontologies and vocabularies, including FOAF (Friend of a Friend vocabulary), CITO (Citation Typing Ontology), Erlangen CRM (CIDOC Conceptual Reference Model), PROV-O, FaBiO (FRBR-aligned Bibliographic Ontology), HiCO (Historical Context Ontology), FRBR (Functional Requirements for Bibliographic Records), and others imported indirectly. Specifically, CICDO defines new entities and specializes the imported ones, for a total of forty-six classes and forty-seven object properties describing the sections and relations of the documents to be filled in for an archaeological find, plus two datatype properties. The mapping presented here is in tabular form; the elements are grouped, where needed, into sub-tables, partially reproducing the paragraphs, fields and sub-fields of the reference ICCD documents.
APA, Harvard, Vancouver, ISO, and other styles
30

Hoffmann, Patrick. "Similarité sémantique inter ontologies basée sur le contexte." Phd thesis, Université Claude Bernard - Lyon I, 2008. http://tel.archives-ouvertes.fr/tel-00363300.

Full text
Abstract:
This thesis studies the value of context for improving interoperability between heterogeneous ontologies in a way that allows them to evolve independently. During collaborations, organizations exchange data described by concepts defined in ontologies. The objective is to obtain a context-based service for evaluating such concepts.
We propose a methodology for determining, modeling and using context. Applying it, we identify three uses of context that help improve ontology reconciliation: disambiguating the possible pragmatic meanings of concepts by comparing the "perspectives" from which the concepts were developed; personalizing by considering the agents' context, built from a relevant selection among the organization's domains and tasks; and assessing the relevance, for the task that triggered the interoperability need, of the data associated with a concept.
APA, Harvard, Vancouver, ISO, and other styles
31

Cavalcanti, Neto Olavo de Holanda. "Joint-de: sistema de mapeamento objeto-ontologia com suporte a objetos desconectados." Universidade Federal de Alagoas, 2014. http://www.repositorio.ufal.br/handle/riufal/1609.

Full text
Abstract:
In the last few years, the development and use of ontologies in creating more intelligent and effective applications, aimed at solving problems commonly found on the Web, has been increasing. This popularity is due to the fact that ontologies attempt to provide semantics to the data consumed by machines, so that machines can reason about these data. However, the large-scale adoption of the Semantic Web can be further accelerated by providing sophisticated tools that lower the barrier to the development of applications based on RDF and OWL. Developers of applications with relational databases are already familiar with tools like Hibernate, which provide an object-relational mapping and the management of object states. In fact, the main object state that Hibernate provides is the detached state. Nevertheless, the great majority of object-ontology mapping systems (OOMS) only provide persistent objects. The big difference between these two types of objects is that the former has a life cycle independent of the underlying triple store connection, while the latter is bound to the connection. In this context, this work proposes the creation of an object-ontology mapping system that supports detached objects, called JOINT-DE. With this system, developers of ontology-based applications can: i) use the objects coming from the triple store as objects of the business model; ii) use such objects as data transfer objects (DTOs) between subsystems; and iii) develop small transactions with detached objects that represent a long transaction unit for the application user. To illustrate the benefits of the proposed system, a case study of a real application is presented, outlining the architectural limitations of the application when using an existing OOMS from the literature, as well as showing results favourable to the use of JOINT-DE. Finally, an experiment was planned and executed to compare JOINT-DE with another OOMS widely used by the community, Alibaba. The statistical analyses performed in this experiment showed satisfactory results with regard to JOINT-DE.
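The persistent-versus-detached distinction the abstract describes can be sketched in a few lines; this is a toy illustration of the pattern, not the JOINT-DE or Hibernate API (all class and property names are invented for the example):

```python
# Persistent objects read through a live session and die with it;
# a detached object snapshots its state, outlives the connection,
# and can later be merged back into a new session.

class TripleStoreSession:
    def __init__(self):
        self.store = {}      # stands in for an RDF triple store
        self.open = True

    def load(self, uri):
        # Return a detached copy: the state is snapshotted, so the
        # object keeps working after this session closes.
        return DetachedObject(uri, dict(self.store.get(uri, {})))

    def merge(self, obj):
        # Re-attach: write the detached object's state back.
        self.store[obj.uri] = dict(obj.properties)

class DetachedObject:
    def __init__(self, uri, properties):
        self.uri, self.properties = uri, properties

session = TripleStoreSession()
session.store["ex:item1"] = {"rdfs:label": "old"}
obj = session.load("ex:item1")
session.open = False                   # connection gone...
obj.properties["rdfs:label"] = "new"   # ...but the object still works

session2 = TripleStoreSession()        # a later, separate connection
session2.merge(obj)
print(session2.store["ex:item1"])      # {'rdfs:label': 'new'}
```

This is exactly the life cycle that lets detached objects serve as DTOs between subsystems, as listed in point ii) above.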
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
APA, Harvard, Vancouver, ISO, and other styles
32

Reynolds, Peggy E. "Depth Technology: Remediating Orientation." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354298228.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Rangaraj, Jithendra Kumar. "Knowledge-based Data Extraction Workbench for Eclipse." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354290498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Arsénio, Pedro. "Qualidade da paisagem e fitodiversidade. Contributo para o ordenamento e gestão de áreas costeiras de elevado valor natural." Doctoral thesis, ISA/UTL, 2011. http://hdl.handle.net/10400.5/5380.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Saad, Sawsan. "Conception et Optimisation Distribuée d’un Système d’Information des Services d’Aide à la Mobilité Urbaine Basé sur une Ontologie Flexible dans le Domaine de Transport." Thesis, Ecole centrale de Lille, 2010. http://www.theses.fr/2010ECLI0017/document.

Full text
Abstract:
Nowadays, information related to displacement and mobility in a transport network certainly represents a significant potential. This work aims to model, optimize and implement an Information System of Services to Aid Urban Mobility (ISSAUM). The ISSAUM first decomposes each set of simultaneous requests into a set of sub-requests called tasks. Each task corresponds to a service that may be offered by several competing information providers, with different costs, response times and formats. An information provider that wishes to offer services through the ISSAUM has to register its ontology. Indeed, the ISSAUM is connected to an Extended and distributed Transport Multimodal Network (ETMN) that contains several heterogeneous data sources. The dynamic and distributed aspects of the problem led us to adopt a multi-agent approach to ensure the system's continual evolution and pragmatic flexibility, and we proposed to automate the modeling of services using ontologies. The ISSAUM also takes possible disturbances in the ETMN into account. To satisfy user requests, we developed a negotiation protocol between the system's agents. The proposed ontology-mapping negotiation model is based on a knowledge management system that supports semantic heterogeneity, and it is organized into three layers: the Negotiation Layer (NL), the Semantic Layer (SEL), and the Knowledge Management Systems Layer (KMSL). We also detail the reassignment process, using a Dynamic Reassigned Tasks (DRT) algorithm supported by the ontology mapping approach. Finally, the experimental results presented in this thesis justify the use of the ontology solution in our system and its role in the negotiation process.
APA, Harvard, Vancouver, ISO, and other styles
36

Saleem, Khalid. "Schema Matching and Integration in Large Scale Scenarios." Montpellier 2, 2008. http://www.theses.fr/2008MON20126.

Full text
Abstract:
Semantic matching of schemas in heterogeneous data sharing systems is time consuming and error prone. The dissertation presents a new robust automatic method which integrates a large set of domain-specific schemas, represented as tree structures, based upon semantic correspondences among them. The method also creates the mappings from source schemas to the integrated schema. Existing mapping tools employ semi-automatic techniques for mapping two schemas at a time. In a large-scale scenario, where data sharing involves a large number of data sources, such techniques are not suitable. Semi-automatic matching requires user intervention to finalize a certain mapping. Although it provides the flexibility to compute the best possible mapping, it slows down the whole matching process in terms of time performance. At first, the dissertation gives a detailed discussion of the state of the art in schema matching. We summarize the deficiencies in the currently available tools and techniques for meeting the requirements of large-scale schema matching scenarios. Our approach, PORSCHE (Performance ORiented SCHEma Mediation), is juxtaposed to these shortcomings and its advantages are highlighted with sound experimental support. The algorithms associated with PORSCHE first cluster the tree nodes based on linguistic label similarity. Then, a tree mining technique is applied using node ranks calculated during depth-first traversal. This minimises the target node search space and improves time performance, which makes the technique suitable for large-scale data sharing. PORSCHE implements a hybrid approach which also, in parallel, incrementally creates an integrated schema encompassing all schema trees, and defines mappings from the contributing schemas to the integrated schema. The approach discovers 1:1 mappings for integration and mediation purposes. Formal experiments on real and synthetic data sets show that PORSCHE is scalable in time performance for large-scale scenarios. The quality of the mappings and the integrity of the integrated schema are also verified by the experimental evaluation. Moreover, we present a technique for discovering complex match (1:n, n:1 and n:m) propositions between two schemas, validated by mini-taxonomies. These mini-taxonomies are extracted from a large set of domain-specific metadata instances represented as tree structures. We propose a framework, called ExSTax (Extracting Structurally Coherent Mini-Taxonomies), based on frequent sub-tree mining, to support our idea. We further extend the ExSTax framework to extract a reliable domain-specific taxonomy.
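The first clustering step described above relies on a linguistic label-similarity measure; a minimal sketch of one common such measure, character-trigram overlap (the specific measure is an assumption for illustration, not necessarily the one PORSCHE uses):

```python
# Linguistic label similarity via character n-gram (trigram) overlap:
# two schema labels are similar when they share many trigrams,
# which tolerates casing and separator differences.

def ngrams(label, n=3):
    label = label.lower()
    return {label[i:i + n] for i in range(max(len(label) - n + 1, 1))}

def label_similarity(a, b):
    """Jaccard overlap of the trigram sets of two labels, in [0, 1]."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb)

# Labels differing only in casing/separators still score well:
print(label_similarity("AuthorName", "author_name") > 0.3)  # True
```

In a matcher, such scores over all node pairs would feed the clustering that shrinks the target node search space before tree mining.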
APA, Harvard, Vancouver, ISO, and other styles
37

Elbyed, Abdeltif. "ROMIE, une approche d'alignement d'ontologies à base d'instances." Phd thesis, Institut National des Télécommunications, 2009. http://tel.archives-ouvertes.fr/tel-00541874.

Full text
Abstract:
Semantic interoperability is an important issue, widely identified in information technologies and in the information systems research community. The wide adoption of the Web for accessing distributed information requires the interoperability of the systems that manage this information. Solutions and ideas such as the Semantic Web facilitate the localization and integration of data in a more intelligent way through ontologies, offering a more semantic and comprehensible vision of the Web. Yet it raises a number of research challenges, one of the main ones being to compare and align the different ontologies that appear in integration tasks. The main objective of this thesis is to propose an alignment approach for identifying correspondence links between ontologies. Our approach combines linguistic, syntactic, structural and semantic (instance-based) matching techniques and methods. It consists of two main phases: a semantic enrichment phase for the ontologies to be compared, and an alignment or mapping phase. The enrichment phase is based on the analysis of the information that the ontologies develop (web resources, data, documents, etc.) and that is associated with the ontologies' concepts. Our intuition is that this information, as well as the relations that may exist between its items, contributes to the semantic enrichment between the concepts. At the end of the enrichment phase, an ontology contains more semantic relations between concepts, which are exploited in the second phase. The mapping phase takes two enriched ontologies and computes the similarity between pairs of concepts. A filtering process allows us to automatically reduce the number of false relations. The validation of correspondences is an interactive process, either direct (with an expert) or indirect (by measuring the user's degree of satisfaction). Our approach gave rise to a mapping system called ROMIE (Resource based Ontology Mapping within an Interactive and Extensible environment). It was experimented with and evaluated in two different applications: a biomedical application and an application in the field of technology-enhanced learning (e-learning).
APA, Harvard, Vancouver, ISO, and other styles
38

MATONGO, Tanguy, and Auriol DEGBELO. "APPLYING ENTERPRISE MODELS AS INTERFACE FOR INFORMATION SEARCHING." Thesis, Jönköping University, JTH, Computer and Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-11091.

Full text
Abstract:

Nowadays, more and more companies use Enterprise Models to integrate and coordinate their business processes with the aim of remaining competitive on the market. Consequently, Enterprise Models play a critical role in this integration, enabling companies to refine the objectives of the enterprise and the ways to reach them in a given period of time. Through Enterprise Models, companies are able to improve the management of their operations, actors and processes, and also to improve communication within the organisation.

This thesis describes another use of Enterprise Models. In this work, we intend to apply Enterprise Models as an interface for information searching. The underlying need for this project lies in the fact that we would like to show that Enterprise Models can be more than just models: they can be used in a more dynamic way, namely through a software program for information searching. The software program aims, first, at extracting the information contained in the Enterprise Models (which are stored in an XML file on the system). Once the information is extracted, it is used to express a query which is sent to a search engine to retrieve documents relevant to the query and return them to the user.

The thesis was carried out over an entire academic semester. The results of this work are a report which summarizes all the knowledge gained in the field of study. A software program has been built to serve as a proof of concept for testing the theories.

APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Qiang. "Dealing with Missing Mappings and Structure in a Network of Ontologies." Licentiate thesis, Linköpings universitet, Databas och informationsteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-64281.

Full text
Abstract:
With the popularity of the World Wide Web, a large amount of data is generated and made available through the Internet every day. To integrate and query this huge amount of heterogeneous data, the vision of the Semantic Web has been recognized as a possible solution. One key technology for the Semantic Web is ontologies. Many ontologies have been developed in recent years. Meanwhile, due to the demands of applications using multiple ontologies, mappings between the entities of these ontologies are generated as well, which leads to ontology networks consisting of ontologies and the mappings between them. However, neither developing ontologies nor finding mappings between ontologies is an easy task. It may happen that the ontologies are not consistent or complete, that the mappings between these ontologies are not correct or complete, or that the resulting ontology network is not consistent. This may lead to problems when they are used in semantically-enabled applications. In this thesis, we address two issues relevant to the quality of the mappings and the structure in an ontology network. The first issue deals with missing mappings between networked ontologies. Assuming that existing mappings between ontologies are correct, we investigate whether and how to use these existing mappings to find more mappings between ontologies. We propose and test several strategies of using the given correct mappings to align ontologies. The second issue deals with missing structure, in particular missing is-a relations, in networked ontologies. Based on the assumption that missing is-a relations are a kind of modeling defect, we propose an ontology debugging approach to tackle this issue. We develop an algorithm for detecting missing is-a relations in ontologies, as well as algorithms that assist the user in repairing by generating and recommending possible repair actions and executing the chosen repairs.
Based on this approach, we develop a system and test its use and performance.
APA, Harvard, Vancouver, ISO, and other styles
40

Aleksakhin, Vladyslav. "Visualization of gene ontology and cluster analysis results." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-21248.

Full text
Abstract:
The purpose of the thesis is to develop a new visualization method for Gene Ontologies and hierarchical clustering. These are both important tools in biology and medicine for studying high-throughput data such as transcriptomics and metabolomics data. Enrichment of ontology terms in the data is used to identify statistically overrepresented ontology terms, which give insight into relevant biological processes or functional modules. Hierarchical clustering is a standard method to analyze and visualize data in order to find relatively homogeneous clusters of experimental data points. Both methods support the analysis of the same data set, but are usually considered independently. However, a combined view is often needed, such as visualizing a large data set in the context of an ontology while taking a clustering of the data into account. The result of the current work is a user-friendly program that combines two different views for analysing Gene Ontology and cluster results simultaneously. To make the exploration of such big data possible, we developed a new visualization approach.
APA, Harvard, Vancouver, ISO, and other styles
41

Lopes, Fernanda Lígia Rodrigues. "Access to data from Ontology Using Mappings Heterogeneous and Logic Programming." Universidade Federal do Ceará, 2010. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=14156.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Ontologies have been used in different areas, including Data Integration and the Semantic Web, to provide formal descriptions of data sources, to associate semantics with them, and to make information easier to discover and recover. In this context, one of the most relevant issues is Ontology-Based Data Access (OBDA), which is the problem of accessing one or more data sources by means of a conceptual representation expressed in terms of an ontology. The independence between the ontology layer and the data layer, and the ability to answer queries more expressive than the ones defined using description logics, are some of the main distinguishing features of OBDA. In this work, we specify an environment for OBDA which deals with this problem as a set of independent tasks. Our main contribution concerns the definition and implementation of a query rewriting process between structurally heterogeneous ontologies. In the proposed query rewriting approach, we combine the semantics and expressiveness of SPARQL with logic programming, and we adopt a rule-based formalism to represent mappings between ontologies. We also deal with some relevant questions, including structural heterogeneity, the pruning of irrelevant parts of the rewritten query, and the representation of query results according to the target ontology. It is important to note that, although in this work we discuss the use of the proposed solution considering just two ontologies, it can also be extended and applied to data distribution scenarios with multiple ontologies.
APA, Harvard, Vancouver, ISO, and other styles
42

Sicilia, Gómez Álvaro. "Supporting Tools for Automated Generation and Visual Editing of Relational-to-Ontology Mappings." Doctoral thesis, Universitat Ramon Llull, 2016. http://hdl.handle.net/10803/398843.

Full text
Abstract:
Integration of data from heterogeneous formats and domains based on Semantic Web technologies enables us to resolve their structural and semantic heterogeneity. Ontology-based data access (OBDA) is a comprehensive solution which relies on the use of ontologies as mediator schemas and relational-to-ontology mappings to facilitate data source querying. However, one of the greatest obstacles to the adoption of OBDA is the lack of tools to support the creation of mappings between physically stored data and ontologies. The objective of this research has been to develop new tools that allow non-ontology experts to create relational-to-ontology mappings. For this purpose, two lines of work have been carried out: the automated generation of relational-to-ontology mappings, and visual support for mapping editing. The tools currently available to automate the generation of mappings are far from providing a complete solution, since they rely on relational schemas and barely take into account the contents of the relational data source and the features of the ontology. However, the data may contain hidden relationships that can help in the process of mapping generation. To overcome this limitation, we have developed AutoMap4OBDA, a system that automatically generates R2RML mappings from an analysis of the contents of the relational source, taking into account the characteristics of the ontology. The system employs an ontology learning technique to infer class hierarchies, selects the string similarity metric based on the labels of the ontologies, and analyses the graph structures to generate the mappings from the structure of the ontology. Visual representation through intuitive interfaces can help non-technical users to establish mappings between a relational source and an ontology. However, existing tools for the visual editing of mappings show some limitations.
In particular, the visual representation of mappings does not embrace the structure of the relational source and the ontology at the same time. To overcome this problem, we have developed Map-On, a visual web environment for the manual editing of mappings. AutoMap4OBDA has been shown to outperform existing solutions in the generation of mappings. Map-On has been applied in research projects to verify its effectiveness in managing mappings.
APA, Harvard, Vancouver, ISO, and other styles
43

Arnold, Patrick [Verfasser], Erhard [Akademischer Betreuer] Rahm, and Sören [Gutachter] Auer. "Semantic Enrichment of Ontology Mappings / Patrick Arnold ; Gutachter: Sören Auer ; Betreuer: Erhard Rahm." Leipzig : Universitätsbibliothek Leipzig, 2016. http://d-nb.info/1240243200/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Do, Hong-Hai. "Schema matching and mapping based data integration architecture, approaches and evaluation." Saarbrücken VDM, Müller, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2863983&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Polowinski, Jan. "Visualisierung großer Datenmengen im Raum." Thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-108506.

Full text
Abstract:
Large, strongly connected amounts of data, as collected in knowledge bases or occurring when describing software, are often read slowly and with difficulty by humans when represented as spreadsheets or text. Graphical representations can help people understand facts more intuitively and offer a quick overview. Electronic representation offers means that are beyond the possibilities of print, such as unlimited zoom and hyperlinks. This work presents a framework for visualizing connected information in 3D space that takes into account techniques of media design to build visualization structures and map information to graphical properties.
APA, Harvard, Vancouver, ISO, and other styles
46

Serra, Simone. "Background annotation of entities in Linked Data vocabularies." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-162758.

Full text
Abstract:
One of the key features behind Linked Data is the use of vocabularies that allow datasets to share a common language to describe similar concepts and relationships and to resolve ambiguities between them. The development of vocabularies is often driven by a consensus process among dataset implementers, in which the criterion of interoperability is considered to be sufficient. This can lead to the misrepresentation of real-world entities in Linked Data vocabularies. Such drawbacks can be fixed by the use of a formal methodology for modelling Linked Data vocabulary entities and identifying ontological distinctions. One proven example is the OntoClean methodology for curing taxonomies. This work presents a software tool that implements the PURO approach to ontological distinction modelling. PURO models vocabularies as Ontological Foreground Models (OFM), and the structure of ontological distinctions as Ontological Background Models (OBM), constructed using meta-properties attached to vocabulary entities in a process known as vocabulary annotation. The software tool, named the Background Annotation plugin, written in Java and integrated in the Protégé ontology editor, enables a user to graphically annotate vocabulary entities through an annotation workflow that implements, among other things, persistence of annotations and their retrieval. Two kinds of workflows are supported, generic and dataset-specific, in order to differentiate a vocabulary's usage, in terms of a PURO OBM, with respect to a given Linked Data dataset. The workflow is enhanced by the use of dataset statistical indicators retrieved through the Sindice service for a sample of chosen datasets, such as the number of entities present in a dataset and the relative frequency of vocabulary entities in that dataset. A further enhancement is provided by dataset summaries that offer an overview of the most common entity-property paths found in a dataset.
Foreseen uses of the Background Annotation plugin include: 1) checking the mapping agreement between different datasets, as produced by the R2R framework, and 2) annotating dependent resources in Concise Bounded Descriptions of entities, used in data sampling from Linked Data datasets for data mining purposes.
APA, Harvard, Vancouver, ISO, and other styles
47

Polowinski, Jan [Verfasser], Uwe [Akademischer Betreuer] [Gutachter] Aßmann, and Ulrich W. [Gutachter] Eisenecker. "Ontology-Driven, Guided Visualisation Supporting Explicit and Composable Mappings / Jan Polowinski ; Gutachter: Uwe Aßmann, Ulrich W. Eisenecker ; Betreuer: Uwe Aßmann." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://d-nb.info/1144286557/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Boyd, Tyler. "Ontology of Geological Mapping." 2016. http://scholarworks.gsu.edu/geosciences_theses/94.

Full text
Abstract:
In this thesis, an ontology for the geological mapping domain is constructed using the Protégé ontology editor. The Geological Mapping ontology is developed using terms and relationships, and their properties, as they relate to creating a geologic map. This vocabulary is semantically modeled in the ontology using Web Ontology Language (OWL). The purpose of this thesis is to exemplify how an ontology can be designed and developed to represent geological knowledge as it relates to mapping.
APA, Harvard, Vancouver, ISO, and other styles
49

Tsai, Bai-Jung, and 蔡佰忠. "A Similarity Selection Method for Ontology Mapping." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/85757472517067425802.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Electrical Engineering
101
An ontology is a good means of defining and recording domain knowledge. There are different ways of designing an ontology: based on the different goals of different builders, the same domain may be described by ontologies with different structures and of different types. How to integrate ontologies of the same domain but with different structures has become an important research topic, and ontology mapping has received considerable attention. OAEI is an organization that gives researchers studying ontology mapping an opportunity to compare their work with that of others. What is important for ontology mapping is knowing how to find the similar parts of two ontologies. In early years, most studies considered only a single similarity measure for ontology mapping. Later, more researchers started to develop methods that filter the possible combinations when mapping two structures in order to produce a better result. In this study, I aim to improve ontology mapping efficacy by using multiple similarity measures and a series of filtering steps to produce high-likelihood mappings.
APA, Harvard, Vancouver, ISO, and other styles
50

Falconer, Sean M. "Cognitive support for semi-automatic ontology mapping." Thesis, 2009. http://hdl.handle.net/1828/1362.

Full text
Abstract:
Structured vocabularies are often used to annotate and classify data. These vocabularies represent a shared understanding about the terms used within a specific domain. People often rely on overlapping, but independently developed terminologies. This representational divergence becomes problematic when researchers wish to share, find, and compare their data with others. One approach to resolving this is to create a mapping across the vocabularies. Generating these mappings is a difficult, semi-automatic process, requiring human intervention. There has been little research investigating how to aid users with performing this task, despite the important role the user typically plays. Much of the research focus has been to explore techniques to automatically determine correspondences between terms. In this thesis, we explore the user-side of mapping, specifically investigating how to support the user's decision making process and exploration of mappings. We combine data gathered from theories of human inference and decision making, an observational case study, online survey, and interview study to propose a cognitive support framework for ontology mapping. The framework describes the user information needs and the process users follow during mapping. We also propose a number of design principles, which help guide the development of an ontology mapping tool called CogZ. We evaluate the tool and thus implicitly the framework through a case study and controlled user study. The work presented in this thesis also helps to draw attention to the importance of the user role during the mapping process. We must incorporate a "human in the loop", where the human is essential to the process of developing a mapping. Helping to establish and harness this symbiotic relationship between human processes and the tool's automated process will allow people to work more efficiently and effectively, and afford them the time to concentrate on difficult tasks that are not easily automated.
APA, Harvard, Vancouver, ISO, and other styles
