Dissertations / Theses on the topic 'Semantic interoperability'

Consult the top 50 dissertations / theses for your research on the topic 'Semantic interoperability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Marques, Fernando Sérgio Bryton Dias. "Semantic interoperability assessment : iShare framework." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/663891.

Abstract:
Interagency information sharing is widely acknowledged for increasing the efficiency and effectiveness of several domains with high societal impact, such as security, cybersecurity and health. Therefore, it comes as no surprise that the development of interoperability among public services is a political priority in many countries around the world, and that, presently, several initiatives are ongoing with this purpose. The proper management of such initiatives demands adequate instruments to support the definition of the existing (as-is) and desired (to-be) situations, as well as the identification, prioritization, monitoring and control of the actions that are necessary to achieve the objectives defined for developing interoperability. Moreover, appropriate instruments are also required to support the justification and comparison of initiatives, for example in situations where they compete for funds. However, the existing practical solutions are scarce and do not fit these requirements well. Therefore, this research proposes a framework (iShare) for assessing the semantic interoperability - one of the facets of interoperability - of governmental agencies that use a common information model for exchanging information with each other. This assessment is made in two parts. The first part assesses how organizations are performing in terms of semantic interoperability, and the second part assesses the relevance of that performance, considering a series of pre-defined factors. To develop the iShare framework we followed the Design Science Research Method. The framework itself is based on Process Performance Indicators, on the Delphi Method and on the Weighted Sums Model. Its validation was performed during the development of the Portuguese maritime surveillance information exchange system (NIPIMAR), which is based on the information model of the European Maritime Common Information Sharing Environment (CISE). The result of the validation was the assessment of the semantic interoperability of six public organizations participating in the project. In addition, some of the main ideas of the framework were immediately used within the project to assess the semantic interoperability of all participating organizations and to develop an action plan to improve their interoperability and information exchange. The iShare framework has thus proven to be an innovative, useful, relevant and more objective way of assessing semantic interoperability among various organizations, indicating both how much interoperability exists and how relevant it is. Hence, the iShare framework contributes to the body of knowledge in the field and opens new possibilities for assessing interoperability and information exchange, and thus for increasing the efficiency and effectiveness of governmental agencies.
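The Weighted Sums Model that iShare builds on is easy to illustrate. The Python sketch below aggregates per-indicator scores into a single semantic-interoperability score; the indicator names, weights and values are invented for the example and are not taken from the thesis.

```python
# Hypothetical iShare-style aggregation via the Weighted Sums Model.
# Indicators, weights and normalized scores below are illustrative only.
indicators = {
    # indicator: (weight, normalized score in [0, 1])
    "messages_exchanged_without_manual_rework": (0.5, 0.80),
    "fields_mapped_to_common_information_model": (0.3, 0.65),
    "vocabulary_mismatches_resolved": (0.2, 0.90),
}

def weighted_sum(ind: dict) -> float:
    """Aggregate per-indicator scores into one overall score."""
    assert abs(sum(w for w, _ in ind.values()) - 1.0) < 1e-9  # weights sum to 1
    return sum(w * s for w, s in ind.values())

print(f"semantic interoperability score: {weighted_sum(indicators):.2f}")  # 0.78
```
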
2

Kiljander, J. (Jussi). "Semantic interoperability framework for smart spaces." Doctoral thesis, Oulun yliopisto, 2016. http://urn.fi/urn:isbn:9789526210810.

Abstract:
At the heart of the smart space vision is the idea that devices interoperate with each other autonomously to assist people in their everyday activities. In order to make this vision a reality, it is important to achieve semantic-level interoperability between devices. The goal of this dissertation is to enable Semantic Web technology-based interoperability in smart spaces. There are many challenges that need to be solved before this goal can be achieved. In this dissertation, the focus has been on the following four challenges. The first challenge is that Semantic Web technologies have been designed neither for sharing real-time data nor for sharing large packets of data such as video and audio files. This makes it challenging to apply them in smart spaces, where it is typical that devices produce and consume this type of data. The second challenge is the verbose syntax and encoding formats of Semantic Web technologies, which make it difficult to utilise them in resource-constrained devices and networks. The third challenge is the heterogeneity of smart space communication technologies, which makes it difficult to achieve interoperability even at the connectivity level. The fourth challenge is to provide users with simple means to interact with and configure smart spaces where device interoperability is based on Semantic Web technologies. Even though autonomous operation of devices is a core idea in smart spaces, this is still important in order to achieve successful end-user adoption. The main result of this dissertation is a semantic interoperability framework, which consists of the following individual contributions: 1) a semantic-level interoperability architecture for smart spaces, 2) a knowledge sharing protocol for resource-constrained devices and networks, and 3) an approach to configuring Semantic Web-based smart spaces. The architecture, protocol and smart space configuration approach are evaluated with several reference implementations of the framework components and proof-of-concept smart spaces that are also key contributions of this dissertation.
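The verbosity challenge can be made concrete with a small, purely illustrative rdflib snippet that serializes the same single triple in RDF/XML and in the terser Turtle syntax; the namespace and triple are invented for the example.

```python
# Compare the size of one triple in two RDF serializations (illustrative).
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/smartspace#")
g = Graph()
g.bind("ex", EX)
g.add((EX.livingRoomSensor, EX.hasTemperature, Literal(21.5)))

print(len(g.serialize(format="xml")), "characters as RDF/XML")
print(len(g.serialize(format="turtle")), "characters as Turtle")
```
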
3

Lister, Kendall. "Toward semantic interoperability for software systems." Connect to thesis, 2008. http://repository.unimelb.edu.au/10187/3594.

Abstract:
“In an ill-structured domain you cannot, by definition, have a pre-compiled schema in your mind for every circumstance and context you may find ... you must be able to flexibly select and arrange knowledge sources to most efficaciously pursue the needs of a given situation.” [57]
In order to interact and collaborate effectively, agents, whether human or software, must be able to communicate through common understandings and compatible conceptualisations. Ontological differences that occur either from pre-existing assumptions or as side-effects of the process of specification are a fundamental obstacle that must be overcome before communication can occur. Similarly, the integration of information from heterogeneous sources is an unsolved problem. Efforts have been made to assist integration, through both methods and mechanisms, but automated integration remains an unachieved goal. Communication and information integration are problems of meaning and interaction, or semantic interoperability. This thesis contributes to the study of semantic interoperability by identifying, developing and evaluating three approaches to the integration of information. These approaches have in common that they are lightweight in nature, pragmatic in philosophy and general in application.
The first work presented is an effort to integrate a massive, formal ontology and knowledge-base with semi-structured, informal heterogeneous information sources via a heuristic-driven, adaptable information agent. The goal of the work was to demonstrate a process by which task-specific knowledge can be identified and incorporated into the massive knowledge-base in such a way that it can be generally re-used. The practical outcome of this effort was a framework that illustrates a feasible approach to providing the massive knowledge-base with an ontologically-sound mechanism for automatically generating task-specific information agents to dynamically retrieve information from semi-structured information sources without requiring machine-readable meta-data.
The second work presented is based on reviving a previously published and neglected algorithm for inferring semantic correspondences between fields of tables from heterogeneous information sources. An adapted form of the algorithm is presented and evaluated on relatively simple and consistent data collected from web services in order to verify the original results, and then on poorly-structured and messy data collected from web sites in order to explore the limits of the algorithm. The results are presented via standard measures and are accompanied by detailed discussions on the nature of the data encountered and an analysis of the strengths and weaknesses of the algorithm and the ways in which it complements other approaches that have been proposed.
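To make the idea of inferring field correspondences concrete, here is a deliberately simple, hypothetical sketch that matches fields of two tables by the overlap of their value sets; it is not the revived algorithm itself, whose details are in the thesis.

```python
# Toy field-correspondence inference by value-set overlap (illustrative).
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def field_correspondences(table_a: dict, table_b: dict, threshold: float = 0.3):
    """Tables are {field_name: list_of_values}; return (field_a, field_b, score)."""
    pairs = []
    for fa, va in table_a.items():
        best = max(table_b, key=lambda fb: jaccard(set(va), set(table_b[fb])))
        score = jaccard(set(va), set(table_b[best]))
        if score >= threshold:
            pairs.append((fa, best, round(score, 2)))
    return pairs

a = {"city": ["Berlin", "Paris"], "zip": ["10115", "75001"]}
b = {"town": ["Paris", "Berlin", "Rome"], "postcode": ["75001", "00100"]}
print(field_correspondences(a, b))  # [('city', 'town', 0.67), ('zip', 'postcode', 0.33)]
```
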
Acknowledging the cost and difficulty of integrating semantically incompatible software systems and information sources, the third work presented is a proposal and a working prototype for a web site to facilitate the resolution of semantic incompatibilities between software systems prior to deployment, based on the commonly-accepted software engineering principle that the cost of correcting faults increases exponentially as projects progress from phase to phase, with post-deployment corrections being significantly more costly than those performed earlier in a project's life. The barriers to collaboration in software development are identified and steps taken to overcome them. The system presented draws on the recent collaborative successes of social and collaborative on-line projects such as SourceForge, Del.icio.us, digg and Wikipedia, and on a variety of techniques for ontology reconciliation, to provide an environment in which data definitions can be shared, browsed and compared, with recommendations automatically presented to encourage developers to adopt data definitions compatible with previously developed systems.
In addition to the experimental works presented, this thesis contributes reflections on the origins of semantic incompatibility with a particular focus on interaction between software systems, and between software systems and their users, as well as detailed analysis of the existing body of research into methods and techniques for overcoming these problems.
4

Hafsia, Raouf. "Semantic interoperability in ad hoc wireless networks." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA390328.

5

Rendo, Fernandez Jose Ignacio. "Semantic interoperability in ad-hoc computing environments." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/3072.

Abstract:
This thesis introduces a novel approach in which multiple heterogeneous devices collaborate to provide useful applications in an ad-hoc network. It proposes a smart home as a particular ubiquitous computing scenario, considering all the requirements given in the literature for success in this kind of system. To that end, we envision a horizontally integrated smart home built up from independent components that provide services. These components are described with enough syntactic, semantic and pragmatic knowledge to accomplish spontaneous collaboration. The objective of this collaboration is domestic use, that is, the provision of valuable services for home residents capable of supporting users in their daily activities. Moreover, for the system to be attractive for potential customers, it should offer high levels of trust and reliability, and not at an excessive price. To achieve this goal, this thesis proposes to study the synergies available when an ontological description of home device functionality is paired with a formal method. We propose an ad-hoc home network in which components are home devices modelled as processes and represented as semantic services by means of the Web Service Ontology (OWL-S). In addition, such services are specified, verified and implemented by means of Communicating Sequential Processes (CSP), a process algebra for describing concurrent systems. The utilisation of an ontology brings the desired levels of knowledge for a system to compose services in an ad-hoc environment. Services are composed by a goal-based system in order to satisfy user needs. Such a system is capable of understanding both service representations and user context information. Furthermore, the inclusion of a formal method contributes additional semantics to check that such compositions will be correctly implemented and executed, achieving the levels of reliability and cost reduction (costs derived from the design, development and implementation of the system) needed for a smart home to succeed.
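The goal-based composition idea can be sketched in a few lines of Python: each service is described by the concepts it consumes and produces (a crude stand-in for OWL-S inputs and outputs), and a plan is chained backwards from the user's goal. All service names and concepts are invented for the illustration.

```python
# Backward-chaining composition of services by matching outputs to inputs.
from collections import deque

SERVICES = {  # name: (required input concepts, produced output concepts)
    "presence_sensor": ([], ["PersonDetected"]),
    "light_controller": (["PersonDetected"], ["LightsOn"]),
    "blind_controller": (["Daylight"], ["BlindsOpen"]),
}

def compose(goal: str):
    """Chain backwards from the goal until all service inputs are satisfied."""
    plan, needed = [], deque([goal])
    while needed:
        concept = needed.popleft()
        for name, (inputs, outputs) in SERVICES.items():
            if concept in outputs and name not in plan:
                plan.append(name)
                needed.extend(inputs)
                break
        else:
            return None  # no service can produce this concept
    return list(reversed(plan))

print(compose("LightsOn"))  # ['presence_sensor', 'light_controller']
```
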
6

Wang, Ying. "Developing Ontology Mapping approaches for Semantic Interoperability." Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527911.

7

Li, Li. "Agent-based ontology management towards interoperability." Swinburne University of Technology, 2005. http://adt.lib.swin.edu.au./public/adt-VSWT20060504.153959.

Abstract:
Ontologies are widely used as data representations for knowledge bases and for marking up data on the emerging Semantic Web. Hence, techniques for managing ontologies come to the centre of any practical and general solution for knowledge-based systems. Challenges arise when we look a step further in order to achieve flexibility and scalability of ontology management. Previous works in ontology management, primarily for ontology mapping, ontology integration and ontology evolution, have exploited only one form or another of ontology management in restrictive settings. However, a distributed and heterogeneous environment makes it necessary for researchers in this field to consider ontology interoperability in order to achieve the vision of the Semantic Web. Several challenges arise when we set our goal to achieve ontology interoperability on the Web. The first one is to decide which software engineering paradigm to employ. Such a paradigm is at the core of ontology management when dynamic properties are involved. It should make it easy to model complex systems and significantly improve current practice in software engineering; moreover, it should extend the range of applications that can feasibly be tackled. The second challenge is to exploit frameworks based on the proposed paradigm. Such a framework should make possible flexibility, interactivity, reusability and reliability for systems which are built on it. The third challenge is to investigate suitable mechanisms to cope with ontology mapping, integration and evolution based on the framework. It is known that predefined rules or hypotheses may not apply given that the environment hosting an ontology changes over time. Fortunately, agents are being advocated as a next-generation model for engineering complex and distributed systems, and some researchers in this field have given a qualitative analysis to justify precisely why the agent-based approach is well suited to engineering complex software systems. From a multi-agent perspective, agent technology fits well in developing applications in uncontrolled and distributed environments which require substantial support for change. Agents in multi-agent systems (MAS) are autonomous and can engage in interactions which are essential for any ongoing agents' actions. A MAS approach is thus regarded as an intuitive and suitable way of modelling dynamic systems. Following the above discussion, an agent-based framework for managing ontologies in a dynamic environment is developed. The framework has several key characteristics, such as flexibility and extensibility, that differentiate this research from others. Three important issues of ontology management are also investigated. It is believed that inter-ontology processes like ontology mapping with logical semantics are foundations of ontology-based applications. Hence, firstly, ontology mapping is discussed: several types of semantic relations are proposed, and mapping mechanisms are developed on top of them. Secondly, based on the previous mapping results, ontology integration is developed to provide abstract views for participating organisations in the presence of a variety of ontologies. Thirdly, as an ontology is subject to evolution in its life cycle, there must be some kind of mechanism to reflect changes in corresponding interrelated ontologies; ontology refinement is investigated to take ontology evolution into consideration.
Process algebra is employed to capture and model information exchanges between ontologies. An agent negotiation strategy is applied to guide corresponding ontologies to react properly. A prototype is built to demonstrate the above design and functionalities. It is applied to ontologies dealing with the subject of beer (types of beer). This prototype consists of four major types of agents: user agents, interface agents, ontology agents, and functionary agents. Evaluations such as querying and consistency checking are conducted on the prototype. This shows that the framework is not only flexible but also completely workable. All agents derived from the framework exhibit their behaviours appropriately, as expected.
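A toy Python sketch of the mapping step follows: deciding which semantic relation holds between two concepts from different ontologies, here via invented synonym sets and a subclass hierarchy (kept in the prototype's beer domain for flavour). The thesis's actual relation types and mapping mechanisms are richer than this.

```python
# Classify the semantic relation between two concepts (illustrative data).
SYNONYMS = {"ale": {"ale", "top-fermented beer"}, "beer": {"beer", "brew"}}
SUBCLASS_OF = {"ale": "beer", "lager": "beer", "beer": "beverage"}

def ancestors(concept):
    while concept in SUBCLASS_OF:
        concept = SUBCLASS_OF[concept]
        yield concept

def relation(a: str, b: str) -> str:
    """Return one of: equivalent, more-specific, more-general, unrelated."""
    if SYNONYMS.get(a, {a}) & SYNONYMS.get(b, {b}):
        return "equivalent"
    if b in ancestors(a):
        return "more-specific"  # a is a kind of b
    if a in ancestors(b):
        return "more-general"
    return "unrelated"

print(relation("ale", "beer"))   # more-specific
print(relation("beer", "brew"))  # equivalent
```
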
8

Lin, Yun. "Semantic Annotation for Process Models : Facilitating Process Knowledge Management via Semantic Interoperability." Doctoral thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-2119.

Abstract:

Business process models representing process knowledge about doing business are necessary for designing Information Systems (IS) solutions in enterprises. Interoperability of business process knowledge in legacy systems is crucial for enterprise systems interoperation and integration due to increased enterprise cooperation and business exchange. Many modern technologies and approaches are deployed to support business process interoperability either at the instance level or the protocol level, such as BPML, WSDL and SOAP. However, we argue that a holistic approach is necessary for semantic interoperability of business process models at the conceptual level when considering the process models as reusable process knowledge for other (new or integrated) IS solutions. This brings requirements to manage semantic heterogeneity of process knowledge in process models which are distributed across different enterprise systems. Semantic annotation is an approach to achieve semantic interoperability of heterogeneous resources. However, such an approach has usually been applied to enhance the semantics of unstructured and structured artifacts (e.g. textual resources [72] [49], and Web services [166] [201]).

The aim of the research is to introduce an ontology-based semantic annotation approach to enrich and reconcile the semantics of process models (a kind of semi-structured artifact) for managing process knowledge. The approach brings together techniques in process modeling, ontology building, semantic matching, and Description Logic inference in order to provide a comprehensive semantic annotation framework. Furthermore, a prototype system that supports the process of ontology-based semantic annotation of heterogeneous process models is described. The practical goal of our approach is to facilitate process knowledge management activities (e.g. discovery, reuse, and integration of process knowledge/models) through enhanced semantic interoperability.

A survey has been performed, identifying semantic heterogeneity in process modeling and investigating semantic technology from theoretical and practical views. Based on the results from the survey, a comprehensive semantic annotation framework has been developed, which provides a method to manage semantic heterogeneity of process models from the following perspectives: first, basic descriptions of process models (profile annotation); second, process modeling languages (meta-model annotation); third, contents of process models (model annotation); and finally, intentions of process model owners (goal annotation). Applying the semantic annotation framework, an ontology-based annotation method has been elaborated, which results in two categories of research activity: ontology building and semantic mapping. In ontology building, we use the Web Ontology Language (OWL), a Semantic Web technology for modeling ontologies. GPO (General Process Ontology), comprising core concepts in most process modeling languages, is proposed; domain concepts are classified in the corresponding categories of GPO as a domain ontology; and design principles for building a goal ontology are introduced in order to serve the annotation of process models pragmatically. In semantic mapping, a set of mapping strategies is developed to conduct the annotation by considering the semantic relationships between model artifacts and ontology references, as well as the semantic inference mechanism supported by OWL DL (Description Logic). The annotation method is finally formalized into a process semantic annotation model, PSAM.
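The flavour of model annotation can be shown with a small rdflib sketch that links a process-model element to a domain-ontology concept through an explicit annotation property; all namespaces and names here are invented stand-ins, not the thesis's GPO or PSAM vocabularies.

```python
# Annotate a process-model element with a domain-ontology concept.
from rdflib import Graph, Namespace, RDF

MODEL = Namespace("http://example.org/processmodel#")
ONT = Namespace("http://example.org/domainontology#")
ANN = Namespace("http://example.org/annotation#")

g = Graph()
g.add((MODEL.ReceiveOrder, RDF.type, ANN.ModelElement))
g.add((MODEL.ReceiveOrder, ANN.annotatedWith, ONT.OrderReception))
g.add((ONT.OrderReception, RDF.type, ONT.Activity))

for element, _, concept in g.triples((None, ANN.annotatedWith, None)):
    print(f"{element} is annotated with {concept}")
```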

The proposed approach has been implemented in a prototype annotation tool, ProSEAT, to facilitate the annotation process. Procedures for applying the semantic annotation approach with the tool are described through an exemplar study. The annotation approach and the prototype tool are evaluated using a quality framework. Furthermore, the applicability of the annotation results is validated by going through a process knowledge management application. The Semantic Web Rule Language (SWRL) is applied in the application demonstration. We argue that the ontology-based annotation approach combined with Semantic Web technology is a feasible approach to reconcile semantic heterogeneity in process knowledge management. Limitations and future work are discussed after concluding this research work.

The contributions of this thesis are summarized as follows. First, a general process ontology is proposed for unifying process representations at a high level of abstraction. Second, a semantic annotation framework is introduced to describe process knowledge systematically. Third, ontology-based annotation methods are elaborated and formalized. Fourth, an annotation system, utilizing the developed formal methods, is designed and implemented. Fifth, a process knowledge management system is outlined as the platform for manipulating the annotation results. Moreover, the application of the approach is demonstrated through a process model integration example.

9

Aydar, Mehmet. "Developing a Semantic Framework for Healthcare Information Interoperability." Kent State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=kent1447721121.

10

Tewolde, Noh Teamrat. "Evaluating a Semantic Approach to Address Data Interoperability." Diss., University of Pretoria, 2014. http://hdl.handle.net/2263/46272.

Abstract:
Semantic approaches have been used to facilitate interoperability in different fields of study. Current literature, however, shows that the semantic approach has not been used to facilitate the interoperability of addresses across domains. Addresses are important reference data used to identify locations and/or delivery points. Interoperability of address data across address or application domains is important because it facilitates the sharing of address data, addressing software, and tools which can be used across domains. The aim of this research study has been to evaluate how a semantic (ontologies) approach could be used to facilitate address data interoperability, and what the challenges and benefits of the semantic approach are. To test the hypothesis and answer the research problems, a multi-tier ontology architecture was designed to integrate address data with different levels of granularity across domains. A four-tier hierarchy of ontologies was argued to be the optimal architecture for address data interoperability. At the top of the hierarchy is the Foundation Tier, which includes vocabularies for location-related information and semantic language rules and concepts. The second tier holds an address reference ontology (called the Base Address Ontology, BAO) that was developed to facilitate interoperability across the address domains; developing an optimal address reference ontology was one of the major goals of the research. Different domain ontologies were developed at the third tier of the hierarchy; domain ontologies extend the vocabulary of the BAO with domain-specific concepts. At the bottom of the hierarchy are application ontologies that are designed for specific purposes within an address domain or domains. Multiple scenarios of address data usage were considered to answer the research questions from different perspectives. Two interoperable address systems were developed to demonstrate proof of concept for the semantic approach. These interoperable environments were created using the UKdata+UPUdata ontology and the UKpostal ontology, which illustrate different use cases of ontologies that facilitate interoperability. Ontology reasoning, inference, and SPARQL query tools were used to share, exchange, and process address data across address domains. Ontology inference was used to exchange address data attributes between the UK administrative address data and UK postal service address data systems in the UKdata+UPUdata ontology. SPARQL queries were, furthermore, run to extract and process information from different perspectives of an address domain and from the combined perspectives of two (UK administrative and UK postal) address domains. The second interoperable system (the UKpostal ontology) illustrated the use of ontology inference tools to share address data between two address data systems that provide different perspectives of a domain.
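An illustrative rdflib sketch of the kind of cross-domain SPARQL query described above, over a toy graph mixing an invented administrative vocabulary with an invented postal one; none of the URIs or properties come from the thesis.

```python
# Query one address across two (invented) address vocabularies.
from rdflib import Graph, Literal, Namespace

ADMIN = Namespace("http://example.org/uk-admin#")
POSTAL = Namespace("http://example.org/uk-postal#")

g = Graph()
g.add((ADMIN.addr42, ADMIN.localAuthority, Literal("Camden")))
g.add((ADMIN.addr42, POSTAL.postcode, Literal("NW1 8QS")))

rows = g.query("""
    PREFIX admin:  <http://example.org/uk-admin#>
    PREFIX postal: <http://example.org/uk-postal#>
    SELECT ?la ?pc WHERE {
        ?addr admin:localAuthority ?la ;
              postal:postcode ?pc .
    }""")
for la, pc in rows:
    print(la, pc)  # Camden NW1 8QS
```
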
11

Ejarque, Artigas Jorge. "Semantic resource management and interoperability between distributed computing platforms." Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/334416.

Abstract:
Distributed Computing is the paradigm where application execution is distributed across different computers connected by a communication network. Distributed Computing platforms have evolved very fast during the last decades: starting from Clusters, where a set of computers work together in a single location; then evolving to Grids, where computing resources are shared by different entities, creating a global computing infrastructure which is available to different user communities; and finally becoming what is currently known as the Cloud, where computing and data resources are provided on demand, in a very dynamic fashion, following the Utility Computing model where you pay only for what you consume. Different types of companies and institutions are exploring the potential benefits of moving their IT services and applications to Cloud infrastructures, in order to decouple the management of computing resources from their core business processes and become more productive. Nevertheless, migrating software to Clouds is not an easy task, since it requires a deep knowledge of the technology to decompose the application, of the capabilities offered by providers, and of how to use them. Besides this complex deployment process, the current cloud marketplace has several providers offering resources with different capabilities, prices and quality, and each provider uses its own properties and APIs for describing and accessing its resources. Therefore, when customers want to execute an application on a provider's resources, they must understand the different providers' descriptions, compare them and select the most suitable resources for their interests. Once the provider and resources have been selected, developers have to interoperate with the different providers' interfaces to perform the application execution steps. To do all the mentioned steps, application developers have to deal with the design and implementation of complex integration procedures. This thesis presents several contributions to overcome the aforementioned problems by providing a platform that facilitates and automates the integration of applications into different providers' infrastructures, lowering the barrier to adopting new distributed computing infrastructures such as Clouds. The achievement of this objective has been split into several parts. In the first part, we have studied how semantic web technologies help to describe applications and to automatically infer a model for deploying them on a distributed platform. Once the application deployment model has been inferred, the second step is finding the resources to deploy and execute the different application components. Regarding this topic, we have studied how semantic web technologies can be applied to the resource allocation problem. Once the different components have been allocated to the providers' resources, it is time to deploy and execute the application components on these resources by invoking a workflow of provider API calls. However, every provider defines its own management interfaces, so the workflow to perform the same actions differs depending on the selected provider. In this thesis, we propose a framework to automatically infer the workflow of provider interface calls required to perform any resource management task. In the last part of the thesis, we have studied how to introduce the benefits of software agents for coordinating application management in distributed platforms. We propose a multi-agent system which is in charge of coordinating the different steps of the application deployment in a distributed way, as well as monitoring the correct execution of the application on the computing resources. The different contributions have been validated with a prototype implementation and a set of use cases.
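A minimal sketch of the semantic resource-allocation step, assuming invented provider descriptions: pick the cheapest provider whose advertised capabilities satisfy a component's requirements. The thesis's matchmaking is ontology-based; a plain dictionary stands in for it here.

```python
# Choose the cheapest provider that meets all requirements (illustrative).
PROVIDERS = {
    "provider_a": {"cores": 8, "memory_gb": 16, "gpu": False, "price": 0.40},
    "provider_b": {"cores": 4, "memory_gb": 32, "gpu": True, "price": 0.90},
}

def satisfies(offer: dict, key: str, required) -> bool:
    have = offer.get(key)
    if isinstance(required, bool):  # exact match for feature flags
        return have == required
    return have is not None and have >= required

def allocate(requirements: dict):
    feasible = [(offer["price"], name) for name, offer in PROVIDERS.items()
                if all(satisfies(offer, k, v) for k, v in requirements.items())]
    return min(feasible)[1] if feasible else None

print(allocate({"cores": 4, "gpu": True}))  # provider_b
```
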
12

Farrugia, James A. "Semantic Interoperability of Geospatial Ontologies: A Model-theoretic Analysis." Fogler Library, University of Maine, 2007. http://www.library.umaine.edu/theses/pdf/FarrugiaJA2007.pdf.

13

Tan, Juan Jim. "Adaptive management and interoperability for secure semantic open services." Thesis, Queen Mary, University of London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418306.

14

Wynden, Rob. "The Health Ontology Mapper (HOM) Method: Semantic Interoperability at Scale." Thesis, University of California, San Francisco, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3587911.

Abstract:

The Health Ontology Mapper (HOM) method is a proposed solution to the semantic gap problem. The HOM method provides the following functionality to enable the scalable deployment of informatics systems involving data from multiple health systems. It allows a relatively small population of biomedical ontology experts to describe the interpretation and analysis of biomedical information collected at thousands of hospitals via a cloud-based terminology server; as such, the HOM method is focused on the scalability of the human talent required for successful informatics projects. The HOM method promotes a means of converting UML-based medical data into OWL format via a cloud-based method of controlling the data loading process. It subscribes to a means of converting data into a HIPAA Limited Data Set format to lower the risk associated with developing large virtual data repositories. It also provides a means of allowing access to medical data over grid computing environments by translating all information via a centralized web-based terminology server technology.

An integrated data repository (IDR) containing aggregations of clinical, biomedical, economic, administrative, and public health data is a key component of research infrastructure, quality improvement and decision support. But most available medical data is encoded using standard data warehouse architecture that employs arbitrary data encoding standards, making queries across disparate repositories difficult. In response to these shortcomings the Health Ontology Mapper (HOM) translates terminologies into formal data encoding standards without altering the underlying source data. The HOM method promotes inter-institutional data sharing and research collaboration, and will ultimately lower the barrier to developing and using an IDR.
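In the spirit of HOM's non-destructive mapping, here is a hypothetical Python sketch that annotates locally coded records with standard terminology codes from a mapping table (standing in for the terminology server) while leaving the source record untouched; the local codes and source systems are invented.

```python
# Map local codes to a standard terminology without altering source data.
LOCAL_TO_STANDARD = {
    # (source system, local code) -> standard concept id (illustrative)
    ("hospital_a", "GLU-H"): "LOINC:2345-7",
    ("hospital_b", "BG01"): "LOINC:2345-7",
}

def map_record(source: str, record: dict) -> dict:
    """Return an annotated copy; the original record is not modified."""
    std = LOCAL_TO_STANDARD.get((source, record["code"]))
    return {**record, "standard_code": std or "UNMAPPED"}

print(map_record("hospital_a", {"code": "GLU-H", "value": 5.4}))
# {'code': 'GLU-H', 'value': 5.4, 'standard_code': 'LOINC:2345-7'}
```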

15

Sambra, Andrei Vlad. "Data ownership and interoperability for a decentralized social semantic web." PhD thesis, Institut National des Télécommunications, 2013. http://tel.archives-ouvertes.fr/tel-00917965.

Abstract:
Ensuring personal data ownership and interoperability for decentralized social Web applications is currently a debated topic, especially when taking into consideration the aspects of privacy and access control. Since users' data are such an important asset of the current business models for most social Websites, companies have no incentive to share data among each other or to offer users real ownership of their own data in terms of control and transparency of data usage. We have concluded therefore that it is important to improve the social Web in such a way that it allows for viable business models while still being able to provide increased data ownership and data interoperability compared to the current situation. In this regard, we have focused our research on three different topics: identity, authentication and access control. First, we tackle the subject of decentralized identity by proposing a new Web standard called "Web Identity and Discovery" (WebID), which offers a simple and universal identification mechanism that is distributed and openly extensible. Next, we move to the topic of authentication, where we propose WebID-TLS, a decentralized authentication protocol that enables secure, efficient and user-friendly authentication on the Web by allowing people to log in using client certificates and without relying on Certification Authorities. We also extend the WebID-TLS protocol, offering delegated authentication and access delegation. Finally, we present our last contribution, the Social Access Control Service, which serves to protect the privacy of Linked Data resources generated by users (e.g. profile data, wall posts, conversations, etc.) by applying two social metrics: the "social proximity distance" and "social contexts".
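A heavily simplified Python sketch of the WebID-TLS core check: dereference the WebID URI and look for the client certificate's RSA modulus among the cert:key entries of the profile. The cert# namespace is the W3C one used by WebID-TLS, but the example URI and modulus are placeholders, and real verification (the TLS handshake, certificate parsing, exponent check) is omitted.

```python
# Check whether a certificate's public key is listed in a WebID profile.
from rdflib import Graph, Namespace

CERT = Namespace("http://www.w3.org/ns/auth/cert#")

def key_listed_in_profile(webid: str, cert_modulus_hex: str) -> bool:
    g = Graph()
    g.parse(webid)  # dereference the WebID profile document
    for _, _, key in g.triples((None, CERT.key, None)):
        for _, _, modulus in g.triples((key, CERT.modulus, None)):
            if str(modulus).lower() == cert_modulus_hex.lower():
                return True
    return False

# key_listed_in_profile("https://example.org/alice#me", "cafe...beef")
```
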
16

Dean, Christopher James. "Semantic correlation of behavior for the interoperability of heterogeneous simulations." Master's thesis, University of Central Florida, 1996. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/13267.

Abstract:
A desirable goal of military simulation training is to provide large-scale or joint exercises to train personnel at higher echelons. To help meet this goal, many of the lower-echelon combatants must consist of computer generated forces, with some of these echelons composed of units from different simulations. The object of the research described is to correlate the behaviors of entities in different simulations so that they can interoperate with one another to support simulation training. Specific source behaviors can be translated to a form expressed in terms of general behaviors, which can then be correlated to any desired specific destination simulation behavior without prior knowledge of the pairing. The correlation, however, does not result in 100% effectiveness, because most simulations have different semantics and were designed for different training needs. The approach relies on an ontology of general behaviors and behavior parameters, together with a database of source behaviors written in terms of these general behaviors that is compared with a database of destination behaviors. This comparison is based upon the similarity of sub-behaviors and behavior parameters. Source behaviors/parameters may be deemed similar based upon their sub-behaviors or sub-parameters and their relationship (more specific or more general) to destination behaviors/parameters. As an additional constraint for correlation, a conversion path from all required destination parameters to a source parameter must be found in order for the behavior to be correlated and thus executed. The length of this conversion path often determines the similarity for behavior parameters, both source and destination. This research has shown, through a set of experiments, that heuristic metrics, in conjunction with a corresponding behavior and parameter ontology, are sufficient for the correlation of heterogeneous simulation behavior. These metrics successfully correlated known pairings provided by experts and provided reasonable correlations for behaviors that have no corresponding destination behavior. For different simulations, these metrics serve as a foundation for more complex methods of behavior correlation.
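A toy Python sketch of the two heuristics named above, with invented behaviors and units: similarity comes from the overlap of sub-behaviors, and it is gated and discounted by the length of a parameter conversion path found by breadth-first search.

```python
# Correlate behaviors by sub-behavior overlap and conversion-path length.
from collections import deque

CONVERSIONS = {"km/h": ["m/s"], "m/s": ["knots"], "knots": []}

def conversion_path_len(src: str, dst: str):
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        unit, dist = queue.popleft()
        if unit == dst:
            return dist
        for nxt in CONVERSIONS.get(unit, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no conversion path exists

def correlate(src_subs: set, dst_subs: set, src_unit: str, dst_unit: str) -> float:
    path = conversion_path_len(src_unit, dst_unit)
    if path is None:
        return 0.0  # behavior cannot be executed without a conversion path
    overlap = len(src_subs & dst_subs) / len(src_subs | dst_subs)
    return overlap / (1 + path)

print(correlate({"move", "turn"}, {"move", "halt"}, "km/h", "knots"))  # ~0.11
```
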
17

Felicissimo, Carolina Howard. "Semantic Web Interoperability: One Strategy for the Taxonomic Ontology Alignment." Pontifícia Universidade Católica do Rio de Janeiro, 2004. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=5864@1.

Abstract:
With the Web evolving towards a Semantic Web, it is believed that the available information will be presented in a meaningful way that allows machines to automatically process its content. Besides this individual processing, better information exchange among Web applications is desired. For this purpose, mechanisms are needed to guarantee semantic interoperability, that is, the identification and compatibility of information. In this direction, ontologies are used as a resource to make available a structured vocabulary, free of ambiguities. Ontologies provide a well-defined standard for structuring information and promote a formalism amenable to automatic processing. In this work, we propose a strategy for ontology interoperability. The Ontology Taxonomic Alignment Component (CATO), which is the result of the implementation of this proposed strategy, provides an automatic taxonomic alignment of ontologies. The alignment is obtained by a three-step process. The first step is a lexical comparison between the concepts of the input ontologies; it uses a trimming mechanism over the related associated concepts as a stop condition. The second step is a structural comparison of the ontologies' hierarchies, used to identify the similarities between their common sub-trees. The third step refines the results of the previous step, classifying the concepts identified as similar into very similar or little similar, according to a pre-defined similarity percentage.
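A compact, purely illustrative Python sketch of such a three-step pipeline: a placeholder lexical matcher, a placeholder structural overlap measure, and a threshold-based classification into "very similar" or "little similar". None of these measures are CATO's actual ones.

```python
# Three sequential alignment steps: lexical, structural, then refinement.
from difflib import SequenceMatcher

def lexical(a: str, b: str) -> float:                       # step 1
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def structural(children_a: set, children_b: set) -> float:  # step 2
    union = children_a | children_b
    return len(children_a & children_b) / len(union) if union else 0.0

def classify(a, b, kids_a, kids_b, threshold=0.5):          # step 3
    score = (lexical(a, b) + structural(kids_a, kids_b)) / 2
    return "very similar" if score >= threshold else "little similar"

print(classify("Vehicle", "Vehicles", {"car", "bus"}, {"car", "truck"}))
# very similar
```
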
18

Lee, Jacob L. (Jacob Lye-Hock). "Integrating information from disparate contexts : a theory of semantic interoperability." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10797.

19

Sambra, Andrei Vlad. "Data ownership and interoperability for a decentralized social semantic web." Thesis, Evry, Institut national des télécommunications, 2013. http://www.theses.fr/2013TELE0027/document.

21

Ducrou, Amanda Joanne. "Complete interoperability in healthcare: technical, semantic and process interoperability through ontology mapping and distributed enterprise integration techniques." Access electronically, 2009. http://ro.uow.edu.au/theses/3048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Hetmank, Lars. "Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-209828.

Full text
Abstract:
The last couple of years have seen a fascinating evolution. While the early Web predominantly focused on human consumption of Web content, the widespread dissemination of social software and Web 2.0 technologies enabled new forms of collaborative content creation and problem solving. These new forms often utilize the principles of collective intelligence, a phenomenon that emerges from a group of people who either cooperate or compete with each other to create a result that is better or more intelligent than any individual result (Leimeister, 2010; Malone, Laubacher, & Dellarocas, 2010). Crowdsourcing has recently gained attention as one of the mechanisms that taps into the power of web-enabled collective intelligence (Howe, 2008). Brabham (2013) defines it as “an online, distributed problem-solving and production model that leverages the collective intelligence of online communities to serve specific organizational goals” (p. xix). Well-known examples of crowdsourcing platforms are Wikipedia, Amazon Mechanical Turk, or InnoCentive. Since the emergence of the term crowdsourcing in 2006, one popular misconception is that crowdsourcing relies largely on an amateur crowd rather than a pool of professional skilled workers (Brabham, 2013). While this might be true for low cognitive tasks, such as tagging a picture or rating a product, it is often not true for complex problem-solving and creative tasks, such as developing a new computer algorithm or creating an impressive product design. This raises the question of how to efficiently allocate an enterprise crowdsourcing task to appropriate members of the crowd. The sheer number of crowdsourcing tasks available at crowdsourcing intermediaries makes it especially challenging for workers to identify a task that matches their skills, experiences, and knowledge (Schall, 2012, p. 2). An explanation of why the identification of appropriate expert knowledge plays a major role in crowdsourcing is partly given in Condorcet’s jury theorem (Sunstein, 2008, p. 25). The theorem states that if the average participant in a binary decision process is more likely to be correct than incorrect, then as the number of participants increases, the higher the probability is that the aggregate arrives at the right answer. When assuming that a suitable participant for a task is more likely to give a correct answer or solution than an improper one, efficient task recommendation becomes crucial to improve the aggregated results in crowdsourcing processes. Although some assumptions of the theorem, such as independent votes, binary decisions, and homogenous groups, are often unrealistic in practice, it illustrates the importance of an optimized task allocation and group formation that consider the task requirements and workers’ characteristics. Ontologies are widely applied to support semantic search and recommendation mechanisms (Middleton, De Roure, & Shadbolt, 2009). However, little research has investigated the potentials and the design of an ontology for the domain of enterprise crowdsourcing. The author of this thesis argues in favor of enhancing the automation and interoperability of an enterprise crowdsourcing environment with the introduction of a semantic vocabulary in the form of an expressive but easy-to-use ontology. The deployment of a semantic vocabulary for enterprise crowdsourcing is likely to provide several technical and economic benefits for an enterprise. These benefits were the main drivers in efforts made during the research project of this thesis:
1. Task allocation: With the utilization of the semantics, requesters are able to form smaller task-specific crowds that perform tasks at lower costs and in less time than larger crowds. A standardized and controlled vocabulary allows requesters to communicate specific details about a crowdsourcing activity within a web page along with other existing displayed information. This has advantages for both contributors and requesters. On the one hand, contributors can easily and precisely search for tasks that correspond to their interests, experiences, skills, knowledge, and availability. On the other hand, crowdsourcing systems and intermediaries can proactively recommend crowdsourcing tasks to potential contributors (e.g., based on their social network profiles).
2. Quality control: Capturing and storing crowdsourcing data increases the overall transparency of the entire crowdsourcing activity and thus allows for a more sophisticated quality control. Requesters are able to check the consistency and receive appropriate support to verify and validate crowdsourcing data according to defined data types and value ranges. Before involving potential workers in a crowdsourcing task, requesters can also judge their trustworthiness based on previously accomplished tasks and hence improve the recruitment process.
3. Task definition: A standardized set of semantic entities supports the configuration of a crowdsourcing task. Requesters can evaluate historical crowdsourcing data to get suggestions for equal or similar crowdsourcing tasks, for example, which incentive or evaluation mechanism to use. They may also decrease their time to configure a crowdsourcing task by reusing well-established task specifications of a particular type.
4. Data integration and exchange: Applying a semantic vocabulary as a standard format for describing enterprise crowdsourcing activities allows not only crowdsourcing systems inside but also crowdsourcing intermediaries outside the company to extract crowdsourcing data from other business applications, such as project management, enterprise resource planning, or social software, and use it for further processing without retyping and copying the data. Additionally, enterprise or web search engines may exploit the structured data and provide enhanced search, browsing, and navigation capabilities, for example, clustering similar crowdsourcing tasks according to the required qualifications or the offered incentives.
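To make the task-allocation benefit above concrete, the following is a hypothetical Python sketch (using rdflib) of how a semantic vocabulary could describe a crowdsourcing task so that it can be matched against a worker's skills; the namespace and property names are illustrative, not the ontology developed in the thesis.

```python
# Hypothetical sketch: describe a crowdsourcing task with a semantic vocabulary
# and recommend it to a worker whose skill set covers the required skills.
from rdflib import Graph, Namespace, Literal, RDF

CS = Namespace("http://example.org/crowd#")  # illustrative namespace

g = Graph()
task = CS.task42
g.add((task, RDF.type, CS.CrowdsourcingTask))
g.add((task, CS.requiresSkill, Literal("python")))
g.add((task, CS.reward, Literal(25)))
g.add((task, CS.deadlineDays, Literal(7)))

worker_skills = {"python", "sparql"}

# Find tasks and their required skills, then check the worker covers them.
q = """
SELECT ?task ?skill WHERE {
    ?task a cs:CrowdsourcingTask ;
          cs:requiresSkill ?skill .
}
"""
for row in g.query(q, initNs={"cs": CS}):
    if str(row.skill) in worker_skills:
        print(f"recommend {row.task} (skill match: {row.skill})")
```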
APA, Harvard, Vancouver, ISO, and other styles
23

Svensson, Martin. "Promoting Semantic Interoperability of Contextual Metadata for Learner Generated Digital Content." Licentiate thesis, Linnaeus University, School of Computer Science, Physics and Mathematics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-8453.

Full text
Abstract:

Technological advancements in computing have led to a reality where computational devices are more mobile, connected and context aware than ever before. Several of these devices are primarily designed for or support the creation of digital content via built-in or attachable sensors, e.g. mobile phones. The portability and connectivity of mobile devices make them suitable tools to support learning experiences; their features can be used to generate digital content and metadata related to the particular learning situation. These types of objects, referred to as Emerging Learning Objects (ELOs), introduce challenges in terms of metadata enrichment as their metadata should reflect aspects related to the particular learning situation in which they were created to be properly indexed. A claim made in this thesis is that semantic interoperability of ELO metadata is an integral concern that needs to be explored in order to benefit from these metadata outside custom tailored applications and systems. Therefore, the main research question explored in this thesis focuses on the ability to enrich ELOs with semantically interoperable contextual metadata.

This thesis comprises a collection of five peer-reviewed articles that describe interrelated stages of research in pursuit of an answer to the main research question. The overall research process consisted of three main stages: a literature review; the development of a system artefact; and the exploration of the technological solution (Linked Data) applied in the system artefact. An instantiation of the Unified Process guided the development of the system artefact.

The outcomes of these activities provide insights on how to perceive the relationship between context and contextual metadata, as well as properties related to a particular technological solution, namely data distribution, flexibility and expressivity. In order to decouple the findings from a particular instance of technology, a generalization effort in the analysis identified two generic factors that affect the semantic interoperability of metadata: the level of ontological consensus and the level of metadata expressivity. The main conclusion of this thesis is that until the constituent parts of context are agreed upon, metadata expressivity is an important feature for promoting semantic interoperability of ELO contextual metadata.

APA, Harvard, Vancouver, ISO, and other styles
24

Yarimagan, Yalin. "Semantic Enrichment For The Automated Customization And Interoperability Of Ubl Schemas." Phd thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609427/index.pdf.

Full text
Abstract:
Universal Business Language (UBL) is an initiative to develop common business document schemas to provide standardization in the electronic business domain. However, businesses operate in different industry, geopolitical, and regulatory contexts and consequently they have different rules and requirements for the information they exchange. In this thesis, we provide semantic enrichment mechanisms for UBL that (i) allow automated customization of document schemas in response to contextual needs and (ii) maintain interoperability among different schema versions. For this purpose, we develop ontologies to provide machine processable representations for context domains, annotate custom components using classes from those ontologies and show that using these semantic annotations, automated discovery of components and automated customization of schemas becomes possible. We then provide a UBL Component Ontology that represents the semantics of individual components and their structural relationships and show that when an ontology reasoner interprets the expressions from this ontology, it computes equivalence and class-subclass relationships between classes representing components with similar content. Finally we describe how these computed relationships are used by a translation mechanism to establish interoperability among schema versions customized for different business context values.
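As an illustration of the reasoning step this abstract describes, the following sketch (assuming rdflib and owlrl are installed) shows how annotating a customized schema component with a context ontology class lets a reasoner relate it to a more generic component; all names are illustrative, not the thesis's ontologies.

```python
# Hedged sketch of ontology-backed schema component discovery: a customized
# component annotated with a narrower class becomes discoverable under the
# generic class after RDFS entailment is materialised.
from rdflib import Graph, Namespace, RDF, RDFS
import owlrl

EX = Namespace("http://example.org/ubl#")  # illustrative namespace

g = Graph()
# Context ontology: a US-specific invoice address is a kind of invoice address.
g.add((EX.USInvoiceAddress, RDFS.subClassOf, EX.InvoiceAddress))
# Semantic annotation of a customized schema component.
g.add((EX.MyAddressComponent, RDF.type, EX.USInvoiceAddress))

# Materialise RDFS entailments (type propagation along subClassOf).
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

# The customized component can now be discovered as a generic InvoiceAddress.
print((EX.MyAddressComponent, RDF.type, EX.InvoiceAddress) in g)  # True
```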
APA, Harvard, Vancouver, ISO, and other styles
25

Chungoora, Nitishal. "A framework to support semantic interoperability in product design and manufacture." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/5897.

Full text
Abstract:
It has been recognised that the ability to communicate the meaning of concepts and their intent within and across system boundaries, for supporting key decisions in product design and manufacture, is impaired by the semantic interoperability issues that are presently encountered. This work contributes to the field of semantic interoperability in product design and manufacture. A contribution is made to the understanding and application of relevant concepts coming from the computer science world, notably ontology-based approaches, to help resolve semantic interoperability problems. A novel ontological approach, identified as the Semantic Manufacturing Interoperability Framework (SMIF), is proposed following an exploration of the important requirements to be satisfied. The framework, built on top of a Common Logic-based ontological formalism, consists of a manufacturing foundation that captures the semantics of core feature-based design and manufacture concepts, over which the specialisation of domain models can take place. Furthermore, the framework supports mechanisms for the reconciliation of semantics, thereby improving the knowledge-sharing capability between heterogeneous domains that need to interoperate and have been based on the same manufacturing foundation. This work also analyses a number of test case scenarios in which the framework has been deployed to foster knowledge representation and the reconciliation of models involving products with standard hole features and their related machining process sequences. The test cases have shown that the Semantic Manufacturing Interoperability Framework (SMIF) provides effective support towards achieving semantic interoperability in product design and manufacture. Proposed extensions to the framework are additionally identified so as to provide a view on imminent future work.
APA, Harvard, Vancouver, ISO, and other styles
26

Tebai, Wissem. "Establishing semantic interoperability under denied, disconnected, intermittent, and limited telecommunications conditions." Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/42738.

Full text
Abstract:
Approved for public release; distribution is unlimited
The different approaches and technologies used for system integration and interoperability are explored in this thesis. Of particular interest is the use of the Data Distribution Service (DDS) open standard to integrate the components of any given command and control system working in a denied network environment. The method used for this research includes a review of past literature on the different specifications for implementing semantic interoperability for better and more efficient integration, as well as an exploration of the different functionalities and capabilities of DDS. We present a middleware design based on the DDS specifications as developed by the Object Management Group. The design was influenced by the different limitations and requirements of the networking environment, and the proposed architecture also offers ways to implement semantic interoperability solutions in the system. Finally, the thesis describes a deployment scenario with a small network in order to accurately define the system controls that could impact the overall functionality of the DDS design, primarily through the Quality of Service (QoS) provisions.
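To illustrate the QoS idea the thesis exploits, here is a deliberately simplified toy sketch in plain Python, not the DDS API: a topic with a bounded history depth, mimicking the DDS HISTORY QoS policy, lets late joiners in an intermittently connected network still receive recent samples.

```python
# Toy publish/subscribe topic with a QoS-style history depth (illustrative
# only; real DDS implementations provide this through QoS policies).
from collections import deque

class Topic:
    def __init__(self, name: str, history_depth: int = 1):
        # history_depth mimics a DDS HISTORY QoS: late joiners receive
        # up to this many of the most recent samples.
        self.name = name
        self.history = deque(maxlen=history_depth)
        self.subscribers = []

    def publish(self, sample):
        self.history.append(sample)
        for deliver in self.subscribers:
            deliver(sample)

    def subscribe(self, deliver):
        self.subscribers.append(deliver)
        for sample in self.history:  # replay history to the late joiner
            deliver(sample)

track = Topic("track.position", history_depth=5)
track.publish({"id": "V1", "lat": 36.6, "lon": -121.9})
# A node that joins after a network partition still gets recent samples:
track.subscribe(lambda s: print("received", s))
```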
APA, Harvard, Vancouver, ISO, and other styles
27

Mukwaya, Jovia Namugerwa. "An Investigation of Semantic Interoperability with EHR systems for Precision Dosing." Thesis, KTH, Medicinteknik och hälsosystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279143.

Full text
Abstract:
In healthcare, vulnerable populations that use medications with a narrow therapeutic index and wide interpatient PK/PD (pharmacokinetic/pharmacodynamic) variability are increasing. For these patients, variable dosage regimens may result in severe therapeutic failures or adverse drug reactions (ADRs), so improved monitoring of patient response to medication and personalization of treatment is warranted. Precision dosing aims to individualize drug regimens for each patient based on independent factors obtained from the patient's clinical records; personalization of dosing increases the accuracy and efficiency of medication delivery. This can be achieved by utilizing the wide range of information that Electronic Health Records (EHRs) contain: the patient's medical history, diagnoses, laboratory test results, demographics, treatment plans and biomarker data, all of which can be exploited to generate a patient-specific treatment regimen. For example, Fast Healthcare Interoperability Resources (FHIR) is an existing healthcare standard that provides a framework on which the semantic exchange of meaningful clinical information can be built, such as using an ontology as a decision support tool to achieve precision medicine. The purpose of this thesis is to investigate the feasibility of interoperability in EHRs and to propose an ontology framework for precision dosing using currently existing health standards. The methodology involved semi-structured interviews with professionals in relevant areas of expertise and document analysis of the existing literature, from which a precision dosing ontology framework was developed. Results show the key tenets of such an ontology framework, together with relevant drugs and their covariates. The thesis then investigates how data requirements in EHR systems, IT platforms, and the implementation and integration of Model-Informed Precision Dosing (MIPD) can be evaluated to cater for interoperability, and offers recommendations. With modern healthcare striving for personalized care, precision dosing would offer an improved therapeutic experience for the patient.
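As a hedged illustration of the data flow the thesis investigates, the sketch below pulls patient covariates from a simplified FHIR-like record and individualizes a dose; the resource structure is abridged and the dosing rule is invented purely for illustration, not a clinical algorithm.

```python
# Illustrative sketch: covariates from an EHR-style record feed a toy
# dose-individualization rule, and the result is written back into a
# simplified FHIR-like MedicationRequest structure.
patient = {
    "resourceType": "Patient",
    "id": "pat-001",
    "birthDate": "1952-03-14",
}
observations = {"weight_kg": 62.0, "creatinine_clearance_ml_min": 45.0}

def individualised_dose(standard_dose_mg: float, covariates: dict) -> float:
    """Toy covariate-based adjustment (illustrative only, not clinical advice)."""
    dose = standard_dose_mg * covariates["weight_kg"] / 70.0
    if covariates["creatinine_clearance_ml_min"] < 50.0:
        dose *= 0.75  # reduce for impaired renal clearance
    return round(dose, 1)

medication_request = {
    "resourceType": "MedicationRequest",
    "status": "draft",
    "intent": "proposal",
    "subject": {"reference": "Patient/pat-001"},
    "dosageInstruction": [
        {"text": f"{individualised_dose(100.0, observations)} mg once daily"}
    ],
}
print(medication_request["dosageInstruction"][0]["text"])
```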
APA, Harvard, Vancouver, ISO, and other styles
28

Moreno, Conde A. "Quality framework for semantic interoperability in health informatics : definition and implementation." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1529311/.

Full text
Abstract:
Aligned with the increased adoption of Electronic Health Record (EHR) systems, it is recognized that semantic interoperability provides benefits for promoting patient safety and continuity of care. This thesis proposes a framework of quality metrics and recommendations for developing semantic interoperability resources, especially focused on clinical information models, which are defined as formal specifications of structure and semantics for representing EHR information for a specific domain or use case. This research started with an exploratory stage that performed a systematic literature review together with an international survey about clinical information modelling best practice and barriers. The results obtained were used to define a set of quality models that were validated through Delphi study methodologies and an end user survey, and also compared with related quality standards in those areas where standardization bodies had a related work programme. According to the research results obtained, the defined framework is based on the following models:
Development process quality model: evaluates the alignment with best practice in clinical information modelling and defines metrics for evaluating the tools applied as part of this process.
Product quality model: evaluates the semantic interoperability capabilities of clinical information models based on the defined meta-data, data elements and terminology bindings.
Quality in use model: evaluates the suitability of adopting semantic interoperability resources by end users in their local projects and organisations.
Finally, the quality in use model was implemented within the European Interoperability Asset register developed by the EXPAND project, with the aim of applying this quality model in a broader scope so that the register can contain any relevant material for guiding the definition, development and implementation of interoperable eHealth systems in Europe. Several European projects have already expressed interest in using the register, which will now be sustained by the European Institute for Innovation through Health Data.
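To illustrate how metrics from such quality models might be aggregated, here is a minimal sketch assuming a simple weighted-sum scheme; the metric names and weights are illustrative, not the framework's actual metrics.

```python
# Minimal sketch: score a clinical information model against one quality model
# as a weighted sum of per-metric scores (each normalized to [0, 1]).
def quality_score(metric_scores: dict, weights: dict) -> float:
    """Weighted average of per-metric scores."""
    total_weight = sum(weights.values())
    return sum(metric_scores[m] * w for m, w in weights.items()) / total_weight

development_process = {"best_practice_alignment": 0.8, "tooling_support": 0.6}
weights = {"best_practice_alignment": 2.0, "tooling_support": 1.0}

print(f"development process quality: {quality_score(development_process, weights):.2f}")
```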
APA, Harvard, Vancouver, ISO, and other styles
29

Alves, Gonçalo Franco Pita Louro. "A framework for semantic checking of information systems." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8753.

Full text
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical and Computer Engineering
In this day and age, enterprises often find that their business benefits greatly if they collaborate with others in order to be more competitive and productive. However, these collaborations often come with costs, since the worldwide diversity of communities has led to the development of various knowledge representation elements, namely ontologies, that in most cases are not semantically equivalent. Consequently, even though some enterprises may operate in the same domain, they can have different representations of that same knowledge. Moreover, even after this issue has been solved and a semantic alignment established with other systems, the systems do not remain unchanged, so a regular check of their semantic alignment is needed. To aid in the resolution of this semantic interoperability problem, the author proposes a framework that provides generic solutions and a means of validating the semantic consistency of ontologies in various scenarios, thus maintaining the interoperability state between the enrolled systems.
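A minimal sketch of the consistency-checking idea described above: a stored mapping is flagged for re-alignment once the reference ontology no longer contains the mapped concept. The ontology contents and mappings below are invented for illustration.

```python
# Illustrative sketch: detect mappings that became stale after the reference
# ontology evolved, so the alignment can be re-validated.
mappings = [
    {"source": "LocalOnt#Client", "target": "RefOnt#Customer"},
    {"source": "LocalOnt#Bill", "target": "RefOnt#Invoice"},
]
# Current concepts of the reference ontology; "Invoice" was renamed away.
reference_ontology_concepts = {"RefOnt#Customer", "RefOnt#PurchaseOrder"}

def stale_mappings(mappings, target_concepts):
    return [m for m in mappings if m["target"] not in target_concepts]

for m in stale_mappings(mappings, reference_ontology_concepts):
    print("re-alignment needed for", m["source"], "->", m["target"])
```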
APA, Harvard, Vancouver, ISO, and other styles
30

Hetmank, Lars. "An ontology for enhancing automation and interoperability in Enterprise Crowdsourcing Environments." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. https://tud.qucosa.de/id/qucosa%3A29780.

Full text
Abstract:
The last couple of years have seen a fascinating evolution. While the early Web predominantly focused on human consumption of Web content, the widespread dissemination of social software and Web 2.0 technologies enabled new forms of collaborative content creation and problem solving. These new forms often utilize the principles of collective intelligence, a phenomenon that emerges from a group of people who either cooperate or compete with each other to create a result that is better or more intelligent than any individual result (Leimeister, 2010; Malone, Laubacher, & Dellarocas, 2010). Crowdsourcing has recently gained attention as one of the mechanisms that taps into the power of web-enabled collective intelligence (Howe, 2008). Brabham (2013) defines it as “an online, distributed problem-solving and production model that leverages the collective intelligence of online communities to serve specific organizational goals” (p. xix). Well-known examples of crowdsourcing platforms are Wikipedia, Amazon Mechanical Turk, or InnoCentive. Since the emergence of the term crowdsourcing in 2006, one popular misconception is that crowdsourcing relies largely on an amateur crowd rather than a pool of professional skilled workers (Brabham, 2013). While this might be true for low cognitive tasks, such as tagging a picture or rating a product, it is often not true for complex problem-solving and creative tasks, such as developing a new computer algorithm or creating an impressive product design. This raises the question of how to efficiently allocate an enterprise crowdsourcing task to appropriate members of the crowd. The sheer number of crowdsourcing tasks available at crowdsourcing intermediaries makes it especially challenging for workers to identify a task that matches their skills, experiences, and knowledge (Schall, 2012, p. 2). An explanation of why the identification of appropriate expert knowledge plays a major role in crowdsourcing is partly given in Condorcet’s jury theorem (Sunstein, 2008, p. 25). The theorem states that if the average participant in a binary decision process is more likely to be correct than incorrect, then as the number of participants increases, the higher the probability is that the aggregate arrives at the right answer. When assuming that a suitable participant for a task is more likely to give a correct answer or solution than an improper one, efficient task recommendation becomes crucial to improve the aggregated results in crowdsourcing processes. Although some assumptions of the theorem, such as independent votes, binary decisions, and homogenous groups, are often unrealistic in practice, it illustrates the importance of an optimized task allocation and group formation that consider the task requirements and workers’ characteristics. Ontologies are widely applied to support semantic search and recommendation mechanisms (Middleton, De Roure, & Shadbolt, 2009). However, little research has investigated the potentials and the design of an ontology for the domain of enterprise crowdsourcing. The author of this thesis argues in favor of enhancing the automation and interoperability of an enterprise crowdsourcing environment with the introduction of a semantic vocabulary in the form of an expressive but easy-to-use ontology. The deployment of a semantic vocabulary for enterprise crowdsourcing is likely to provide several technical and economic benefits for an enterprise. These benefits were the main drivers in efforts made during the research project of this thesis:
1. Task allocation: With the utilization of the semantics, requesters are able to form smaller task-specific crowds that perform tasks at lower costs and in less time than larger crowds. A standardized and controlled vocabulary allows requesters to communicate specific details about a crowdsourcing activity within a web page along with other existing displayed information. This has advantages for both contributors and requesters. On the one hand, contributors can easily and precisely search for tasks that correspond to their interests, experiences, skills, knowledge, and availability. On the other hand, crowdsourcing systems and intermediaries can proactively recommend crowdsourcing tasks to potential contributors (e.g., based on their social network profiles).
2. Quality control: Capturing and storing crowdsourcing data increases the overall transparency of the entire crowdsourcing activity and thus allows for a more sophisticated quality control. Requesters are able to check the consistency and receive appropriate support to verify and validate crowdsourcing data according to defined data types and value ranges. Before involving potential workers in a crowdsourcing task, requesters can also judge their trustworthiness based on previously accomplished tasks and hence improve the recruitment process.
3. Task definition: A standardized set of semantic entities supports the configuration of a crowdsourcing task. Requesters can evaluate historical crowdsourcing data to get suggestions for equal or similar crowdsourcing tasks, for example, which incentive or evaluation mechanism to use. They may also decrease their time to configure a crowdsourcing task by reusing well-established task specifications of a particular type.
4. Data integration and exchange: Applying a semantic vocabulary as a standard format for describing enterprise crowdsourcing activities allows not only crowdsourcing systems inside but also crowdsourcing intermediaries outside the company to extract crowdsourcing data from other business applications, such as project management, enterprise resource planning, or social software, and use it for further processing without retyping and copying the data. Additionally, enterprise or web search engines may exploit the structured data and provide enhanced search, browsing, and navigation capabilities, for example, clustering similar crowdsourcing tasks according to the required qualifications or the offered incentives.
Summary: Hetmank, L. (2014). Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments (Summary).
Article 1: Hetmank, L. (2013). Components and Functions of Crowdsourcing Systems – A Systematic Literature Review. In 11th International Conference on Wirtschaftsinformatik (WI). Leipzig.
Article 2: Hetmank, L. (2014). A Synopsis of Enterprise Crowdsourcing Literature. In 22nd European Conference on Information Systems (ECIS). Tel Aviv.
Article 3: Hetmank, L. (2013). Towards a Semantic Standard for Enterprise Crowdsourcing – A Scenario-based Evaluation of a Conceptual Prototype. In 21st European Conference on Information Systems (ECIS). Utrecht.
Article 4: Hetmank, L. (2014). Developing an Ontology for Enterprise Crowdsourcing. In Multikonferenz Wirtschaftsinformatik (MKWI). Paderborn.
Article 5: Hetmank, L. (2014). An Ontology for Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments (Technical Report). Retrieved from http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-155187.
APA, Harvard, Vancouver, ISO, and other styles
31

Kabak, Yildiray. "Semantic Interoperability Of The Un/cefact Ccts Based Electronic Business Document Standards." Phd thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610689/index.pdf.

Full text
Abstract:
The interoperability of the electronic documents exchanged in eBusiness applications is an important problem in industry. Currently, this problem is handled by mapping experts who understand the meaning of every element in the involved document schemas and define the mappings among them, which is a very costly and tedious process. In order to improve electronic document interoperability, the UN/CEFACT produced the Core Components Technical Specification (CCTS), which defines a common structure and semantic properties for document artifacts. However, at present, this document content information is available only through text-based search mechanisms and tools. In this thesis, the semantics of CCTS-based business document standards is explicated through a formal, machine-processable language as an ontology. In this way, it becomes possible to compute a harmonized ontology, which gives the similarities among document schema ontology classes of different document standards through both the semantic properties they share and the semantic equivalences established through reasoning. However, as expected, the harmonized ontology only helps to discover the similarities of structurally and semantically equivalent elements. In order to handle structurally different but semantically similar document artifacts, heuristic rules are developed describing the possible ways of organizing simple document artifacts into compound artifacts as defined in the CCTS methodology. Finally, the equivalences discovered among document schema ontologies are used for the semi-automated generation of XSLT definitions for the translation of real-life document instances.
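To illustrate the harmonization idea, here is a small sketch that scores the similarity of document schema components from two standards by the CCTS-style semantic properties they share (Jaccard overlap); the property sets and the threshold are illustrative, not the thesis's actual method.

```python
# Illustrative sketch: flag candidate equivalences between document schema
# components of two standards based on overlapping semantic properties.
def property_similarity(props_a: set, props_b: set) -> float:
    """Jaccard overlap of two property sets."""
    return len(props_a & props_b) / len(props_a | props_b)

ubl_address = {"StreetName", "CityName", "PostalZone", "Country"}
other_address = {"StreetName", "CityName", "PostCode", "Country"}

score = property_similarity(ubl_address, other_address)
if score >= 0.6:
    print(f"candidate equivalence, similarity={score:.2f}")
```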
APA, Harvard, Vancouver, ISO, and other styles
32

Shedd, Stephen F. "Semantic and syntactic object correlation in the object-oriented method for interoperability." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FShedd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Tufan, Emrah. "Context Based Interoperability To Support Infrastructure Management In Municipalities." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612535/index.pdf.

Full text
Abstract:
Interoperability between the Geographic Information Systems (GIS) of different infrastructure companies is still a problem to be handled. Infrastructure companies deal with many operations as part of their daily routine, such as regular maintenance, and sometimes with unexpected situations, such as a malfunction due to a natural event like a flood or an earthquake. These situations may affect all companies, and the affected infrastructure companies respond to their effects. Responses may in turn have consequences, and in order to model these consequences on GIS, the GISs must be able to share information, which brings the interoperability problem into the scene. The present research aims at finding an answer to the interoperability problem between the GISs of different companies by considering contextual information. Throughout the study, geographical features are handled as the major concern and the interoperability problem is examined by targeting them. The model constructed in this research is based on ontology, and because the meaning of the terms in an ontology depends on context, ontology-based context modelling is also used. In this research, a system implementation is done for two different GISs of two infrastructure companies.
APA, Harvard, Vancouver, ISO, and other styles
34

Lera, Castro Isaac. "Ontology Matching based On Class Context: to solve interoperability problem at Semantic Web." Doctoral thesis, Universitat de les Illes Balears, 2012. http://hdl.handle.net/10803/84074.

Full text
Abstract:
When we look at the amount of resources spent converting formats into other formats, that is to say, making information systems useful, we realise that our communication model is inefficient. The transformation of information, like the transformation of energy, remains inefficient because of the efficiency of the converters. In this work, we propose a new way to "convert" information: a mapping algorithm for semantic information based on the context of that information, in order to redefine the framework where this paradigm merges with multiple techniques. Our main goal is to offer a new view from which we can make further progress and, ultimately, streamline and minimize the communication chain in integration processes.
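As a sketch of context-based matching in general (not the thesis's actual algorithm), the snippet below combines a label similarity with the overlap of each class's context terms, so that identically named classes from different domains score low; the weighting and data are made up for illustration.

```python
# Illustrative sketch: score two ontology classes by blending label similarity
# with the Jaccard overlap of their context terms (neighbouring vocabulary).
from difflib import SequenceMatcher

def context_similarity(label_a, ctx_a, label_b, ctx_b, alpha=0.5):
    label_sim = SequenceMatcher(None, label_a.lower(), label_b.lower()).ratio()
    ctx_sim = len(ctx_a & ctx_b) / len(ctx_a | ctx_b) if ctx_a | ctx_b else 0.0
    return alpha * label_sim + (1 - alpha) * ctx_sim

# "Bank" (finance) vs "Bank" (geography): identical labels, disjoint contexts.
print(context_similarity("Bank", {"account", "loan"}, "Bank", {"river", "shore"}))
# Different labels, shared context: still a plausible match candidate.
print(context_similarity("Bank", {"account", "loan"}, "Credit institution", {"account", "loan"}))
```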
APA, Harvard, Vancouver, ISO, and other styles
35

Xie, Ming, and Xiao Zhou. "Analyzing the usability of BORO methodology for semantic interoperability in the military context." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138145.

Full text
Abstract:
In the military context, more and more international coalitions among allied forces have taken place. Information from heterogeneous systems needs to be exchanged without misinterpretation so that the participating actors can share a common situational awareness regarding certain data and/or messages. This, in turn, requires the preservation of the intended meaning not only on the syntax, language, and representation level, but on a semantic level as well. The application domain of the Business Object Reference Ontology Program (BORO) method focuses on the development of ontological or semantic models for large complex operational applications, especially in the military context. It was chosen by FOI, the Swedish Defence Research Agency, in the field of Information Systems, to apply to their Semantic Interoperability (SI) project. The goal of this thesis is to investigate how the BORO method can be implemented for aligning the data and/or messages between the Swedish Armed Forces and other military organizations on a semantic level for the FOI SI project. To achieve this goal, the design science research methodology is conducted through a series of steps. The analysis regarding the usability of the BORO method for FOI to obtain semantic interoperability in its project is presented as the result of this thesis, and can also be used as a reference for other military organizations when conducting information exchange activities.
APA, Harvard, Vancouver, ISO, and other styles
36

Tutcher, Jonathan. "Development of semantic data models to support data interoperability in the rail industry." Thesis, University of Birmingham, 2016. http://etheses.bham.ac.uk//id/eprint/6774/.

Full text
Abstract:
Railways are large, complex systems that comprise many heterogeneous subsystems and parts. As the railway industry continues to enjoy increasing passenger and freight custom, ways of deriving greater value from the knowledge within these subsystems are increasingly sought. Interfaces to and between systems are rare, making data sharing and analysis difficult. Semantic data modelling provides a method of integrating data from disparate sources by encoding knowledge about a problem domain or world into machine-interpretable logic and using this knowledge to encode and infer data context and meaning. The uptake of this technique in the Semantic Web and Linked Data movements in recent years has provided a mature set of techniques and toolsets for designing and implementing ontologies and linked data applications. This thesis demonstrates ways in which semantic data models and OWL ontologies can be used to foster data exchange across the railway industry. It sets out a novel methodology for the creation of industrial semantic models, and presents a new set of railway domain ontologies to facilitate integration of infrastructure-centric railway data. Finally, the design and implementation of two prototype systems is described, each of which use the techniques and ontologies in solving a known problem.
APA, Harvard, Vancouver, ISO, and other styles
37

Hua, Yingbing [Verfasser], and B. [Akademischer Betreuer] Hein. "Methods for Semantic Interoperability in AutomationML-based Engineering / Yingbing Hua ; Betreuer: B. Hein." Karlsruhe : KIT-Bibliothek, 2021. http://d-nb.info/1227451202/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Hsinchun. "Artificial Intelligence Techniques for Emerging Information Systems Applications: Trailblazing Path to Semantic Interoperability." Wiley Periodicals, Inc, 1998. http://hdl.handle.net/10150/106389.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
Introduction to Special Issue of JASIS on AI Techniques for Emerging Information Systems Applications in which five articles report research in adopting artificial intelligence techniques for emerging information systems applications.
APA, Harvard, Vancouver, ISO, and other styles
39

Sigwele, Tshiamo, Yim Fun Hu, M. Ali, Jiachen Hou, M. Susanto, and H. Fitriawan. "An intelligent edge computing based semantic gateway for healthcare systems interoperability and collaboration." IEEE, 2018. http://hdl.handle.net/10454/17552.

Full text
Abstract:
The use of Information and Communications Technology (ICT) in healthcare has the potential to minimize medical errors, reduce healthcare costs and improve collaboration between healthcare systems, which can dramatically improve healthcare service quality. However, interoperability between different healthcare systems (clinics/hospitals/pharmacies) remains an open research issue due to a lack of collaboration and exchange of healthcare information. To solve this problem, cross healthcare system collaboration is required. This paper proposes a conceptual semantics-based healthcare collaboration framework based on Internet of Things (IoT) infrastructure that is able to offer secure cross-system information and knowledge exchange between different healthcare systems seamlessly, in a form readable by both machines and humans. In the proposed framework, an intelligent semantic gateway is introduced, where a web application with a RESTful Application Programming Interface (API) is used to expose the healthcare information of each system for collaboration. A case study exposing a patient's data between two different healthcare systems was practically demonstrated, in which a pharmacist accesses the patient's electronic prescription from the clinic.
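As a hedged sketch of the gateway idea, here is a small Flask application exposing a patient's electronic prescription through a RESTful endpoint; the route, fields and data are illustrative, not the paper's implementation.

```python
# Illustrative sketch: a minimal RESTful endpoint through which a collaborating
# system (e.g., a pharmacy) could retrieve a patient's electronic prescription.
from flask import Flask, jsonify

app = Flask(__name__)

PRESCRIPTIONS = {
    "pat-001": [{"drug": "amoxicillin", "dose": "500 mg", "frequency": "3x daily"}],
}

@app.route("/api/patients/<patient_id>/prescriptions")
def get_prescriptions(patient_id):
    # A real deployment would enforce authentication and consent checks here.
    return jsonify(PRESCRIPTIONS.get(patient_id, []))

if __name__ == "__main__":
    app.run(port=5000)
```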
British Council Institutional Links grant under the BEIS-managed Newton Fund.
APA, Harvard, Vancouver, ISO, and other styles
40

Parvaresh, Karan Ebrahim. "Extending Building Information Modeling (BIM) interoperability to geo-spatial domain using semantic web technology." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53213.

Full text
Abstract:
As Building Information Modeling (BIM) applications become more sophisticated and used within other knowledge domains, the limitations of existing data exchange and sharing methods become apparent. The integration of BIM and Geographic Information Systems (GIS) can offer substantial benefits for managing the planning process during the design and construction stages. Currently, building (and geospatial) data are shared between BIM software tools through a common data format, such as the Industry Foundation Classes (IFC). Because of the diversity and complexity of domain knowledge across BIM and GIS systems, however, these syntactic approaches are not capable of overcoming semantic heterogeneity. This study uses semantic web technology to ensure the highest level of interoperability between existing BIM and GIS tools. The proposed approach is composed of three main steps: ontology construction, semantic integration through interoperable data formats and standards, and querying of heterogeneous information sources. Because no application ontology is available that encompasses all IFC classes with their different attributes, we first develop an IFC-compliant ontology describing the hierarchical structure of BIM objects. The building's elements and GIS data can then be translated into Semantic Web standard formats. Once the information has been gathered from different sources and transformed into an appropriate semantic web format, the SPARQL query language is used in the last step to retrieve this information from a dataset. The completeness of the methodology is validated through a case study and two use case examples.
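To illustrate the final querying step, here is a small sketch (assuming rdflib) that lifts a couple of building-element facts into RDF and retrieves them with SPARQL; the vocabulary and data are illustrative, not the study's IFC-compliant ontology.

```python
# Illustrative sketch: once building data is lifted to RDF, a SPARQL query can
# combine element type, location and attribute filters in one request.
from rdflib import Graph, Namespace, Literal, RDF

BLD = Namespace("http://example.org/building#")  # illustrative vocabulary

g = Graph()
g.add((BLD.wall_12, RDF.type, BLD.IfcWall))
g.add((BLD.wall_12, BLD.storey, Literal(2)))
g.add((BLD.wall_12, BLD.fireRating, Literal("REI60")))

q = """
SELECT ?element ?rating WHERE {
    ?element a bld:IfcWall ;
             bld:storey 2 ;
             bld:fireRating ?rating .
}
"""
for row in g.query(q, initNs={"bld": BLD}):
    print(row.element, row.rating)
```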
APA, Harvard, Vancouver, ISO, and other styles
41

Bakillah, Mohamed. "Real Time Semantic Interoperability in ad hoc Networks of Geospatial Databases : Disaster Management Context." Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/29101/29101.pdf.

Full text
Abstract:
The recent technological advances in the gathering and sharing of geospatial data have made important volumes of geospatial data available to potential users. Geospatial databases often represent the same geographical features but from different perspectives, and therefore they are semantically heterogeneous. In order to support geospatial data sharing and collaboration between users of geospatial databases to achieve common goals, semantic heterogeneities must be resolved and users must have a shared understanding of the data being exchanged. That is, semantic interoperability of geospatial data must be achieved. At this time, numerous semantic interoperability approaches exist. However, the recent arrival and growing popularity of ad hoc networks has made the semantic interoperability problem more complex. Ad hoc networks of geospatial databases are networks that self-organize for punctual needs and that do not rely on any predetermined structure. "Traditional" semantic interoperability approaches that were designed for two sources, or for a limited and static number of known sources, are not suitable for ad hoc networks, which are dynamic and composed of a large number of autonomous sources. Nevertheless, while a semantic interoperability approach designed for ad hoc networks should be scalable, it is essential to consider, when describing the semantics of data, the particularities, the different contexts and the thematic, spatial and temporal aspects of geospatial data. In this thesis, a new approach for real time semantic interoperability in ad hoc networks of geospatial databases is proposed that addresses the requirements posed by both geospatial databases and ad hoc networks. The main contributions of this approach are related to the dynamic collaboration among user agents of different geospatial databases, knowledge representation and extraction, automatic semantic mapping and semantic similarity, and query propagation in ad hoc networks based on multi-agent theory. The conceptual framework that sets the foundation of the approach is based on principles of communication between agents in social networks. Following the conceptual framework, this thesis proposes a new model for representing coalitions of geospatial databases that aims at supporting collaboration among user agents of different geospatial databases of the network, in a semantic interoperability context. Based on that model, a new approach for discovering relevant sources and mining coalitions using network analysis techniques is proposed. Operators for the management of events affecting coalitions are defined to handle real-time changes occurring in the ad hoc network. Once coalitions are established, data exchanges inside a coalition or between different coalitions are possible only if the representation of semantics is rich enough and semantic reconciliation is achieved between the ontologies describing the different geospatial databases. To achieve this goal, in this thesis we have defined a new representation model for concepts, the Multi-View Augmented Concept (MVAC). The role of this model is to enrich concepts of ontologies with their various contexts, the semantics of their spatiotemporal properties, and the dependencies between their features. A method to generate MVAC concepts was developed.
This method includes a method for the extraction of the different views of a concept that correspond to its different contexts, and an augmentation method based on association rule mining to extract dependencies between features. Then, two complementary models to resolve semantic heterogeneity between MVAC concepts were developed. First, a gradual automated semantic mapping model, the G-MAP, discovers qualitative semantic relations between MVAC concepts using rule-based reasoning engines that integrate new matching criteria. The ability of this model to take as input a rich and complex representation of concepts constitutes its contribution with respect to existing models. Second, we have developed Sim-Net, a Description Logic-based semantic similarity model adapted to ad hoc networks. The combination of both models supports an optimal interpretation by the user of the meaning of relations between concepts of different geospatial databases, improving semantic interoperability. The last component is a multi-strategy query propagation approach for ad hoc networks. Strategies are formalized with the Lightweight Coordination Calculus (LCC), which supports interactions between agents based on social norms and constraints in a distributed system, and they represent the different strategies employed to communicate in social networks. An algorithm for the real-time adaptation of strategies to changes affecting the network is proposed. The approach was implemented as a prototype using the Java JXTA platform, which simulates dynamic interaction between peers and groups of peers. The advantages, usefulness and feasibility of the approach were demonstrated with a disaster management scenario. An additional contribution of this thesis is the development of the new notion of semantic interoperability quality, together with a framework to assess it; this framework was used to validate the approach. This new concept of semantic interoperability quality opens many new research perspectives with respect to the quality of data exchanges in networks and its impact on decision-making.
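As an illustration of the coalition-formation idea (not the thesis's actual algorithm), the sketch below groups geospatial databases by community detection over a semantic-similarity graph, assuming the networkx library; the nodes and weights are invented.

```python
# Illustrative sketch: databases whose ontologies overlap strongly end up in
# the same coalition via modularity-based community detection.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_weighted_edges_from([
    ("hydro_db", "flood_db", 0.9),
    ("flood_db", "terrain_db", 0.7),
    ("road_db", "transit_db", 0.8),
    ("terrain_db", "road_db", 0.1),  # weak semantic overlap
])

for coalition in greedy_modularity_communities(G, weight="weight"):
    print("coalition:", sorted(coalition))
```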
APA, Harvard, Vancouver, ISO, and other styles
42

Rathnam, Tarun. "Using Ontologies to Support Interoperability in Federated Simulation." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4788.

Full text
Abstract:
A vast array of computer-based simulation tools is used to support engineering design and analysis activities. Several such activities call for the simulation of various coupled sub-systems in parallel, typically to study the emergent behavior of large, complex systems. Most sub-systems have their own simulation models associated with them, which need to interoperate with each other in a federated fashion to simulate system-level behavior. The run-time exchange of information between federate simulations requires a common information model that defines the representation of simulation concepts shared between federates. However, most federate simulations employ disparate representations of shared concepts. Therefore, it is often necessary to implement transformation stubs that convert concepts between their common representation and those used in federate simulations. The tasks of defining a common representation for shared simulation concepts and building translation stubs around them add to the cost of performing a system-level simulation. In this thesis, a framework to support automation and reuse in the process of achieving interoperability between federate simulations is developed. This framework uses ontologies as a means to capture the semantics of different simulation concepts shared in a federation in a formal, reusable fashion. Using these semantics, a common representation for shared simulation entities, and a corresponding set of transformation stubs to convert entities between their federate and common representations (and vice versa), are derived automatically. As a foundation to this framework, a schema to enable the capture of simulation concepts in an ontology is specified. Also, a graph-based algorithm is developed to extract the appropriate common information model and transformation procedures between federate and common simulation entities. As a proof of concept, this framework is applied to support the development of a federated air traffic simulation. To progress with the design of an airport, the combined operation of its individual systems (air traffic control, ground traffic control, and ground-based aircraft services) in handling varying volumes of aircraft traffic is to be studied. To do so, the individual simulation models corresponding to the different sub-systems of the airport need to be federated, and to this end the ontology-based framework is applied.
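To make the common-information-model idea concrete, here is a toy sketch of two transformation stubs of the kind the framework would derive: each federate keeps its native representation and converts through the shared form. The units and field names are illustrative, not taken from the thesis.

```python
# Illustrative sketch: federates exchange data through a common representation
# (SI units), with a generated stub on each side of the exchange.
def atc_to_common(msg):
    """Air traffic control federate publishes in feet and knots."""
    return {"alt_m": msg["alt_ft"] * 0.3048, "speed_mps": msg["spd_kt"] * 0.5144}

def ground_from_common(c):
    """Ground services federate consumes metres and metres per second."""
    return {"altitude_m": round(c["alt_m"], 1), "speed_m_s": round(c["speed_mps"], 1)}

common = atc_to_common({"alt_ft": 3000, "spd_kt": 140})
print(ground_from_common(common))
```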
APA, Harvard, Vancouver, ISO, and other styles
43

Cavaco, Francisco António Gonçalves. "Ontologies learn by searching." Master's thesis, FCT-UNL, 2011. http://hdl.handle.net/10362/7086.

Full text
Abstract:
Dissertation submitted to obtain the Master's degree in Electrical Engineering and Computer Science.
Owing to the worldwide diversity of communities, many ontologies have appeared that represent the same segment of reality without being semantically coincident. A possible solution to this problem is to use a reference ontology as the intermediary in communications between the community's enterprises and the outside world. Once semantic mappings between the enterprises' ontologies are established, this solution allows each enterprise to keep its own internal ontology and semantics unchanged. However, information systems are not static, so established mappings become obsolete over time. The objective of this dissertation is to identify a suitable method that combines semantic mappings with user feedback, providing automatic learning to ontologies and bringing auto-adaptability and dynamism to information systems.
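The mapping-plus-feedback idea admits a minimal sketch: assume each candidate mapping carries a confidence score that user feedback reinforces or weakens. The update rule, names and values below are illustrative assumptions, not the method selected in the dissertation.

    # Candidate mappings between two enterprise ontologies, with initial
    # confidences from a matcher (all names and scores invented).
    mappings = {("onto_A:Client", "onto_B:Customer"): 0.7,
                ("onto_A:Client", "onto_B:Supplier"): 0.3}

    LEARNING_RATE = 0.2

    def apply_feedback(pair, accepted):
        """Nudge a mapping's confidence toward 1 on acceptance, toward 0 on rejection."""
        target = 1.0 if accepted else 0.0
        mappings[pair] += LEARNING_RATE * (target - mappings[pair])

    apply_feedback(("onto_A:Client", "onto_B:Customer"), accepted=True)
    apply_feedback(("onto_A:Client", "onto_B:Supplier"), accepted=False)
    print(mappings)  # confidences drift with use, countering mapping obsolescence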
APA, Harvard, Vancouver, ISO, and other styles
44

Webster, April. "Semantic spatial interoperability framework : a case study in the architecture, engineering and construction (AEC) domain." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/28461.

Full text
Abstract:
The volume of disseminated digital spatial data has exploded, generating demand for tools to support interoperability and the extraction of usable knowledge. Previous work on spatial interoperability has focused on semi-automatically generating the mappings to mediate multi-modal spatial data. We present a case study in the Architecture, Engineering and Construction (AEC) domain that demonstrates that even after this level of semantic interoperability has been achieved, mappings from the integrated spatial data to concepts desired by the domain experts must be articulated. We propose the Semantic Spatial Interoperability Framework to provide the next layer of semantic interoperability: GML provides the syntactic glue for spatial and non-spatial data integration, and an ontology provides the semantic glue for domain-specific knowledge extraction. Mappings between the two are created by extending XQuery with spatial query predicates.
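The framework's final layer, mapping integrated spatial data to domain concepts via spatial predicates, can be sketched in Python with the shapely library standing in for the XQuery spatial extension described in the thesis. The feature data and the domain concept below are invented, and the sketch assumes shapely is installed.

    from shapely.geometry import Point, Polygon

    # Invented example: a building footprint and sensor locations taken
    # from integrated (GML-derived) data.
    building = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
    sensors = {"s1": Point(5, 5), "s2": Point(20, 20)}

    # Hypothetical domain concept "indoor sensor": a sensor whose location
    # lies within the building footprint, i.e. a spatial predicate defines
    # membership in the expert-level concept.
    indoor_sensors = [sid for sid, p in sensors.items() if building.contains(p)]
    print(indoor_sensors)  # ['s1']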
APA, Harvard, Vancouver, ISO, and other styles
45

Liyanage, Harshana. "Semantic interoperability of large complex health datasets requires an ontological approach : a mixed method study." Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/807576/.

Full text
Abstract:
The 'connected world' forces us to treat 'interoperability' as a primary requirement when building present-day health care databases. Whilst semantic interoperability has made a major contribution to data utilisation between systems, it has often been unable to integrate some of the large heterogeneous datasets required for research. As health data gets 'bigger' and more complex, we must shift to rapid and flexible ways of resolving problems related to semantic interoperability. Ontological approaches accelerate the implementation of interoperability owing to the availability of robust tools and technology frameworks that promote reuse. This thesis reports the results of a mixed-methods study that proposes a pragmatic methodology maximising the use of ontologies across a multilayered research readiness model, for use in data-driven health care research projects. The research examined evidence for the use of ontologies across a majority of layers in the reference model. The first part of the thesis examines the methods used for assessing readiness to participate in research across six dimensions of health care. It reports on existing ontological elements that boost research readiness and proposes ontological extensions for modelling the semantics of data sources and research study requirements. The second part of the thesis presents an ontology toolkit that supports the rapid development of ontologies for use in health care research projects. It details how an ontology toolkit for creating health care ontologies was developed through the consensus of a panel of informatics experts and clinicians. This toolkit evolved further to include a series of ontological building blocks that assist clinicians in rapidly building ontologies.
APA, Harvard, Vancouver, ISO, and other styles
46

Kilic, Ozgur. "Achieving Electronic Healthcare Record (ehr) Interoperability Across Healthcare Information Systems." Phd thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609665/index.pdf.

Full text
Abstract:
Providing an interoperability infrastructure for Electronic Healthcare Records (EHRs) is on the agenda of many national and regional eHealth initiatives. Two important integration profiles have been specified for this purpose: "IHE Cross-enterprise Document Sharing (XDS)" and "IHE Cross Community Access (XCA)". XDS describes how to share EHRs in a community of healthcare enterprises, and XCA describes how EHRs are shared across communities. However, no current solution addresses some of the important challenges of cross-community exchange environments. The first challenge is scalability: if every community joining the network needs to connect to every other community, the solution will not scale. Furthermore, each community may use a different coding vocabulary for the same metadata attribute, in which case the target community cannot interpret a query involving that attribute. Another important challenge is that each community has a different patient identifier domain, and querying for patient identifiers in another community using patient demographic data may raise patient privacy concerns. Yet another challenge in cross-community EHR access is EHR interoperability, since the communities may be using different EHR content standards.
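The vocabulary challenge can be made concrete with a small sketch: before forwarding a query, the metadata codes of the source community must be re-coded into the target community's vocabulary, typically via a shared canonical term. The communities and codes below are invented for illustration and do not come from the thesis.

    # Each community codes the same document-type attribute with its own
    # vocabulary; a canonical term bridges them (all values invented).
    LOCAL_TO_CANONICAL = {
        "community_A": {"RAD": "radiology-report"},
        "community_B": {"XR-DOC": "radiology-report"},
    }

    def translate(code, source, target):
        """Map a document-type code from one community's vocabulary to another's."""
        canonical = LOCAL_TO_CANONICAL[source][code]
        inverse = {v: k for k, v in LOCAL_TO_CANONICAL[target].items()}
        return inverse[canonical]

    print(translate("RAD", "community_A", "community_B"))  # XR-DOC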
APA, Harvard, Vancouver, ISO, and other styles
47

Nilsson, Mikael. "From Interoperability to Harmonization in Metadata Standardization : Designing an Evolvable Framework for Metadata Harmonization." Doctoral thesis, KTH, Medieteknik och grafisk produktion, Media, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-26057.

Full text
Abstract:
Metadata is an increasingly central tool in the current web environment, enabling large-scale, distributed management of resources. Recent years have seen a growth in interaction between previously relatively isolated metadata communities, driven by a need for cross-domain collaboration and exchange. However, metadata standards have not been able to meet the needs of interoperability between independent standardization communities. For this reason the notion of metadata harmonization, defined as interoperability of combinations of metadata specifications, has risen as a core issue for the future of web-based metadata. This thesis presents a solution-oriented analysis of current issues in metadata harmonization. A set of widely used metadata specifications in the domains of learning technology, libraries and the general web environment have been chosen as targets for the analysis, with a special focus on Dublin Core, IEEE LOM and RDF. Through active participation in several metadata standardization communities, a body of knowledge of harmonization issues has been developed. The thesis presents an analytical framework of concepts and principles for understanding the issues arising when interfacing multiple standardization communities. The framework focuses on a set of important patterns in metadata specifications and their respective contributions to harmonization issues: metadata syntaxes as a tool for metadata exchange (shown to be of secondary importance in harmonization); metadata semantics as a cornerstone for interoperability (the thesis argues that incongruences in the interpretation of metadata descriptions play a significant role in harmonization); abstract models for metadata as a tool for designing metadata standards (shown to be pivotal to the understanding of harmonization problems); vocabularies as carriers of meaning in metadata (portable vocabularies can carry semantics from one standard to another, enabling harmonization); and application profiles as a method for combining metadata standards (put forward as a powerful tool for interoperability, but concluded to play only a marginal role in harmonization). The analytical framework is used to analyze and compare seven metadata specifications, and a concrete set of harmonization issues is presented. These issues form the basis for a metadata harmonization framework in which a multitude of metadata specifications with different characteristics can coexist. The thesis concludes that the Resource Description Framework (RDF) is the only existing specification with the right characteristics to serve as a practical basis for such a harmonization framework, and that it must therefore be taken into account when designing metadata specifications. Based on the harmonization framework, a best practice for metadata standardization development is developed, and a roadmap for harmonization improvements of the analyzed standards is presented.
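The role RDF plays as a harmonization basis can be illustrated with a short Python sketch using the rdflib library: because a Dublin Core property is a globally identified term rather than a local field name, the same statement carries its semantics into any RDF-aware consumer. The resource URI below is invented; rdflib and its DCTERMS namespace are real.

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCTERMS

    g = Graph()
    resource = URIRef("http://example.org/thesis/26057")  # invented URI

    # The predicate is the globally identified Dublin Core term, so any
    # RDF consumer interprets the statement the same way regardless of
    # which application produced it.
    g.add((resource, DCTERMS.title, Literal("From Interoperability to Harmonization")))
    g.add((resource, DCTERMS.creator, Literal("Mikael Nilsson")))

    print(g.serialize(format="turtle"))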
APA, Harvard, Vancouver, ISO, and other styles
48

Heravi, Bahareh Rahmanzadeh. "Ontology-based information standards development." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/6267.

Full text
Abstract:
Standards may be argued to be important enablers for achieving interoperability, as they aim to provide unambiguous specifications for the error-free exchange of documents and information. By implication, therefore, it is important to model and represent the concept of a standard in a clear, precise and unambiguous way. Although standards development organisations usually provide guidelines for the process of developing and approving standards, these are mostly concerned with the administrative aspects of the process. As a consequence, the state of the art lacks practical support for developing the structure and content of a standard specification. In short, no systematic development method is currently available (a) for developing the conceptual model underpinning a standard, or (b) to guide a group of stakeholders in developing a standard specification. Semantic interoperability is considered an essential factor for effective interoperation, and the ability to achieve it effectively and efficiently is strongly equated with quality by some. Semantics require that the meaning of terms, their relationships, and the restrictions and rules in a standard be clearly defined in the early stages of standards development and act as a basis for the later stages. This research proposes that ontology can help standards developers and stakeholders improve conceptual models and establish a robust, shared understanding of the domain. This thesis presents OntoStanD, a comprehensive ontology-based standards development methodology that utilises the best practices of existing ontology creation methods. The potential value of OntoStanD is in providing a comprehensive, clear and unambiguous method for developing robust information standards that are more test-friendly and of higher quality. OntoStanD also facilitates standards conformance testing and change management, benefits interoperability, and assists in improved communication among the standards development team. Finally, OntoStanD provides an approach that is repeatable, teachable and potentially general enough for creating any kind of information standard.
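One way ontology-driven conformance testing might look in code, as a hedged sketch only: the ontology fixes which properties a concept requires, and an instance document is checked against that constraint set. The concept name, required properties and document below are invented and are not drawn from OntoStanD itself.

    # Hedged sketch: ontology-derived constraints used to conformance-test
    # a standard's instance documents (all names invented).
    REQUIRED = {
        "Invoice": {"issueDate", "supplier", "totalAmount"},
    }

    def check_conformance(concept, instance):
        """Report which ontology-required properties are missing from an instance."""
        missing = REQUIRED[concept] - instance.keys()
        return sorted(missing)

    doc = {"issueDate": "2012-05-01", "supplier": "ACME"}
    print(check_conformance("Invoice", doc))  # ['totalAmount']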
APA, Harvard, Vancouver, ISO, and other styles
49

Lindgren, Ida, and Isabelle Norman. "Semantisk interoperabilitet för hantering av XML." Thesis, Uppsala universitet, Institutionen för informatik och media, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-226732.

Full text
Abstract:
Business Analytics is today increasingly used by organizations to analyze data that supports decision-making. Business Analytics requires interoperability between the data sources used to gather and compile data for analysis, so that the data can be correctly interpreted. The aim of this study is therefore to investigate the possibility of creating an IT-artifact that can query several XML-documents with different structures in order to achieve semantic interoperability, thus enabling Business Analytics. The structural differences considered here are cases where XML tags have been given different names that have essentially the same semantic meaning. The solution was created using the Design Science research strategy, so the knowledge contribution is an IT-artifact: a proof of concept demonstrating that an implementation of a solution to the identified semantic problems is possible. The result of the development is a flexible application with which users can connect and gather data from XML files with different structures. This connection is made possible by letting the user build and use an ontology containing the words used as tag names in the XML files. By using ontologies in this way, the research shows that it is possible to achieve semantic interoperability between XML files with different structures, and the resulting IT-artifact supports the conclusion that a general solution for this type of problem can be created.
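A minimal Python sketch of the thesis's idea, under invented assumptions: an ontology groups synonymous tag names under a canonical term, so one query can read XML files whose tags differ. The tag names and documents are made up for the example.

    import xml.etree.ElementTree as ET

    # Ontology: one canonical term per set of synonymous tag names (invented).
    ONTOLOGY = {"price": {"price", "cost", "amount"}}

    def extract(xml_text, canonical_term):
        """Return the text of every element whose tag is a synonym of the term."""
        root = ET.fromstring(xml_text)
        synonyms = ONTOLOGY[canonical_term]
        return [el.text for el in root.iter() if el.tag in synonyms]

    doc_a = "<order><cost>100</cost></order>"
    doc_b = "<order><amount>200</amount></order>"
    print(extract(doc_a, "price") + extract(doc_b, "price"))  # ['100', '200']

Letting the user maintain the ONTOLOGY mapping, rather than hard-coding tag names, is what makes such a solution general across differently structured files.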
APA, Harvard, Vancouver, ISO, and other styles
50

Zuo, Landong. "A semantic and agent-based approach to support information retrieval, interoperability and multi-lateral viewpoints for heterogeneous environmental databases." Thesis, Queen Mary, University of London, 2006. http://qmro.qmul.ac.uk/xmlui/handle/123456789/1770.

Full text
Abstract:
Data stored in individual autonomous databases often needs to be combined and interrelated. For example, in the Inland Water (IW) environmental monitoring domain, the spatial and temporal variation of measurements of different water quality indicators stored in different databases is of interest. Data from multiple data sources is more complex to combine when there is a lack of metadata in a computational form and when the syntax and semantics of the stored data models are heterogeneous. The main types of information retrieval (IR) requirements are query transparency and data harmonisation for data interoperability, and support for multiple user views. A combined Semantic Web based and agent based distributed system framework has been developed to support these IR requirements. It has been implemented using the Jena ontology and JADE agent toolkits. The semantic part supports the interoperability of autonomous data sources by merging their intensional data, using a Global-As-View (GAV) approach, into a global semantic model, represented in DAML+OIL and in OWL, which is used to mediate between different local database views. The agent part provides the semantic services to import, align and parse semantic metadata instances, to support data mediation and to reason about data mappings during alignment. The framework has been applied to support information retrieval, interoperability and multi-lateral viewpoints for four European environmental agency databases. An extended GAV approach has been developed and applied to handle queries that can be reformulated over multiple user views of the stored data. This allows users to retrieve data in a conceptualisation that is better suited to them, rather than having to understand the entire detailed global-view conceptualisation. User viewpoints are derived from the global ontology or from existing viewpoints of it, which has the advantage of reducing the number of potential conceptualisations and their associated mappings to a more computationally manageable level. Whereas an ad hoc framework based upon a conventional distributed programming language and a rule framework could be used to support user views and adaptation to them, a more formal framework has the benefit that it can support reasoning about consistency, equivalence, containment and conflict resolution when traversing data models. A preliminary formulation of the formal model has been undertaken, based upon extending a Datalog-type algebra with hierarchical, attribute and instance value operators. These operators can be applied to support compositional mapping and consistency checking of data views. The multiple viewpoint system was implemented as a Java-based application consisting of two sub-systems, one for viewpoint adaptation and management, the other for query processing and query result adjustment.
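The GAV mediation step can be sketched as query unfolding: each global relation is defined as a view over the local sources, and a query posed against the global model is answered by substituting those definitions. The toy Python sketch below invents the source schemas and rows; it is an illustration of the GAV pattern, not the thesis's implementation.

    # GAV sketch: a global relation defined as a view over two heterogeneous
    # local sources (schemas and data invented for illustration).
    source_a = [("site1", "nitrate", 4.2)]
    source_b = [{"station": "site2", "determinand": "nitrate", "reading": 3.1}]

    def global_measurement():
        """Global view: measurement(site, indicator, value) over both sources."""
        for site, indicator, value in source_a:
            yield (site, indicator, value)
        for row in source_b:
            yield (row["station"], row["determinand"], row["reading"])

    # A global query is evaluated against the view, never the sources directly,
    # which is what makes the source heterogeneity transparent to the user.
    nitrate = [m for m in global_measurement() if m[1] == "nitrate"]
    print(nitrate)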
APA, Harvard, Vancouver, ISO, and other styles