Dissertations / Theses on the topic 'Distributed ontology'

To see the other types of publications on this topic, follow the link: Distributed ontology.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 48 dissertations / theses for your research on the topic 'Distributed ontology.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Mutharaju, Raghava. "Distributed Rule-Based Ontology Reasoning." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1472534764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Weng, Zumao. "Distributed knowledge based image contents retrieval and exploration." Thesis, University of Ulster, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mena, Eduardo, and Arantza Illarramendi. "Ontology-based query processing for global information systems." Boston [u.a.] : Kluwer Acad. Publ, 2001. http://www.loc.gov/catdir/enhancements/fy0813/2001029621-d.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tenschert, Axel [Verfasser], and Michael [Akademischer Betreuer] Resch. "Ontology matching in a distributed environment / Axel Tenschert ; Betreuer: Michael Resch." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2016. http://d-nb.info/1130148556/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gu, Xuan. "Selective Data Replication for Distributed Geographical Data Sets." Thesis, University of Canterbury. Computer Science and Software Engineering, 2008. http://hdl.handle.net/10092/2545.

Full text
Abstract:
The main purpose of this research is to incorporate additional higher-level semantics into existing data replication strategies so that their flexibility and performance can be improved for both data providers and consumers. The resulting approach is referred to as the selective data replication system. With this system, data that has been updated by a data provider is captured and batched into messages known as update notifications. Once data consumers receive update notifications, they use them to evaluate so-called update policies, which consumers specify to state when data replications need to occur and what data needs to be updated during those replications.
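A minimal Python sketch of the update-notification / update-policy mechanism described above. The field names (dataset, region, changed_ids) and the policy shape (a "when" predicate plus a "what" selector) are illustrative assumptions, not the thesis's actual design.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class UpdateNotification:
    """A batched message describing data changed by a provider (illustrative fields)."""
    dataset: str            # e.g. "roads"
    region: str             # e.g. "canterbury"
    changed_ids: List[int]  # identifiers of the changed features


@dataclass
class UpdatePolicy:
    """A consumer-specified rule: when to replicate, and which data to pull."""
    dataset: str
    predicate: Callable[[UpdateNotification], bool]      # the "when" part
    selector: Callable[[UpdateNotification], List[int]]  # the "what" part


def evaluate(policies: List[UpdatePolicy], note: UpdateNotification) -> List[int]:
    """Return the identifiers that should be replicated for this notification."""
    to_pull: List[int] = []
    for policy in policies:
        if policy.dataset == note.dataset and policy.predicate(note):
            to_pull.extend(policy.selector(note))
    return sorted(set(to_pull))


# Example: replicate road updates in Canterbury only when at least 10 features changed.
policies = [UpdatePolicy(
    dataset="roads",
    predicate=lambda n: n.region == "canterbury" and len(n.changed_ids) >= 10,
    selector=lambda n: n.changed_ids,
)]
note = UpdateNotification("roads", "canterbury", list(range(12)))
print(evaluate(policies, note))  # ids 0..11 are pulled
```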
APA, Harvard, Vancouver, ISO, and other styles
6

Halilaj, Lavdim [Verfasser]. "An Approach for Collaborative Ontology Development in Distributed and Heterogeneous Environments / Lavdim Halilaj." Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/117773480X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kim, Taekyu. "Ontology/Data Engineering Based Distributed Simulation Over Service Oriented Architecture For Network Behavior Analysis." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/193678.

Full text
Abstract:
As network use increases rapidly and high quality of service (QoS) is required, efficient network management methods become important. Many previous studies and commercial network management tools such as tcpdump and Ethereal have weaknesses: limited file sizes, command-line execution, and large memory and computational power requirements. Researchers struggle to find fast and effective analysis methods to save maintenance budgets and to recover from systemic problems caused by the rapid growth of network traffic or by intrusions. The main objective of this study is to propose an approach in which large amounts of network behavior can be analyzed quickly and efficiently. We study a network analysis system based on an ontology/data engineering methodology. We design a behavior representation, covering network traffic activity and network packet information such as IP addresses, protocols, and packet length, based on the System Entity Structure (SES) methodology. A significant characteristic of SES, its hierarchical tree structure, enables systems to access network packet information quickly and efficiently. Presenting an automated system design is the secondary purpose of this study. Our approach shows adaptive awareness of pragmatic frames (contexts) and yields a network traffic analysis system with high throughput and a fast response time, ready to respond to user applications. We build models and run simulations to evaluate specific purposes, i.e., analyzing network protocol use, evaluating network throughput, and examining intrusion detection algorithms, based on the Discrete Event System Specification (DEVS) formalism. To obtain speedup, we apply a web-based distributed simulation methodology: DEVS/Service Oriented Architecture (DEVS/SOA) facilitates deploying workloads onto multiple servers and consequently increases overall system performance. In addition to their scalability limitations, both tcpdump and Ethereal have a security issue: besides basic network traffic information, the files captured by these tools contain sensitive information such as user identification numbers and passwords, so captured files should not be allowed to leak. However, in some cases network analyses need to be performed outside the target networks. The distributed simulation--allocating distributed models inside networks and assigning analyzing models outside networks--also allows network behaviors to be analyzed outside the networks while keeping important information secure.
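As a rough illustration only: a hierarchical, SES-flavoured grouping of packet records in Python, so that per-protocol information can be reached without scanning a flat capture. The field names and the single protocol branch are my own simplifications; the thesis's actual SES/DEVS models are far richer.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class IPHeader:
    src: str
    dst: str
    protocol: str  # e.g. "TCP", "UDP"


@dataclass
class Packet:
    header: IPHeader
    length: int    # bytes


@dataclass
class ProtocolNode:
    """One branch of the hierarchy: a protocol and the packets filed under it."""
    name: str
    packets: List[Packet]


class TrafficTree:
    """Hierarchical store of captured traffic (a very rough SES-style decomposition)."""

    def __init__(self) -> None:
        self.children: Dict[str, ProtocolNode] = {}

    def insert(self, pkt: Packet) -> None:
        node = self.children.setdefault(pkt.header.protocol,
                                        ProtocolNode(pkt.header.protocol, []))
        node.packets.append(pkt)

    def bytes_by_protocol(self, protocol: str) -> int:
        node = self.children.get(protocol)
        return sum(p.length for p in node.packets) if node else 0


tree = TrafficTree()
tree.insert(Packet(IPHeader("10.0.0.1", "10.0.0.2", "TCP"), 1500))
tree.insert(Packet(IPHeader("10.0.0.3", "10.0.0.2", "UDP"), 512))
print(tree.bytes_by_protocol("TCP"))  # 1500
```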
APA, Harvard, Vancouver, ISO, and other styles
8

Wariyapola, Pubudu C. (Pubudu Chaminda) 1972. "Towards an ontology and metadata structure for a distributed information system for coastal zone management." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80236.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Milliner, Stephen William. "Dynamic resolution of conceptual heterogeneity in large scale distributed information systems." Thesis, Queensland University of Technology, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ducrou, Amanda Joanne. "Complete interoperability in healthcare: technical, semantic and process interoperability through ontology mapping and distributed enterprise integration techniques." Access electronically, 2009. http://ro.uow.edu.au/theses/3048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Andersson, Richard. "Evaluation of the Security of Components in Distributed Information Systems." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2091.

Full text
Abstract:

This thesis proposes a security evaluation framework for distributed information systems, consisting of a system modelling technique and an evaluation method. The framework is flexible and divides the problem space into smaller, more manageable subtasks, with the means to focus on specific problems, aspects or system scopes. The information system is modelled by dividing it into increasingly smaller parts, evaluating the separate parts, and then building the system up "bottom up" by combining the components. Evaluated components are stored as reusable instances in a component library. The evaluation method focuses on technological components and is based on the Security Functional Requirements (SFR) of the Common Criteria. The method consists of the following steps: (1) define several security values with different aspects, to obtain variable evaluations; (2) adapt and establish the set of SFR to fit the thesis; (3) interpret evaluated security functions, and possibly translate them to CIA or PDR; (4) map characteristics from system components to SFR; and (5) combine evaluated components into an evaluated subsystem. An ontology is used to structure, in a versatile and dynamic way, the taxonomy and relations of the system components, the security functions, the security values and the risk handling. It is also a step towards defining a common terminology for IT security.
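A toy sketch of the "bottom-up" combination step described above: evaluated components are kept in a reusable library and combined into a subsystem score. The numeric per-aspect scores and the weakest-link (minimum) combination rule are my assumptions for illustration, not the method defined in the thesis.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class EvaluatedComponent:
    """A component kept in the reusable library, with per-aspect scores (illustrative)."""
    name: str
    scores: Dict[str, float]  # e.g. {"confidentiality": 0.8, "integrity": 0.6}


LIBRARY: Dict[str, EvaluatedComponent] = {}  # the reusable component library


def register(component: EvaluatedComponent) -> None:
    LIBRARY[component.name] = component


def combine(names: List[str]) -> Dict[str, float]:
    """Build a subsystem 'bottom up', assuming the weakest component bounds each aspect."""
    parts = [LIBRARY[n] for n in names]
    aspects = set().union(*(p.scores for p in parts))
    return {a: min(p.scores.get(a, 0.0) for p in parts) for a in aspects}


register(EvaluatedComponent("web-server", {"confidentiality": 0.7, "integrity": 0.8}))
register(EvaluatedComponent("database", {"confidentiality": 0.9, "integrity": 0.6}))
print(combine(["web-server", "database"]))  # confidentiality 0.7, integrity 0.6
```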

APA, Harvard, Vancouver, ISO, and other styles
12

Sarkar, Arkopaul. "Semantic Agent Based Process Planning for Distributed Cloud Manufacturing." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1578585210407386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Tessier, Sean Michael. "Ontology-based approach to enable feature interoperability between CAD systems." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41118.

Full text
Abstract:
Data interoperability between computer-aided design (CAD) systems remains a major obstacle in the information integration and exchange in a collaborative engineering environment. The standards for CAD data exchange have remained largely restricted to geometric representations, causing the design intent portrayed through construction history, features, parameters, and constraints to be discarded in the exchange process. In this thesis, an ontology-based framework is proposed to allow for the full exchange of semantic feature data. A hybrid ontology approach is proposed, where a shared base ontology is used to convey the concepts that are common amongst different CAD systems, while local ontologies are used to represent the feature libraries of individual CAD systems as combinations of these shared concepts. A three-branch CAD feature model is constructed to reduce ambiguity in the construction of local ontology feature data. Boundary representation (B-Rep) data corresponding to the output of the feature operation is incorporated into the feature data to enhance data exchange. The Web Ontology Language (OWL) is used to construct a shared base ontology and a small feature library, which allows the use of existing ontology reasoning tools to infer new relationships and information between heterogeneous data. A combination of OWL and SWRL (Semantic Web Rule Language) rules is developed to allow a feature from an arbitrary source system expressed via the shared base ontology to be automatically classified and translated into the target system. These rules relate input parameters and reference types to expected B-Rep objects, allowing classification even when feature definitions vary or when little is known about the source system. In cases where the source system is well known, this approach also permits direct translation rules to be implemented. With such a flexible framework, a neutral feature exchange format could be developed.
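A minimal, pure-Python sketch of the hybrid-ontology idea described above: local features are expressed as combinations of shared base concepts, and a feature from one system is classified into another system's library by matching those concepts. All concept and feature names are invented for illustration; the thesis itself does this with OWL and SWRL rules plus a reasoner.

```python
# Shared base ontology: concepts assumed common to all CAD systems (invented names).
SHARED_CONCEPTS = {"RemovesMaterial", "CylindricalBRep", "HasDepthParameter"}

# Local ontologies: each system's features expressed as combinations of shared concepts.
LOCAL_ONTOLOGIES = {
    "SystemA": {"Hole": {"RemovesMaterial", "CylindricalBRep", "HasDepthParameter"}},
    "SystemB": {"Drill": {"RemovesMaterial", "CylindricalBRep", "HasDepthParameter"},
                "Pocket": {"RemovesMaterial", "HasDepthParameter"}},
}

# Every local definition must only use concepts from the shared base ontology.
assert all(c <= SHARED_CONCEPTS
           for feats in LOCAL_ONTOLOGIES.values() for c in feats.values())


def translate(feature: str, source: str, target: str) -> list:
    """Classify a source feature into the target library by matching shared concepts."""
    meaning = LOCAL_ONTOLOGIES[source][feature]
    # A target feature is a candidate if its definition is covered by the source meaning.
    return [name for name, concepts in LOCAL_ONTOLOGIES[target].items()
            if concepts <= meaning]


print(translate("Hole", "SystemA", "SystemB"))  # ['Drill', 'Pocket']
```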
APA, Harvard, Vancouver, ISO, and other styles
14

Ishak, Karim. "Architecture distribuée interopérable pour la gestion des projets multi-sites : application à la planification des activités de production." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0075/document.

Full text
Abstract:
Aujourd’hui, la production est souvent multi-site car les entreprises se recentrent sur leurs cœurs de métiers. Dans ce contexte, la gestion des projets est une tâche difficile car il faut prendre en compte la distribution de la décision et l’hétérogénéité qui peut exister entre les différentes applications de production des partenaires. Les Systèmes Multi-Agents, et notamment le modèle SCEP (Superviseur, Client, Environnement, Producteur), apportent une solution satisfaisante au problème de la distribution de la décision, en instaurant une coopération entre des agents responsables de la gestion des projets client et des agents représentant les sites de production distants. Néanmoins, ce modèle présente des limites à cause de sa faible capacité à communiquer et à coopérer avec des modèles et des systèmes de gestion hétérogènes ainsi qu’à sa difficulté à localiser les nouveaux partenaires. Dans ce mémoire, nous proposons une architecture distribuée et interopérable SCEP-SOA intégrant les concepts du modèle SCEP et ceux du modèle SOA (Service Oriented Architecture) qui offre des mécanismes de mise en relation des partenaires et permet des communications entre des systèmes et des applications hétérogènes. Pour garantir la bonne compréhension des informations échangées entre les partenaires, l’architecture SCEP-SOA met en œuvre une stratégie d’interopérabilité sémantique basée sur l’intégration des ontologies. Cette stratégie s’articule autour d’une ontologie globale et commune utilisée pour l’échange des informations, et des mécanismes de correspondances entre cette ontologie globale et les ontologies locales des partenaires. Cette architecture est illustrée sur un cas d’étude où l’on se focalise sur l’interopérabilité entre des applications dédiées à la planification des projets de fabrication multi-sites
Today, production is often multi-site because companies focus on their core competencies. In this context, project management is a difficult task because it must take into account the distribution of decisions and the heterogeneity that can exist between the partners' various production applications. Multi-agent systems, in particular the SCEP model (Supervisor, Customer, Environment, Producer), offer a satisfactory solution to the decision distribution problem by establishing cooperation between agents responsible for managing customer projects and agents representing the remote production sites. Nevertheless, this model has limits because of its weak ability to communicate and cooperate with heterogeneous models and management systems, as well as its difficulty in locating new partners. In this dissertation, we propose a distributed and interoperable architecture, SCEP-SOA, which integrates concepts of the SCEP model and of SOA (Service Oriented Architecture), offering mechanisms for connecting partners and allowing communication between heterogeneous systems and applications. To ensure a shared understanding of the information exchanged between partners, the SCEP-SOA architecture implements a semantic interoperability strategy based on the integration of ontologies. This strategy relies on a shared global ontology for information exchange and on mappings between the global ontology and the partners' local ontologies. The architecture is illustrated by a case study focusing on interoperability between applications dedicated to the planning of multi-site manufacturing projects.
APA, Harvard, Vancouver, ISO, and other styles
15

Havlena, Jan. "Distribuovaný informační systém založený na sémantických technologiích." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237211.

Full text
Abstract:
This master's thesis deals with the design of a distributed information system in which data distribution is based on semantic technologies. The project analyzes Semantic Web technologies with a focus on information exchange between information systems and the related terms, mainly ontologies, ontology languages and the Resource Description Framework. Furthermore, it describes a proposed ontology used to describe the data exchanged between the systems, as well as the technologies used to implement the distributed information system, most importantly JavaServer Faces and Sesame.
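A small sketch of the kind of RDF-based exchange the abstract describes, using Python's rdflib purely for illustration (the thesis itself uses Sesame and JavaServer Faces on the Java side). The namespace and properties are made up.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/dis#")  # illustrative namespace, not from the thesis

# System A: publish a record as RDF.
g_out = Graph()
person = URIRef("http://example.org/dis#person42")
g_out.add((person, RDF.type, EX.Person))
g_out.add((person, EX.name, Literal("Jan Novak")))
payload = g_out.serialize(format="turtle")

# System B: parse the payload and query it without knowing System A's internal schema.
g_in = Graph()
g_in.parse(data=payload, format="turtle")
for subject, _, name in g_in.triples((None, EX.name, None)):
    print(subject, name)
```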
APA, Harvard, Vancouver, ISO, and other styles
16

Caires, Bruno José de Sales Caires. "Transparent access to relational, autonomous and distributed databases using semantic web and service oriented technologies." Master's thesis, Universidade da Madeira, 2007. http://hdl.handle.net/10400.13/128.

Full text
Abstract:
As enterprises constantly grow and the need to share information across departments and business areas becomes more critical, companies are turning to integration to provide a method for interconnecting heterogeneous, distributed and autonomous systems. Whether the sales application needs to interface with the inventory application or the procurement application needs to connect to an auction site, it seems that any application can be made better by integrating it with other applications. Integration between applications can face several difficulties because applications may not have been designed and implemented with integration in mind. Regarding integration issues, two-tier software systems, composed of the database tier and the "front-end" tier (interface), have shown some limitations. As a solution to overcome these limitations, three-tier systems were proposed in the literature. By adding a middle tier (referred to as middleware) between the database tier and the "front-end" tier (or simply the application), three main benefits emerge. The first is that dividing software systems into three tiers enables increased integration capabilities with other systems. The second is that modifications to the individual tiers may be carried out without necessarily affecting the other tiers and integrated systems, and the third, a consequence of the others, is that fewer maintenance tasks are needed in the software system and in all integrated systems. Concerning software development in three tiers, this dissertation focuses on two emerging technologies, the Semantic Web and Service Oriented Architecture, combined with middleware. Blending these two technologies with middleware resulted in the development of the Swoat framework (Service and Semantic Web Oriented ArchiTecture) and leads to four synergic advantages: (1) it allows the creation of loosely coupled systems, decoupling the database from the "front-end" tiers and therefore reducing maintenance; (2) the database schema is transparent to the "front-end" tiers, which are aware only of the information model (or domain model) describing what data is accessible; (3) integration with other heterogeneous systems is enabled through services provided by the middleware; (4) service requests by the "front-end" tier focus on 'what' data is needed rather than on 'where' and 'how' it is stored, reducing application development time.
Supervisor: António Jorge Silva Cardoso
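A minimal sketch of the three-tier decoupling idea described in the abstract above: the front-end asks the middleware for a domain-model concept ('what'), and only the middleware knows the relational schema ('where'/'how'). The concept name, mapping table and in-memory database are illustrative assumptions, not the actual Swoat API.

```python
import sqlite3

# Middleware mapping: domain-model concepts -> the (hidden) relational schema.
DOMAIN_TO_SQL = {
    "Customer": "SELECT cust_id, cust_name FROM tbl_cust",  # schema stays invisible
}


class Middleware:
    def __init__(self, conn: sqlite3.Connection) -> None:
        self.conn = conn

    def get(self, concept: str) -> list:
        """The front-end asks for 'what' (a concept), not 'where'/'how' (tables, joins)."""
        return self.conn.execute(DOMAIN_TO_SQL[concept]).fetchall()


# Demo with an in-memory database standing in for the autonomous back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_cust (cust_id INTEGER, cust_name TEXT)")
conn.execute("INSERT INTO tbl_cust VALUES (1, 'ACME')")
service = Middleware(conn)
print(service.get("Customer"))  # [(1, 'ACME')]
```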
APA, Harvard, Vancouver, ISO, and other styles
17

Venkatsubramanyan, Shailaja. "Discovering distributed and heterogeneous resources on the Internet: A theoretical foundation for an ontology-driven intelligent agent model. Its design, implementation and validation." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/284913.

Full text
Abstract:
The Internet has made it possible for large amounts of data to be made available to users in a variety of areas. This has led to users being inundated with information, making it difficult for them to locate data that would be of use to them. One domain that has not been immune to this problem is remote sensing. Remotely sensed data is available in abundance and can potentially be of use to many users, but it is difficult for users from different application domains to locate appropriate datasets and process them. Current search tools such as search engines are not adequate for remotely sensed data, as most searches using these tools yield an inordinately large number of web sites, each of which has to be explored individually by the user, with the results then collated manually. Besides, traditional search techniques do not embed knowledge about the remote sensing domain. The goal of this research is to find out how users with varying backgrounds and levels of expertise can retrieve and access resources over the Internet. This dissertation describes a virtual enterprise model of intelligent agents that deals with the complexities of locating and retrieving remotely sensed data over the Internet. The methodology followed in this research includes (i) agent modeling, (ii) building agent cooperation techniques that enable agents to understand terminology used at different sites and communicate with each other, (iii) optimizing communication flows between the various agents, (iv) validating the model, and (v) verifying the prototype. The important contributions of this research include, among others, an agent model generalizable to problem domains other than remote sensing, a formally defined ontology (a collection of terms and relationships between those terms) for the remote sensing domain, and a prototype system that implements the model and the ontology.
APA, Harvard, Vancouver, ISO, and other styles
18

Saad, Sawsan. "Conception et Optimisation Distribuée d’un Système d’Information des Services d’Aide à la Mobilité Urbaine Basé sur une Ontologie Flexible dans le Domaine de Transport." Thesis, Ecole centrale de Lille, 2010. http://www.theses.fr/2010ECLI0017/document.

Full text
Abstract:
De nos jours, les informations liées au déplacement et à la mobilité dans un réseau de transport représentent sans aucun doute un potentiel important.Ces travaux visent à mettre en œuvre un Système d’Information de Service d’Aide à la Mobilité Urbaine (SISAMU).Le SISAMU doit pouvoir procéder par des processus de décomposition des requêtes simultanées en un ensemble de tâches indépendantes. Chaque tâche correspond à un service qui peut être proposé par plusieurs fournisseurs d’information en concurrence, avec différents coûts, temps de réponse et formats. Le SISAMU est lié à un Réseau informatique Etendu et distribué de Transport Multimodal (RETM) qui comporte plusieurs sources d’information hétérogènes des différents services proposés aux utilisateurs de transport. L’aspect dynamique, distribué et ouvert du problème, nous a conduits à adopter une modélisation multi-agent pour assurer au système une évolution continue et une flexibilité pragmatique. Pour ce faire, nous avons proposé d’automatiser la modélisation des services en utilisant la notion d’ontologie. Notre SISAMU prend en considération les éventuelles perturbations sur le RETM.Ansi, nous avons créé un protocole de négociation entre les agents. Le protocole de négociation proposé qui utilise l’ontologie de la cartographie se base sur un système de gestion des connaissances pour soutenir l'hétérogénéité sémantique. Nous avons détaillé l’Algorithme de Reconstruction Dynamique des Chemins des Agents (ARDyCA) qui est basé sur l’approche de l’ontologie cartographique. Finalement, les résultats présentés dans cette thèse justifient l’utilisation de l’ontologie flexible et son rôle dans le processus de négociation
Nowadays, information related to travel and mobility in a transport network certainly represents significant potential. This work aims to model, optimise and implement an Information System of Services to Aid Urban Mobility (ISSAUM). The ISSAUM first has to decompose each set of simultaneous requests into a set of sub-requests called tasks. Each task corresponds to a service that can be proposed by several competing information providers, with different costs, response times and formats. An information provider that aims to propose services through our ISSAUM has to register its ontology. Indeed, the ISSAUM is related to an Extended and distributed Transport Multimodal Network (ETMN) which contains several heterogeneous data sources. The dynamic and distributed aspects of the problem led us to adopt a multi-agent approach to ensure continual evolution and pragmatic flexibility of the system. We therefore proposed to automate the modelling of services using ontologies. Our ISSAUM takes possible disturbances on the ETMN into account. In order to satisfy user requests, we developed a negotiation protocol between our system agents. The proposed ontology-mapping negotiation model is based on a knowledge management system supporting semantic heterogeneity and is organised as follows: the Negotiation Layer (NL), the Semantic Layer (SEL), and the Knowledge Management Systems Layer (KMSL). We also detail the reassignment process, using a Dynamic Reassigned Tasks (DRT) algorithm supported by the ontology mapping approach. Finally, the experimental results presented in this thesis justify the use of the ontology solution in our system and its role in the negotiation process.
APA, Harvard, Vancouver, ISO, and other styles
19

Venturini, Yeda Regina. "MOS - Modelo Ontológico de Segurança para negociação de política de controle de acesso em multidomínios." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-19092006-165220/.

Full text
Abstract:
A evolução nas tecnologias de redes e o crescente número de dispositivos fixos e portáteis pertencentes a um usuário, os quais compartilham recursos entre si, introduziram novos conceitos e desafios na área de redes e segurança da informação. Esta nova realidade estimulou o desenvolvimento de um projeto para viabilizar a formação de domínios de segurança pessoais e permitir a associação segura entre estes domínios, formando um multidomínio. A formação de multidomínios introduziu novos desafios quanto à definição da política de segurança para o controle de acesso, pois é composto por ambientes administrativos distintos que precisam compartilhar seus recursos para a realização de trabalho colaborativo. Este trabalho apresenta os principais conceitos envolvidos na formação de domínio de segurança pessoal e multidomínios, e propõe um modelo de segurança para viabilizar a negociação e composição dinâmica da política de segurança para o controle de acesso nestes ambientes. O modelo proposto é chamado de Modelo Ontológico de Segurança (MOS). O MOS é um modelo de controle de acesso baseado em papéis, cujos elementos são definidos por ontologia. A ontologia define uma linguagem semântica comum e padronizada, viabilizando a interpretação da política pelos diferentes domínios. A negociação da política ocorre através da definição da política de importação e exportação de cada domínio. Estas políticas refletem as contribuições parciais de cada domínio para a formação da política do multidomínio. O uso de ontologia permite a composição dinâmica da política do multidomínio, assim como a verificação e resolução de conflitos de interesses, que refletem incompatibilidades entre as políticas de importação e exportação. O MOS foi validado através da análise de sua viabilidade de aplicação em multidomínios pessoais. A análise foi feita pela definição de um modelo concreto e pela simulação da negociação e composição da política de controle de acesso. Para simulação foi definido um multidomínio para projetos de pesquisa. Os resultados mostraram que o MOS permite a definição de um procedimento automatizável para criação da política de controle de acesso em multidomínios.
The evolution of network technologies and the growing number of fixed and portable devices belonging to a user, which share resources among themselves, have introduced new concepts and challenges in the area of networks and information security. This new reality has motivated the development of a project to enable the formation of personal security domains and secure associations between them, forming a multi-domain. Multi-domain formation introduces new challenges concerning the access control security policy, since multi-domains are composed of distinct administrative environments that need to share resources for collaborative work. This work presents the main concepts involved in personal security domains and multi-domains, and proposes a security model to enable dynamic negotiation and composition of the access control policy in these environments. The proposed model is called MOS, an ontological security model. MOS is a role-based access control model whose elements are defined by an ontology. The ontology defines a common, standardized semantic language, enabling the policy to be interpreted by the different domains. Policy negotiation takes place through the definition of each domain's importation and exportation policies, which represent the partial contributions of each domain to the multi-domain policy. The use of an ontology allows the dynamic composition of the multi-domain policy, as well as the verification and resolution of conflicts of interest, which reflect incompatibilities between importation and exportation policies. MOS was validated by analysing its applicability to personal multi-domains, through the definition of a concrete model and the simulation of access control policy negotiation and composition for a multi-domain of collaborative research projects. The results show that MOS enables an automatable procedure for creating multi-domain access control policies.
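A toy sketch of the importation/exportation policy composition and conflict check described above. The (role, resource) representation, the composition rule and the example domains are my assumptions; the actual MOS model expresses these elements through an ontology.

```python
from dataclasses import dataclass
from typing import Set, Tuple

Grant = Tuple[str, str]  # (role, resource)


@dataclass
class DomainPolicy:
    """Per-domain contribution: what the domain exports (offers) and imports (requests)."""
    name: str
    exports: Set[Grant]
    imports: Set[Grant]


def compose(a: DomainPolicy, b: DomainPolicy):
    """Multi-domain policy: grants requested by one side and exported by the other."""
    granted = (a.imports & b.exports) | (b.imports & a.exports)
    conflicts = (a.imports - b.exports) | (b.imports - a.exports)  # unmet requests
    return granted, conflicts


home = DomainPolicy("home", exports={("guest", "printer")},
                    imports={("researcher", "dataset")})
lab = DomainPolicy("lab", exports={("researcher", "dataset")},
                   imports={("guest", "printer"), ("guest", "camera")})
granted, conflicts = compose(home, lab)
print(granted)    # both requested-and-exported grants (set order may vary)
print(conflicts)  # {('guest', 'camera')} -- requested but not exported by any domain
```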
APA, Harvard, Vancouver, ISO, and other styles
20

Silva, Marcel Santos [UNESP]. "Sistemas de informações geográficas: elementos para o desenvolvimento de bibliotecas digitais geográficas distribuídas." Universidade Estadual Paulista (UNESP), 2006. http://hdl.handle.net/11449/93711.

Full text
Abstract:
O desenvolvimento de tecnologias de informação e comunicação aplicadas às informações geográficas cresce de forma considerável e torna mais visível o aumento de Sistemas de Informações Geográficas, principalmente em ambientes governamentais, que buscam disponibilizar a informação geográfica a um número de pessoas cada vez maior. O objetivo deste trabalho é apresentar uma arquitetura com elementos para a construção de uma Biblioteca Digital Geográfica Distribuída, utilizando os padrões e os conceitos da Ciência da Informação juntamente com o Geoprocessamento. Serão apresentados os conceitos de bibliotecas digitais, os padrões de metadados para informações geográficas, além de geo-ontologias que contribuem para melhor organização e recuperação da informação geográfica. Utilizou-se os SIGs e a teoria da Ciência da Informação, focadas em especial para o desenvolvimento de Biblioteca Digital Geográfica Distribuída. A proposta para construção de uma Biblioteca Digital Geográfica Distribuída baseia-se no princípio de cooperação entre sistemas e considera o acesso livre as informações geográficas, a interoperabilidade possibilitada pela padronização dos metadados e das geo-ontologias. A arquitetura proposta para o desenvolvimento de Bibliotecas Digitais Geográficas Distribuídas atende os requisitos de representação da informação, as formas de comunicação e o protocolo de coleta de metadados e objetos digitais, possibilitando assim, o compartilhamento dos acervos informacionais geográficos distribuídos em diferentes Bibliotecas Digitais Geográficas. Apontam-se os elos entre o Geoprocessamento e a Ciência da Informação em relação à estruturação de ambientes de informações geográficas, que possam ser acessadas via rede de computadores.
The development of information and communication technologies applied to geographical information is growing considerably, making the increase of Geographic Information Systems more visible, mainly in government environments concerned with making geographic information available to more and more people. The aim of this work is to present an architecture with elements for the construction of a Distributed Geographical Digital Library, using patterns and concepts from Information Science together with geoprocessing. The concepts of digital libraries and metadata standards for geographical information are presented, as well as geo-ontologies that contribute to better organization and retrieval of geographical information. Geographic Information Systems and the theory of Information Science were used, focused especially on the development of a Distributed Geographical Digital Library. The proposal for building a Distributed Geographical Digital Library is based on the principle of cooperation among systems and considers free access to geographical information and the interoperability enabled by the standardization of metadata and geo-ontologies. The proposed architecture meets the requirements of information representation, forms of communication, and the harvesting protocol for metadata and digital objects, thus enabling the sharing of geographical information collections distributed across different Geographical Digital Libraries. The links between geoprocessing and Information Science are pointed out with regard to structuring geographical information environments that can be accessed through computer networks.
APA, Harvard, Vancouver, ISO, and other styles
21

Haase, Peter. "Semantic technologies for distributed information systems." Karlsruhe : Univ.-Verl. Karlsruhe, 2006. http://www.uvka.de/univerlag/volltexte/2007/195/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

ARAÚJO, Tiago Brasileiro. "Uma abordagem em paralelo para matching de grandes ontologias com balanceamento de carga." Universidade Federal de Campina Grande, 2016. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1316.

Full text
Abstract:
Atualmente, o uso de grandes ontologias em diversos domínios do conhecimento está aumentando. Uma vez que estas ontologias podem apresentar sobreposição de conteúdo, a identificação de correspondências entre seus conceitos se torna necessária. Esse processo é chamado de Matching de Ontologias (MO). Um dos maiores desafios do matching de grandes ontologias é o elevado tempo de execução e o excessivo consumo de recursos de computacionais. Assim, para melhorar a eficiência, técnicas de particionamento de ontologias e paralelismo podem ser empregadas no processo de MO. Este trabalho apresenta uma abordagem para o Matching de Ontologias baseado em Particionamento e Paralelismo (MOPP) que particiona as ontologias de entrada em subontologias e executa as comparações entre conceitos em paralelo, usando o framework MapReduce como solução programável. Embora as técnicas de paralelização possam melhorar a eficiência do processo de MO, essas técnicas apresentam problemas referentes ao desbalanceamento de carga. Por essa razão, o presente trabalho propõe ainda duas técnicas para balanceamento de carga (básica e refinada) para serem aplicadas junto à abordagem MOPP, a fim de orientar a distribuição uniforme das comparações (carga de trabalho) entre os nós de uma infraestrutura computacional. O desempenho da abordagem proposta é avaliado em diferentes cenários (diferentes tamanhos de ontologias e graus de desbalanceamento de carga) utilizando uma infraestrutura computacional e ontologias reais e sintéticas. Os resultados experimentais indicam que a abordagem MOPP é escalável e capaz de reduzir o tempo de execução do processo de MO. No que diz respeito às técnicas de balanceamento de carga, os resultados obtidos mostram que a abordagem MOPP é robusta, mesmo em cenários com elevado grau de desbalanceamento de carga, com a utilização da técnica refinada de balanceamento de carga.
Currently, the use of large ontologies in various areas of knowledge is increasing. Since these ontologies can present overlapping content, the identification of correspondences among their concepts is necessary. This process is called Ontology Matching (OM). One of the major challenges of matching large ontologies is the high execution time and the consumption of computational resources. Therefore, to improve efficiency, partitioning and parallel techniques can be employed in the OM process. This work presents a Partition-Parallel-based Ontology Matching (PPOM) approach which partitions the input ontologies into sub-ontologies and executes the comparisons between concepts in parallel, using the MapReduce framework as a programmable solution. Although parallel techniques can improve the efficiency of the OM process, they present problems concerning load imbalance. For that reason, this work also proposes two load balancing techniques - a basic and a fine-grained one - to be applied together with the PPOM approach, in order to guide the uniform distribution of the comparisons (workload) among the nodes of a computing infrastructure. The performance of the proposed approach is assessed in different settings (different ontology sizes and degrees of load imbalance) using a computing infrastructure and real and synthetic ontologies. The experimental results indicate that the PPOM approach is scalable and able to reduce the execution time of the OM process. Regarding the load balancing techniques, the results show that the PPOM approach is robust, even in settings with a high degree of load imbalance, when the fine-grained load balancing technique is used.
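A small Python sketch of the two ideas in the abstract: the pairwise concept comparisons are split into chunks (load balancing) and processed in parallel. Here a multiprocessing pool and a string-similarity measure stand in for the MapReduce framework and the real matchers, and the round-robin chunking is a simplification of the basic/fine-grained balancing techniques proposed in the thesis.

```python
from difflib import SequenceMatcher
from itertools import product
from multiprocessing import Pool

ONTOLOGY_A = ["Person", "Publication", "Conference", "Organisation"]
ONTOLOGY_B = ["Human", "Paper", "Meeting", "Organization"]


def balance(pairs, n_workers):
    """Simple load balancing: deal the comparison pairs round-robin into equal chunks."""
    chunks = [[] for _ in range(n_workers)]
    for i, pair in enumerate(pairs):
        chunks[i % n_workers].append(pair)
    return chunks


def process_chunk(chunk):
    """One 'node' compares its assigned concept pairs and emits (a, b, similarity)."""
    return [(a, b, SequenceMatcher(None, a.lower(), b.lower()).ratio()) for a, b in chunk]


if __name__ == "__main__":
    pairs = list(product(ONTOLOGY_A, ONTOLOGY_B))  # the full comparison workload
    chunks = balance(pairs, n_workers=4)           # one chunk per worker/node
    with Pool(4) as pool:
        per_node = pool.map(process_chunk, chunks)  # comparisons run in parallel
    matches = [(a, b, s) for node in per_node for a, b, s in node if s > 0.8]
    print(matches)  # e.g. the Organisation/Organization pair
```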
APA, Harvard, Vancouver, ISO, and other styles
23

Silva, Marcel Santos. "Sistemas de informações geográficas : elementos para o desenvolvimento de bibliotecas digitais geográficas distribuídas /." Marília : [s.n.], 2006. http://hdl.handle.net/11449/93711.

Full text
Abstract:
Advisor: Silvana Aparecida Borsetti Gregório Vidotti
Committee member: Plácida Leopoldina Ventura Amorim da Costa Santos
Committee member: Sérgio Antonio Rohm
Resumo: O desenvolvimento de tecnologias de informação e comunicação aplicadas às informações geográficas cresce de forma considerável e torna mais visível o aumento de Sistemas de Informações Geográficas, principalmente em ambientes governamentais, que buscam disponibilizar a informação geográfica a um número de pessoas cada vez maior. O objetivo deste trabalho é apresentar uma arquitetura com elementos para a construção de uma Biblioteca Digital Geográfica Distribuída, utilizando os padrões e os conceitos da Ciência da Informação juntamente com o Geoprocessamento. Serão apresentados os conceitos de bibliotecas digitais, os padrões de metadados para informações geográficas, além de geo-ontologias que contribuem para melhor organização e recuperação da informação geográfica. Utilizou-se os SIGs e a teoria da Ciência da Informação, focadas em especial para o desenvolvimento de Biblioteca Digital Geográfica Distribuída. A proposta para construção de uma Biblioteca Digital Geográfica Distribuída baseia-se no princípio de cooperação entre sistemas e considera o acesso livre as informações geográficas, a interoperabilidade possibilitada pela padronização dos metadados e das geo-ontologias. A arquitetura proposta para o desenvolvimento de Bibliotecas Digitais Geográficas Distribuídas atende os requisitos de representação da informação, as formas de comunicação e o protocolo de coleta de metadados e objetos digitais, possibilitando assim, o compartilhamento dos acervos informacionais geográficos distribuídos em diferentes Bibliotecas Digitais Geográficas. Apontam-se os elos entre o Geoprocessamento e a Ciência da Informação em relação à estruturação de ambientes de informações geográficas, que possam ser acessadas via rede de computadores.
Abstract: The development of information and communication technologies applied to geographical information is growing considerably, making the increase of Geographic Information Systems more visible, mainly in government environments concerned with making geographic information available to more and more people. The aim of this work is to present an architecture with elements for the construction of a Distributed Geographical Digital Library, using patterns and concepts from Information Science together with geoprocessing. The concepts of digital libraries and metadata standards for geographical information are presented, as well as geo-ontologies that contribute to better organization and retrieval of geographical information. Geographic Information Systems and the theory of Information Science were used, focused especially on the development of a Distributed Geographical Digital Library. The proposal for building a Distributed Geographical Digital Library is based on the principle of cooperation among systems and considers free access to geographical information and the interoperability enabled by the standardization of metadata and geo-ontologies. The proposed architecture meets the requirements of information representation, forms of communication, and the harvesting protocol for metadata and digital objects, thus enabling the sharing of geographical information collections distributed across different Geographical Digital Libraries. The links between geoprocessing and Information Science are pointed out with regard to structuring geographical information environments that can be accessed through computer networks.
Master's
APA, Harvard, Vancouver, ISO, and other styles
24

Amjad, Fahd. "Approche ontologie pour l'intégration des entreprises distribuées." Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0334/document.

Full text
Abstract:
Dans cette thèse, nous fournissons un examen complet des technologies du Web sémantique et de leurs utilités dans le contexte actuel des petites et moyennes entreprises (PME). Les approches traditionnelles d'intégration des entreprises favorisent essentiellement les grandes entités. Les obligations contractuelles fortes sur les PME, mais en même temps leur volonté de garder leurs compétences individuelles, et ce, dans un environnement limitant leur choix, les obligent à prendre des décisions stratégiques et de conclure des accords sur le long terme avec leurs partenaires, limitant ainsi leur flexibilité aux fluctuations du marché. Nous proposons, donc, une approche ontologique basée sur Web sémantique pour l'intégration de l'information ainsi que des ressources matérielles de l'entreprise distribuée. Cette approche, basée sur le Web, agit comme un système d'aide à la décision pour utiliser des ressources de meilleure qualité ainsi que pour l'intégration de l'information distribuée. Les travaux relatifs à l'ontologie web, pour l'intégration d'information ne sont pas nouveaux, mais l'approche proposée par nous est une valeur ajoutée pour l'entreprise distribuée. De plus, nous avons également proposé l'ontologie Web sémantique comme un système de configuration pour gérer les ressources distribuées de l'entreprise virtuelle. Puis, nous avons modélisé l'ontologie OWL-DL en nous basant sur la sémantique de la norme ISA-95, relative à l'intégration d'entreprises industrielles. Ensuite, nous utilisons cet artefact ontologique comme un artefact de configuration permettant de gérer le matériel de l'entreprise virtuelle distribuée ainsi que les ressources matérielles. C'est la proposition principale de cette thèse : utiliser l'ontologie Web sémantique comme un système d'aide à la décision pour la configuration de l'utilisation des ressources
In this thesis, we provide a complete review of Semantic Web technologies and their utility in the current environment of small and medium-sized enterprises (SMEs). Traditional approaches to enterprise integration favour large enterprise entities and impose contractual limitations on smaller partners, while at the same time the pressure to guard each enterprise's individual competence is ever increasing. Distributed enterprises in such an environment have a limited number of choices, which forces them to make strategic decisions and enter into long-term agreements with their partners, limiting their flexibility to respond to market changes. We propose a Semantic Web ontology-based approach for integrating the information as well as the physical resources of the distributed enterprise. This web-based approach acts as decision support for better resource utilisation as well as for distributed information integration. Work on web ontologies for information integration is not new, but the approach proposed in this thesis for the distributed enterprise is an added value. Similarly, we also propose a Semantic Web ontology as a configuration system to manage the distributed resources of the virtual enterprise: we model an OWL-DL ontology on the semantics of the industrial integration standard ISA-95 and subsequently use this ontology artefact as a configuration artefact to manage the distributed virtual enterprise's material and equipment resources. This is the main proposition of the thesis: utilising a Semantic Web ontology as decision support for resource configuration.
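For illustration only: an rdflib sketch of the general idea of querying an equipment ontology as configuration decision support. The namespace, class and property names are hypothetical stand-ins, not the ISA-95 vocabulary or the thesis's actual OWL-DL model.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

MFG = Namespace("http://example.org/isa95-like#")  # hypothetical, not the ISA-95 vocabulary

g = Graph()
g.add((MFG.EquipmentClass, RDF.type, RDFS.Class))
g.add((MFG.MillingMachine, RDFS.subClassOf, MFG.EquipmentClass))
g.add((MFG.Mill01, RDF.type, MFG.MillingMachine))
g.add((MFG.Mill01, MFG.locatedAtSite, Literal("Site-B")))
g.add((MFG.Mill01, MFG.isAvailable, Literal(True)))

# Configuration-style question: which milling machines are available, and at which site?
query = """
PREFIX mfg: <http://example.org/isa95-like#>
SELECT ?machine ?site WHERE {
  ?machine a mfg:MillingMachine ;
           mfg:isAvailable true ;
           mfg:locatedAtSite ?site .
}
"""
for row in g.query(query):
    print(row.machine, row.site)
```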
APA, Harvard, Vancouver, ISO, and other styles
25

Ленько, Василь Степанович. "Методи та засоби управління персональними знаннями в інтелектуальних системах." Diss., Національний університет "Львівська політехніка", 2021. https://ena.lpnu.ua/handle/ntb/56148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Saad, Sawsan. "Conception et Optimisation Distribuée d'un Système d'Information des Services d'Aide à la Mobilité Urbaine Basé sur une Ontologie Flexible dans le Domaine de Transport." Phd thesis, Ecole Centrale de Lille, 2010. http://tel.archives-ouvertes.fr/tel-00586086.

Full text
Abstract:
Nowadays, information related to travel and mobility in a transport network undoubtedly represents significant potential. This work aims to implement an Information System for Urban Mobility Assistance Services (SISAMU). The SISAMU must be able to decompose simultaneous requests into sets of independent tasks. Each task corresponds to a service that may be offered by several competing information providers, with different costs, response times and formats. The SISAMU is connected to an extended, distributed multimodal transport network (RETM) comprising several heterogeneous information sources for the various services offered to transport users. The dynamic, distributed and open nature of the problem led us to adopt a multi-agent model to give the system continuous evolution and pragmatic flexibility. To this end, we proposed to automate the modelling of services using the notion of ontology. Our SISAMU takes possible disturbances on the RETM into account, and we therefore created a negotiation protocol between agents. The proposed negotiation protocol, which uses ontology mapping, relies on a knowledge management system to support semantic heterogeneity. We detail the Algorithm for the Dynamic Reconstruction of Agent Paths (ARDyCA), which is based on the ontology mapping approach. Finally, the results presented in this thesis justify the use of a flexible ontology and its role in the negotiation process.
APA, Harvard, Vancouver, ISO, and other styles
27

Gandon, Fabien. "INTELLIGENCE ARTIFICIELLE DISTRIBUÉE ET GESTION DES CONNAISSANCES : ONTOLOGIES ET SYSTÈMES MULTI-AGENTS POUR UN WEB SÉMANTIQUE ORGANISATIONNEL." Phd thesis, Université de Nice Sophia-Antipolis, 2002. http://tel.archives-ouvertes.fr/tel-00378201.

Full text
Abstract:
This work considers multi-agent systems for managing a corporate semantic web based on an ontology. In the CoMMA project, I focused on two application scenarios: supporting technology-monitoring activities and helping integrate a new employee into an organisation. Three aspects were developed in this work:
- the design of a multi-agent architecture supporting both scenarios, and the top-down organisational approach adopted to identify the agents' societies, roles and interactions;
- the construction of the O'CoMMA ontology and the structuring of the organisational memory using Semantic Web technologies;
- the design and implementation of (a) the agent sub-societies in charge of maintaining the annotations and the ontology and (b) the protocols supporting these two groups of agents, in particular techniques for distributing annotations and queries among the agents.
APA, Harvard, Vancouver, ISO, and other styles
28

Thomas, Cerqueus. "Contributions au problème d'hétérogénéité sémantique dans les systèmes pair-à-pair : application à la recherche d'information." Phd thesis, Université de Nantes, 2012. http://tel.archives-ouvertes.fr/tel-00763914.

Full text
Abstract:
We consider peer-to-peer (P2P) data-sharing systems in which each peer is free to choose the ontology that best matches its needs for representing its data. This situation is known as semantic heterogeneity, and it is a major obstacle to interoperability because queries issued by some peers may not be understood by others. We first focus on the notion of semantic heterogeneity itself and define a set of measures that finely characterise the heterogeneity of a system along different facets. We then define two protocols. The first, called CorDis, reduces the semantic heterogeneity related to disparities between peers: it disseminates correspondences through the system so that peers learn new correspondences. The second, called GoOD-TA, reduces the semantic heterogeneity related to the system's organisation: the goal is to organise the system so that semantically close peers are also close in the overlay. Two peers thus become neighbours if they use the same ontology or if many correspondences exist between their respective ontologies. Finally, we propose the DiQuESH algorithm for routing and processing top-k queries in semantically heterogeneous P2P systems; it allows a peer to obtain the k most relevant documents from its neighbourhood. We show experimentally that CorDis and GoOD-TA improve the results obtained by DiQuESH.
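A toy sketch of the topology-adaptation idea behind GoOD-TA as summarised above: each peer keeps as overlay neighbours the peers that are semantically closest to it (same ontology, or many known correspondences). The peers, counts and affinity rule are invented for illustration and are not the protocol's actual messages.

```python
# Which ontology each peer uses, and how many correspondences are known between ontologies.
PEER_ONTOLOGY = {"p1": "O1", "p2": "O2", "p3": "O1", "p4": "O3"}
CORRESPONDENCES = {("O1", "O2"): 40, ("O1", "O3"): 5, ("O2", "O3"): 12}


def affinity(peer_a: str, peer_b: str) -> float:
    """Semantic affinity between two peers (same ontology = maximal)."""
    oa, ob = PEER_ONTOLOGY[peer_a], PEER_ONTOLOGY[peer_b]
    if oa == ob:
        return float("inf")
    return CORRESPONDENCES.get((oa, ob)) or CORRESPONDENCES.get((ob, oa), 0)


def choose_neighbours(peer: str, k: int = 2) -> list:
    """Keep the k semantically closest peers as overlay neighbours."""
    others = [p for p in PEER_ONTOLOGY if p != peer]
    return sorted(others, key=lambda p: affinity(peer, p), reverse=True)[:k]


for peer in PEER_ONTOLOGY:
    print(peer, "->", choose_neighbours(peer))
```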
APA, Harvard, Vancouver, ISO, and other styles
29

Le, Pham Anh. "De l'optimisation à la décomposition de l'ontologique dans la logique de description." Phd thesis, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00507431.

Full text
Abstract:
Efficient reasoning over a large description logic knowledge base is a current challenge because of "intractable" inferences, even for relatively inexpressive description logic languages. Indeed, the presence of axioms in the terminology (TBox) is one of the important causes of an exponential growth of the search space explored by inference algorithms. Reasoning in description logics (DL) essentially amounts to testing the subsumption relation between concepts, so one constantly looks for ways to optimise this reasoning. Optimisation techniques for improving the performance of a DL reasoner therefore fall naturally into three levels: the conceptual level, which considers techniques for optimising the structure of the axioms in the TBox; the algorithmic level, which examines techniques for reducing the storage requirements of the tableau algorithm and for optimising the subsumption (satisfiability) test; and query optimisation, which looks for optimal execution strategies for querying a knowledge base. In this thesis, we studied an ontology-decomposition approach called "overlay decomposition", which pursues two main objectives: optimising reasoning and providing a methodology for ontology design. On the one hand, for optimisation, we seek to split an ontology into a set of sub-ontologies, each containing part of the original ontology's axiom set, thereby obtaining a relative reduction of the reasoning time; on the other hand, the design methodology makes it possible to replace one ontology by a set of ontologies in a more or less "optimal" organisation. For the first objective, the overlay decomposition of an ontology yields a set of sub-ontologies grouped into a distributed ontology, called the decomposing ontology (decomposing TBox) and represented in distributed description logics. Intuitively, being able to reason in parallel over these sub-ontologies, each with a reduced search space, can lead to a relative reduction of the reasoning time. An important property of this ontology is that it is interpreted over the same domain as the original ontology, which is the basis on which we propose two reasoning algorithms for the decomposing ontology. Concerning the design-methodology objective, we introduce two ontology decomposition methods based on heuristic graph decomposition: one relies on decomposition along minimal separators of triangulated graphs, and the other on decomposition according to the normalized-cut measure of regions of a graph.
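A very rough sketch of the idea of splitting a TBox into sub-ontologies that can be handled separately. Here the axioms are simply grouped by connected components of a concept graph, which is a much cruder criterion than the minimal-separator and normalized-cut decompositions studied in the thesis; the toy axioms are invented.

```python
from collections import defaultdict

# A toy TBox: axioms as (left concept, right concept) pairs, e.g. "Cat SubClassOf Animal".
AXIOMS = [("Cat", "Animal"), ("Dog", "Animal"), ("Engine", "Machine"), ("Car", "Machine")]


def decompose(axioms):
    """Group axioms into sub-ontologies by connected components of the concept graph."""
    adjacency = defaultdict(set)
    for left, right in axioms:
        adjacency[left].add(right)
        adjacency[right].add(left)

    seen, sub_ontologies = set(), []
    for start in adjacency:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:  # plain depth-first search over the concept graph
            node = stack.pop()
            if node not in component:
                component.add(node)
                stack.extend(adjacency[node] - component)
        seen |= component
        sub_ontologies.append([ax for ax in axioms if ax[0] in component])
    return sub_ontologies


for i, sub in enumerate(decompose(AXIOMS), start=1):
    print(f"sub-ontology {i}: {sub}")
```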
APA, Harvard, Vancouver, ISO, and other styles
30

Besançon, Léo. "Interopérabilité des Systèmes Blockchains." Thesis, Lyon, 2021. https://tel.archives-ouvertes.fr/tel-03789639.

Full text
Abstract:
La Blockchain est une technologie disruptive. Elle s'intègre dans un écosystème décentralisé d’applications aux propriétés intéressantes : la transparence des transactions, l’auditabilité des applications, ou encore la résistance à la censure. Les domaines d'application sont variés, de la finance à la santé ou au jeu vidéo. La technologie a évolué depuis sa création en 2008 et possède de nombreuses perspectives. Néanmoins, le domaine rencontre de nombreux défis. Chaque Blockchain utilisant ses propres standards et modèles économiques, il subit notamment un manque d’interopérabilité à différents niveaux : entre les différents projets d'une Blockchain, entre les différentes Blockchains, ainsi qu’entre les Blockchains et les autres systèmes. Un aspect important de l'interopérabilité des systèmes Blockchains est leur interopérabilité sémantique, qui nécessite de définir formellement les concepts liés. Un autre défi est la conception d'applications Blockchains décentralisées. Ces applications intègrent la technologie Blockchain, mais aussi d'autres services qui permettent de satisfaire les contraintes de l'application pour lesquelles la Blockchain n'est pas adaptée. Cependant, il est complexe de choisir les services Blockchain les plus adaptés à une application donnée. Cette thèse a pour objectif la proposition d’un cadre permettant d’améliorer l’interopérabilité des applications Blockchain décentralisées. Pour cela, nous développons une méthodologie d'aide à la conception de ces applications, ainsi qu'une ontologie Blockchain qui aide à formaliser leurs concepts. Ce cadre est validé dans le domaine des jeux vidéo Blockchain. Cet environnement est complexe, car il nécessite le partage de données volumineuses. De plus, les contraintes de latence doivent être respectées
Blockchains are a disruptive technology. They enable an ecosystem for decentralized applications with interesting properties: transaction transparency, application auditability or censorship resistance. They have applications in various fields, such as finance, healthcare or video games. The technology has evolved a lot since its creation in 2008 and offers many prospects. However, the field faces many challenges, in particular a lack of interoperability at several levels: between projects on the same Blockchain, between different Blockchains, or between Blockchains and other systems. One important aspect of Blockchain systems interoperability is semantic interoperability, which relies on formal definitions of the related concepts. Another challenge is the design of decentralized Blockchain applications. These applications integrate Blockchain technology, but also other services that satisfy the application constraints for which Blockchain is not suitable. However, it is complex to choose the Blockchain services best suited to a given application. With this PhD work, we propose a framework that can improve interoperability for decentralized Blockchain applications. We achieve this with the design of a methodology and a Blockchain ontology that help formalize the concepts related to an application. This framework is validated in the context of Blockchain video game development, a complex use case, as it needs to share storage-intensive data and satisfy latency constraints.
APA, Harvard, Vancouver, ISO, and other styles
31

Zgaya, Hayfa. "Conception et optimisation distribuée d'un système d'information d'aide à la mobilité urbaine : Une approche multi-agent pour la recherche et la composition des services liés au transport." Phd thesis, Ecole Centrale de Lille, 2007. http://tel.archives-ouvertes.fr/tel-00160802.

Full text
Abstract:
The research presented in this thesis is part of the national project VIATIC.MOBILITE of the I-TRANS competitiveness cluster "Le ferroviaire au cœur des systèmes de transports innovants" (http://www.i-trans.org/index.htm). Nowadays, information related to travel and mobility in a transport network undoubtedly represents significant potential. Indeed, one can imagine countless innovative mobility-related services, aimed not only at the general public but also at companies, for example mobility consulting for their travel plans. The goal of this thesis is therefore to provide a mobility assistance system built around daily, occasional, tourism, and cultural travel needs, with the possibility of benefiting from relevant and usable information.
This research aims to implement a Multimodal Transport Information System (SITM) to optimize the management of the flow of user requests, which may be numerous and simultaneous. The SITM must therefore be able to decompose simultaneous requests into a set of independent tasks. Each task corresponds to a service that may be offered by several competing information providers, with different costs, formats, and response times. An information provider wishing to offer its services through the SITM must first register its information system, taking responsibility for the legal and qualitative aspects of its data. The SITM is thus linked to an extended and distributed multimodal transport network (RETM) comprising several heterogeneous information sources for the various services offered to transport users.
The dynamic, distributed, and open nature of the problem led us to adopt a multi-agent model to give the system continuous evolution and pragmatic flexibility. The proposed multi-agent system relies on metaheuristics for service search and composition; the service search is based on the Mobile Agent (AM) paradigm, using a dynamic optimization algorithm for building Route Plans (PDR). This first optimization step prepares the itineraries of the mobile agents while taking the state of the RETM into account. Service composition uses evolutionary algorithms to optimize responses in terms of cost and time, given that a response to a user request must not exceed a maximum allowed time and that a user always seeks the best value for money for the requested services.
Finally, the SITM takes into account possible disruptions of the RETM (failures, bottlenecks, etc.) in order to satisfy user requests in all cases. In this context, we created a negotiation protocol between the mobile agents and the agents responsible for choosing the information providers for the requested services, called Scheduler agents. The proposed protocol goes beyond the limits of traditional agent communication, which led us to associate a flexible ontology with the system; it automates the different types of exchanges between agents through an appropriate vocabulary.
The experimental results presented in this thesis justify the use of the mobile agent paradigm in our system, which fully replaces classical paradigms such as the client/server architecture. The simulations presented show different scenarios for managing larger or smaller numbers of simultaneous requests. Indeed, whatever the number of user requests formulated within a short period of time, the system handles their decomposition, the identification of the requested services, and the identification of the information providers able to answer them.
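The composition step can be illustrated with a deliberately small example: one provider is chosen per task so that the total response time stays within a budget while total cost is minimized. The offers and the brute-force search below are invented; the thesis itself relies on evolutionary algorithms and mobile-agent route plans, which are not reproduced here.

```python
# Illustrative sketch of cost-optimal service composition under a time budget.
# Assumes tasks are served sequentially, so response times simply add up.
from itertools import product

# Hypothetical offers: task -> list of (provider, cost, response_time)
offers = {
    "itinerary": [("P1", 4.0, 2.0), ("P2", 2.5, 5.0)],
    "weather":   [("P3", 1.0, 1.0), ("P4", 0.5, 4.0)],
    "events":    [("P5", 3.0, 2.0), ("P6", 2.0, 3.0)],
}
TIME_BUDGET = 8.0   # maximum total response time allowed

best = None
for combo in product(*offers.values()):
    cost = sum(o[1] for o in combo)
    time = sum(o[2] for o in combo)
    if time <= TIME_BUDGET and (best is None or cost < best[0]):
        best = (cost, time, [o[0] for o in combo])

print("cheapest feasible composition:", best)   # -> (6.5, 8.0, ['P2', 'P3', 'P5'])
```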
APA, Harvard, Vancouver, ISO, and other styles
32

Tzou, Meng-Shiun, and 鄒孟訓. "Ontology Alignment System with Adaptable and Distributed Matching Strategy." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/prg3fv.

Full text
Abstract:
Master's thesis
National Central University
Institute of Computer Science and Information Engineering
94
When an agent encounters an ontology instance that is similar to, but different from, its own, an ontology alignment system (OAS) can align the two ontologies so that the agent can understand it. The benefits of this OAS are: 1) an adjustable matching strategy, to which matchers can be added or from which they can be removed, and 2) distributed processing that follows the XML-RPC architecture, using XML messages over the HTTP POST transmission protocol, which makes OAS communication cross-platform and language-independent. Different OASs can thus share different ontologies.
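A minimal sketch of the distributed matching idea, using Python's standard xmlrpc modules: a matcher is exposed as an XML-RPC function and invoked over HTTP POST. The match_labels function and its string-similarity scoring are placeholders, not the matching strategies of the actual system.

```python
# Minimal sketch of a matcher exposed over XML-RPC (server and client in one script).
import threading
from difflib import SequenceMatcher
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def match_labels(label_a, label_b):
    """Return a similarity score in [0, 1] between two concept labels."""
    return SequenceMatcher(None, label_a.lower(), label_b.lower()).ratio()

server = SimpleXMLRPCServer(("localhost", 0), logRequests=False, allow_none=True)
port = server.server_address[1]
server.register_function(match_labels, "match_labels")
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client (possibly on another host or written in another language)
# calls the matcher through plain XML over HTTP POST.
proxy = ServerProxy(f"http://localhost:{port}")
print(proxy.match_labels("PostalAddress", "Address"))

server.shutdown()
```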
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Yi-Bin, and 王怡斌. "Development of Mechanism for Ontology-Based Distributed Case-Based Reasoning." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/71605094936202387746.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Manufacturing Engineering (Master's and Doctoral Program)
94
With the advent of the knowledge-based economy and distributed enterprises, enterprises acquire knowledge not only from themselves but also from others. To support knowledge integration in distributed enterprises, distributed case-based reasoning systems (DCBRSs) play an important role in knowledge and experience sharing. To date, research on DCBRSs has focused mainly on retrieving cases within the same system and on using a pre-defined standard of domain knowledge for knowledge sharing, while the need to share knowledge among heterogeneous CBR systems has not been considered. In addition, traditional CBR systems only return similar cases without performing case adaptation. The objective of this research is to develop a mechanism for ontology-based distributed case-based reasoning that exploits the characteristics of ontologies and a proposed multistage algorithm. This thesis proposes a distributed CBR system architecture and uses an ontology to solve the semantic mismatch problems between heterogeneous cases, as well as to perform case adaptation without the involvement of domain experts. The results of this study enable heterogeneous knowledge retrieval in distributed enterprises and thus facilitate knowledge sharing.
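How an ontology can bridge a semantic mismatch during retrieval can be sketched as follows: a toy taxonomy supplies a similarity between attribute values coming from heterogeneous cases. The taxonomy, the cases, and the Wu-Palmer-style score are invented and do not reproduce the thesis's multistage algorithm.

```python
# Sketch of ontology-assisted case retrieval over a toy materials taxonomy.
parent = {  # child -> parent
    "Steel": "Metal", "Aluminium": "Metal", "Metal": "Material",
    "ABS": "Polymer", "Polymer": "Material", "Material": None,
}

def ancestors(c):
    chain = []
    while c is not None:
        chain.append(c)
        c = parent.get(c)
    return chain

def concept_sim(a, b):
    """Wu-Palmer-style similarity: deeper common ancestors score higher."""
    anc_a, anc_b = ancestors(a), ancestors(b)
    lca = next(x for x in anc_a if x in anc_b)   # lowest common ancestor
    return 2.0 * len(ancestors(lca)) / (len(anc_a) + len(anc_b))

cases = {"case1": "Steel", "case2": "ABS", "case3": "Aluminium"}
query = "Metal"
ranked = sorted(cases.items(), key=lambda kv: concept_sim(query, kv[1]), reverse=True)
print(ranked)   # the metal cases rank above the polymer case
```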
APA, Harvard, Vancouver, ISO, and other styles
34

Wen, Chiun-Cheng, and 溫俊誠. "Ontology-Based Distributed Case-Based Reasoning in Virtual R&D." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/81913456409021010438.

Full text
Abstract:
Doctoral dissertation
National Cheng Kung University
Institute of Manufacturing Engineering (Master's and Doctoral Program)
97
With the advent of the knowledge economy and virtual R&D models, enterprises acquire knowledge not only from themselves but also from others. Distributed case-based reasoning systems (DCBRSs) play an important role in virtual R&D by supporting knowledge sharing. This study develops a novel mechanism for ontology-based distributed case-based reasoning that uses an ontology and a proposed multistage algorithm to effectively support knowledge sharing within a virtual R&D environment. The tasks involved in this study are as follows: (i) design an ontology-based distributed case-based reasoning architecture and procedure, (ii) develop techniques related to ontology-based distributed case-based reasoning, and (iii) implement an ontology-based distributed case-based reasoning mechanism. Developing these methods involves the definition and representation of a user query model, the definition and representation of a knowledge case model, the definition and establishment of a knowledge case index structure, and the development of distributed knowledge case retrieval and knowledge case adaptation methods. The results facilitate heterogeneous knowledge sharing among enterprises participating in a virtual R&D environment.
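The distributed retrieval step alone can be sketched as querying several case bases independently and merging their locally ranked hits into a global top-k; the node contents and scores below are invented, and the ontology-based query model and case adaptation are omitted.

```python
# Sketch of distributed case retrieval: local ranking per node, global top-k merge.
import heapq

def local_retrieve(case_base, query, k=2):
    """Each node scores its own cases (here: count of shared feature values)."""
    scored = [(sum(1 for f in query if case.get(f) == query[f]), cid)
              for cid, case in case_base.items()]
    return heapq.nlargest(k, scored)

nodes = {
    "plantA": {"a1": {"process": "milling", "material": "Steel"},
               "a2": {"process": "casting", "material": "Aluminium"}},
    "plantB": {"b1": {"process": "milling", "material": "Aluminium"}},
}
query = {"process": "milling", "material": "Steel"}

merged = heapq.nlargest(3, (hit + (node,) for node, cb in nodes.items()
                            for hit in local_retrieve(cb, query)))
print(merged)   # [(2, 'a1', 'plantA'), (1, 'b1', 'plantB'), (0, 'a2', 'plantA')]
```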
APA, Harvard, Vancouver, ISO, and other styles
35

Senik, Michał. "Ontology adaptation for the distributed control systems management and integration purposes." Doctoral dissertation, 2017. https://repolis.bg.polsl.pl/dlibra/docmetadata?showContent=true&id=45464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Senik, Michał. "Ontology adaptation for the distributed control systems management and integration purposes." Doctoral dissertation, 2017. https://delibra.bg.polsl.pl/dlibra/docmetadata?showContent=true&id=45464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Hung, Tzung-Liang, and 洪宗良. "Optimizing Data Allocation for Distributed Databases on Intranet by Ontology Schema." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/39368252913637698405.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Department of Information Management (Master's Program)
91
With the development of network and database technology, distributed databases are widely used in companies. To increase the efficiency of data processing and reduce the cost of data transmission in distributed databases, data should be placed at the most appropriate server sites. Most previous research assumes that the database design already exists and investigates the data allocation problem on existing distributed databases by analyzing the system's operation history, such as query frequency and data affinity. In this thesis, an ontology schema is proposed to analyze and represent the data requirements for constructing a new distributed database system, and two data allocation techniques are then proposed to minimize the total transmission cost between server sites while taking usage into account. In addition, the total transmission cost of a distributed database depends not only on the data allocation but also on the intranet topological design. Most research has addressed the topological design problem on particular network architectures and used heuristic algorithms to find optimal network connections. In this thesis, a method for intranet design is proposed that establishes an ontology schema for the organizational data and then sketches an intranet topology to obtain a deterministic solution. An intranet based on this method not only fulfills the requirements of the enterprise information systems but also provides the minimum total transmission cost.
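For the non-replicated case, the allocation idea reduces to placing each fragment on the site that minimizes the usage-weighted transmission cost, as in the sketch below; the sites, frequencies, and costs are invented, and the ontology-driven requirements analysis of the thesis is not modelled.

```python
# Sketch of single-copy data allocation minimizing total transmission cost.
sites = ["S1", "S2", "S3"]
unit_cost = {  # cost of shipping one access between two sites
    ("S1", "S1"): 0, ("S1", "S2"): 2, ("S1", "S3"): 5,
    ("S2", "S2"): 0, ("S2", "S3"): 3,
    ("S3", "S3"): 0,
}
def cost(a, b):
    return unit_cost.get((a, b), unit_cost.get((b, a)))

# access_freq[fragment][site] = how often that site queries the fragment
access_freq = {
    "orders":    {"S1": 50, "S2": 5,  "S3": 10},
    "inventory": {"S1": 5,  "S2": 40, "S3": 20},
}

allocation = {}
for frag, freqs in access_freq.items():
    # place the fragment where the usage-weighted shipping cost is lowest
    best_site = min(sites, key=lambda s: sum(f * cost(s, q) for q, f in freqs.items()))
    allocation[frag] = best_site
print(allocation)   # {'orders': 'S1', 'inventory': 'S2'}
```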
APA, Harvard, Vancouver, ISO, and other styles
38

Lin, Peter, and 林煥國. "A Study of Ontology-Based distributed agent society – An example for election prediction." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/02950904538175430637.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Mechanical Engineering
93
In developing knowledge-based applications, we face knowledge management issues in document creation, knowledge discovery, sharing, and transfer. How to create a suitable platform that makes access easy is a major challenge for recent agent-based systems. This research presents a provider-consumer information model to demonstrate the knowledge exchange process under an ontology-based message structure. We demonstrate the information flow among agents using a Nash Equilibrium model, applied specifically to election prediction. The multi-agent system is implemented and conforms to the ontology of the FIPA (Foundation for Intelligent Physical Agents) standard. A distributed platform is implemented to demonstrate the integration process; ontology, agent, and knowledge base technologies are combined to make communication between agents more productive. Agents are designed to play specified roles and to exhibit desirable characteristics such as autonomy, common sense, and social ability. Information generated by the agents is well formed to satisfy the FIPA standard. Our case studies demonstrate effective communication and interaction by storing XML-based information about definitions and relations in the knowledge base.
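The game-theoretic ingredient can be illustrated in isolation: the sketch below enumerates the pure-strategy Nash equilibria of a small two-player game by best-response checking. The strategies and payoffs are invented and are unrelated to the election-prediction model of the thesis.

```python
# Toy enumeration of pure-strategy Nash equilibria in a 2x2 game.
from itertools import product

# payoffs[(row_strategy, col_strategy)] = (row_payoff, col_payoff)
payoffs = {
    ("reveal", "reveal"): (3, 3), ("reveal", "withhold"): (0, 5),
    ("withhold", "reveal"): (5, 0), ("withhold", "withhold"): (1, 1),
}
rows = cols = ["reveal", "withhold"]

def is_nash(r, c):
    # neither player can gain by deviating unilaterally
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
    return row_ok and col_ok

print([rc for rc in product(rows, cols) if is_nash(*rc)])   # [('withhold', 'withhold')]
```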
APA, Harvard, Vancouver, ISO, and other styles
39

Chang, Chun-Fu, and 張淳甫. "Design and Implementation of an Ontology-based Distributed RDF Store Based on Chord Network." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/09631080434308445619.

Full text
Abstract:
Master's thesis
Tatung University
Department of Computer Science and Engineering
97
RDF and OWL are used as the data model and schema, respectively, to build the distributed knowledge base of the Semantic Web. The components of RDF models (subjects, predicates, and objects) are identified universally on the web, which makes RDF suitable for distributed operations. In this work, we employ distributed hash table (DHT) technology on a peer-to-peer network to develop a distributed RDF store. To take the ontology behind the RDF data into account, we extend the Chord ring into a two-level ring, where the first level is based on the ontology schema and the second on the RDF data itself. The extension retains the O(log N) complexity of ring maintenance and message lookup in an N-node system. Simulation results show that adding the additional level reduces the path length of message lookup. We design a three-layered system architecture for the ontology-based distributed RDF store and are developing a prototype according to this design to show how the two-level ring works.
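The placement idea can be sketched with a toy hash ring: a key is derived first from the ontology class and then from the triple's subject, and the responsible node is found by binary search over sorted node identifiers. This sketch implements neither Chord finger tables nor the thesis's two-level ring.

```python
# Sketch of DHT-style placement of RDF triples on a consistent-hash ring.
import hashlib
from bisect import bisect_right

def h(text, bits=16):
    return int(hashlib.sha1(text.encode()).hexdigest(), 16) % (2 ** bits)

node_ids = sorted(h(f"node-{i}") for i in range(8))   # the ring

def successor(key):
    """First node id clockwise from the key (wraps around)."""
    i = bisect_right(node_ids, key)
    return node_ids[i % len(node_ids)]

def place(triple, schema_class):
    # key combines the ontology class (level 1) with the triple's subject (level 2)
    key = h(schema_class + "|" + triple[0])
    return successor(key)

t = ("ex:Course42", "rdf:type", "ex:Course")
print("triple", t, "stored at node", place(t, "ex:Course"))
```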
APA, Harvard, Vancouver, ISO, and other styles
40

Lee, Yang-Yin, and 李昂穎. "On Utilization of Ontology and Retrofitting Techniques for Better Distributed Representations of Words and Senses." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/y7bt2k.

Full text
Abstract:
Doctoral dissertation
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
107
With the increasing number of natural language processing tasks, the need for better representations of words (word embeddings) and senses (sense embeddings) has grown in recent years. In this study, we first discuss the problem of abnormal dimensions in word embeddings and then propose models that combine word embeddings with an ontology. The combination is explored in three ways: a direct combination approach, a support vector regression approach, and a retrofitting approach. For sense embeddings, we first propose a joint sense retrofitting model that learns better sense embeddings from contextual and ontological information, and then generalize the proposed model.
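A minimal sketch of retrofitting in the spirit of Faruqui et al. (2015): each vector is pulled toward its neighbours in a lexicon or ontology while staying close to its original value. The vocabulary, vectors, and neighbour graph are invented, and the joint sense-retrofitting model proposed in the thesis is not reproduced.

```python
# Sketch of standard retrofitting of word vectors toward ontology neighbours.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["car", "automobile", "vehicle", "banana"]
original = {w: rng.normal(size=5) for w in vocab}
neighbours = {"car": ["automobile", "vehicle"],
              "automobile": ["car", "vehicle"],
              "vehicle": ["car", "automobile"],
              "banana": []}

vectors = {w: v.copy() for w, v in original.items()}
for _ in range(10):                       # a few sweeps are enough in practice
    for w in vocab:
        nbrs = neighbours[w]
        if not nbrs:
            continue
        beta = 1.0 / len(nbrs)            # uniform neighbour weights
        num = original[w] + beta * sum(vectors[n] for n in nbrs)
        vectors[w] = num / (1.0 + beta * len(nbrs))   # alpha = 1

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("before:", round(cos(original["car"], original["automobile"]), 3))
print("after: ", round(cos(vectors["car"], vectors["automobile"]), 3))
```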
APA, Harvard, Vancouver, ISO, and other styles
41

Witherell, Paul W. "Semantic methods for intelligent distributed design environments." 2009. https://scholarworks.umass.edu/dissertations/AAI3380041.

Full text
Abstract:
Continuous advancements in technology have led to increasingly comprehensive and distributed product development processes while in pursuit of improved products at reduced costs. Information associated with these products is ever changing, and structured frameworks have become integral to managing such fluid information. Ontologies and the Semantic Web have emerged as key alternatives for capturing product knowledge in both a human-readable and computable manner. The primary and conclusive focus of this research is to characterize relationships formed within methodically developed distributed design knowledge frameworks to ultimately provide a pervasive real-time awareness in distributed design processes. Utilizing formal logics in the form of the Semantic Web’s OWL and SWRL, causal relationships are expressed to guide and facilitate knowledge acquisition as well as identify contradictions between knowledge in a knowledge base. To improve the efficiency during both the development and operational phases of these “intelligent” frameworks, a semantic relatedness algorithm is designed specifically to identify and rank underlying relationships within product development processes. After reviewing several semantic relatedness measures, three techniques, including a novel meronomic technique, are combined to create AIERO, the Algorithm for Identifying Engineering Relationships in Ontologies. In determining its applicability and accuracy, AIERO was applied to three separate, independently developed ontologies. The results indicate AIERO is capable of consistently returning relatedness values one would intuitively expect. To assess the effectiveness of AIERO in exposing underlying causal relationships across product development platforms, a case study involving the development of an industry-inspired printed circuit board (PCB) is presented. After instantiating the PCB knowledge base and developing an initial set of rules, FIDOE, the Framework for Intelligent Distributed Ontologies in Engineering, was employed to identify additional causal relationships through extensional relatedness measurements. In a conclusive PCB redesign, the resulting “intelligent” framework demonstrates its ability to pass values between instances, identify inconsistencies amongst instantiated knowledge, and identify conflicting values within product development frameworks. The results highlight how the introduced semantic methods can enhance the current knowledge acquisition, knowledge management, and knowledge validation capabilities of traditional knowledge bases.
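The idea of combining complementary relatedness signals can be sketched by mixing a taxonomy-path score with a shared-property overlap; the hierarchy, properties, and the 50/50 weighting below are invented and do not reproduce AIERO's actual techniques.

```python
# Sketch of a combined relatedness measure over a toy engineering ontology.
superclass = {"Resistor": "Component", "Capacitor": "Component",
              "Component": "Artifact", "Solder": "Material",
              "Material": "Artifact", "Artifact": None}
properties = {"Resistor": {"hasTolerance", "hasFootprint", "hasValue"},
              "Capacitor": {"hasTolerance", "hasFootprint", "hasValue", "hasVoltageRating"},
              "Solder": {"hasMeltingPoint"}}

def path_score(a, b):
    def chain(c):
        out = []
        while c:
            out.append(c)
            c = superclass.get(c)
        return out
    ca, cb = chain(a), chain(b)
    lca = next(x for x in ca if x in cb)
    dist = ca.index(lca) + cb.index(lca)      # edge distance through the LCA
    return 1.0 / (1.0 + dist)

def property_score(a, b):
    pa, pb = properties.get(a, set()), properties.get(b, set())
    return len(pa & pb) / len(pa | pb) if pa | pb else 0.0

def relatedness(a, b, w=0.5):
    return w * path_score(a, b) + (1 - w) * property_score(a, b)

for pair in [("Resistor", "Capacitor"), ("Resistor", "Solder")]:
    print(pair, round(relatedness(*pair), 3))
```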
APA, Harvard, Vancouver, ISO, and other styles
42

Γεωργουδάκης, Εμμανουήλ. "Εφαρμογή πολυ-πρακτορικού συστήματος με σημασιολογική οντολογία για κάθετη ολοκλήρωση περιβάλλοντος παραγωγής, έμφαση στο επίπεδο ελέγχου παραγωγής." Thesis, 2009. http://hdl.handle.net/10889/4957.

Full text
Abstract:
This dissertation focuses on and presents an integrated solution to the problem of vertical enterprise integration, that is, the transparent integration of applications and systems that may run at different levels of the classical hierarchy of the industrial/manufacturing environment, from the enterprise level, where the Enterprise Resource Planning (ERP) system runs, down to the field control level. This hierarchy comprises the ERP, field control, and device layers. The industrial environment is characterized by particular complexity and is highly heterogeneous; as a result, any attempt to modify existing production processes is particularly difficult. The proposed solution is a middleware that creates the necessary infrastructure for a more flexible and intelligent industrial environment, combining standards with established and emerging technologies to address two contradicting requirements: integration and flexibility.
APA, Harvard, Vancouver, ISO, and other styles
43

Rockwell, Justin A. "A Semantic Framework for Reusing Decision Making Knowledge in Engineering Design." 2009. https://scholarworks.umass.edu/theses/329.

Full text
Abstract:
A semantic framework to improve automated reasoning, retrieval, reuse and communication of engineering design knowledge is presented in this research. We consider design to be a process involving a sequence of decisions informed by the current state of information. As such, the information model developed is structured to reflect the conceptualizations of engineering design decisions with a particular emphasis on semantically capturing design rationale. Based on a description logic formalism, the information model was implemented using the Web Ontology Language (OWL), which provides a semantically rich and sufficiently broad specification of design decisions capable of supporting the application of any specific decision-making method. Through this approach knowledge reuse is achieved by communicating design rationale and facilitating semantic-based retrieval of knowledge. A case study is presented to illustrate three key features of the approach: 1) seamless integration of separate modular domain ontologies and instance knowledge related to engineering design that are needed to support decision making, 2) the explicit documentation of design rationale through design decisions, and 3) the application of an automated method for matching and retrieving stored knowledge. The automated retrieval method is implemented using the Semantic Web Rule Language (SWRL) and serves as an example of the type of reasoning services that can easily be achieved by formally and semantically representing design knowledge.
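A hypothetical rdflib sketch of the same pattern: a design decision is recorded with its rationale and then retrieved with a SPARQL query scoped to a requirement. The ex: namespace and the class and property names are invented and are not the OWL/SWRL models of the thesis.

```python
# Hypothetical encoding and retrieval of a design decision with its rationale.
from rdflib import Graph, Namespace, RDF, Literal

EX = Namespace("http://example.org/design#")
g = Graph()
g.bind("ex", EX)

g.add((EX.Decision12, RDF.type, EX.DesignDecision))
g.add((EX.Decision12, EX.selectsAlternative, EX.AluminiumHousing))
g.add((EX.Decision12, EX.affectsRequirement, EX.WeightLimit))
g.add((EX.Decision12, EX.hasRationale,
       Literal("Aluminium meets the weight limit at acceptable cost")))

q = """
PREFIX ex: <http://example.org/design#>
SELECT ?decision ?rationale WHERE {
    ?decision a ex:DesignDecision ;
              ex:affectsRequirement ex:WeightLimit ;
              ex:hasRationale ?rationale .
}
"""
for row in g.query(q):
    print(row.decision, "-", row.rationale)
```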
APA, Harvard, Vancouver, ISO, and other styles
44

Zaza, Imad. "Ontological knowledge-base for railway control system and analytical data platform for Twitter." Doctoral thesis, 2018. http://hdl.handle.net/2158/1126141.

Full text
Abstract:
The scope of this thesis is railway signaling and Social Media Analysis (SMA). With regard to the first theme, an investigation into the domain of railway signaling was conducted, the objectives of the research were defined, namely the development and verification of an ontological model for the management of railway signaling, and finally the results and inefficiencies were discussed. As for SMA, the state of the art of SMA tools was studied and discussed, including Twitter Vigilance, developed within the DISIT laboratory at the University of Florence. A port of the analysis to a distributed architecture was proposed, highlighting the problems associated with migrating single-host applications to distributed architectures and possible mitigations.
APA, Harvard, Vancouver, ISO, and other styles
45

HASSAN, Ali. "Ajouter de l'information spatiale aux modèles de composant logiciel - l'effet de localisation." Phd thesis, 2012. http://tel.archives-ouvertes.fr/tel-00785897.

Full text
Abstract:
Highly distributed environments (HDEs) are deployment environments that include powerful and robust machines in addition to resource-constrained and mobile devices such as laptops, personal digital assistants (PDAs), smart-phones, GPS devices, sensors, etc. Developing software for HDEs is fundamentally different from software development for central systems and stable distributed systems; this argument is discussed in depth throughout this dissertation. HDE applications are challenged by two problems: unreliable networks, and heterogeneity of hardware and software. Both challenges need careful handling, since the system must continue functioning and delivering the expected QoS. This dissertation is a direct response to these challenges. Its contribution is the cloud component model and its related formal language and tools. This is the general title; to make the contribution clear, we present it in the following detailed form: (1) We propose a paradigm shift from distribution transparency to localization acknowledgment as the first-class concern. (2) To achieve this objective, we propose a novel component model called cloud component (CC). (3) We propose a new approach to assembling CCs using a systematic methodology that maintains the properties of the CC model. (4) A cloud component development process and a cloud component based systems development process. (5) Location modeling and advanced localization for HDEs, which are the pivotal key of our contribution. (6) A formal language to model a single CC, CC assembly, the CC development process, and CC based systems. (7) Finally, our fully developed supporting tools: the cloud component management system (CCMS) and the Registry utility. To respond to the challenges posed by HDEs, and to maintain the expected software quality at the user endpoint, we believe a "paradigm shift" is needed from the way software is currently designed and implemented to the new vision to which this dissertation is devoted: a shift from distribution transparency to localization acknowledgment as the first-class concern. The contribution of this thesis has several facets, as explained above, yet these facets are cohesive. Each forms a partial contribution; however, a partial contribution means little in isolation from the overall proposal, and the merit of the overall proposal cannot be grasped by reading one partial contribution alone. The merit of the proposal is evident only when all parts of this work are cohesively organized. Finally, we claim that our proposal spans the entire software development process for HDEs, from requirements to deployment and runtime management.
APA, Harvard, Vancouver, ISO, and other styles
46

Αγγελόπουλος, Παναγιώτης. "Σχεδιασμός και ανάπτυξη διεπαφής πελάτη-εξυπηρετητή για υποστήριξη συλλογισμού σε κατανεμημένες εφαρμογές του σημαντικού ιστού." Thesis, 2009. http://nemertes.lis.upatras.gr/jspui/handle/10889/3740.

Full text
Abstract:
In the past few years, research on the evolution of the World Wide Web (WWW) has moved towards more intelligent and automated ways of discovering and extracting information. The Semantic Web is an extension of the current Web in which information is given well-defined meaning, enabling machines to better process and "understand" the data they currently merely present. For the Semantic Web to function properly, computers must have access to organized collections of information, called ontologies. Ontologies provide a method of representing knowledge on the Semantic Web and can therefore be used by computing systems to perform automated reasoning. To describe and represent the ontologies of the Semantic Web in machine-readable languages, various initiatives have been proposed and are under development, the most important being the Web Ontology Language (OWL). This language has become the basis for knowledge representation on the Semantic Web, owing to its promotion by the W3C and its increasing adoption in related applications. The main tool for developing applications that manage OWL ontologies is the OWL API, which consists of programming libraries and methods that provide a high-level interface for accessing and handling OWL ontologies. The theoretical background that guarantees the expressivity and reasoning power of ontologies is provided by Description Logics, a well-defined, decidable subset of First Order Logic that makes the representation and discovery of knowledge on the Semantic Web possible. Consequently, to discover implicit information, systems based on Description Logics, also called Reasoners, are used; characteristic examples of such tools are FaCT++ and Pellet. This is why both the OWL API and Reasoners are used by proposed models for developing next-generation (Web 3.0) Semantic Web applications, for communicating with and submitting "intelligent" queries to knowledge bases. These models also propose a 3-tier distributed architecture for developing Semantic Web applications. The aim of this diploma thesis is to design and implement a Client-Server interface to support reasoning in distributed Semantic Web applications. The interface consists of two parts. The first provides the files needed to run a Reasoner on a remote machine (Server), so that this machine offers remote reasoning services. The second part (Client) contains files that extend the libraries of the OWL API and give them new capabilities; specifically, they allow an application implemented with the OWL API to use the services offered by a remote Reasoner. Consequently, our interface makes it possible for distributed architectures for Semantic Web applications to adopt the OWL API and Reasoners.
APA, Harvard, Vancouver, ISO, and other styles
47

Pirrò, Giuseppe, and Domenico Talia. "Ontologies and Semantic Interoperability in Distributed Systems." Thesis, 2014. http://hdl.handle.net/10955/409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Sun, Le. "Data stream mining in medical sensor-cloud." Thesis, 2016. https://vuir.vu.edu.au/31032/.

Full text
Abstract:
Data stream mining has been studied in diverse application domains. In recent years, population aging has been stressing national and international health care systems. With the advent of hundreds of thousands of health monitoring sensors, traditional wireless sensor networks and anomaly detection techniques cannot handle the huge amounts of information. Sensor-cloud makes the processing and storage of big sensor data much easier: it extends the Cloud by connecting Wireless Sensor Networks (WSNs) to the cloud through sensor and cloud gateways, which consistently collect and process large amounts of data from sensors located in different areas. In this thesis, I focus on analysing large volumes of medical sensor data streams collected from the Sensor-cloud. To analyse these medical data streams, I propose a medical data stream mining framework targeted at tackling four main challenges ...
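As a generic illustration of stream-based anomaly detection (not the framework developed in the thesis), the sketch below flags readings that deviate strongly from a sliding window of recent values in a simulated sensor stream.

```python
# Minimal sliding-window z-score anomaly detector over a simulated sensor stream.
import random
from collections import deque
from statistics import mean, stdev

def detect(stream, window=20, threshold=3.0):
    recent = deque(maxlen=window)
    for t, x in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                yield t, x              # flag readings far from the recent window
        recent.append(x)

# Simulated heart-rate stream with one injected spike.
random.seed(1)
readings = [72 + random.gauss(0, 1.5) for _ in range(200)]
readings[150] = 140
print(list(detect(readings)))   # the spike at index 150 is reported
```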
APA, Harvard, Vancouver, ISO, and other styles