Dissertations / Theses on the topic 'Ontologies (artificial intelligence)'

Consult the top 50 dissertations / theses for your research on the topic 'Ontologies (artificial intelligence)'.

1

Tamma, Valentina A. M. "An ontology model supporting multiple ontologies for knowledge sharing." Thesis, University of Liverpool, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250548.

2

Gandon, Fabien. "Distributed artificial intelligence and knowledge management : ontologies and multi-agent systems for a corporate semantic web." Nice, 2002. http://www.theses.fr/2002NICE5773.

Abstract:
This work concerns multi-agent systems for the management of a corporate semantic web based on an ontology. It was carried out in the context of the European project CoMMA, focusing on two application scenarios: supporting technology-monitoring activities and assisting the integration of a new employee into the organisation. Three aspects were essentially developed in this work: (a) the design of a multi-agent architecture supporting both scenarios, and the top-down organisational approach followed to identify the societies, roles, and interactions of agents; (b) the construction of the O'CoMMA ontology and the structuring of a corporate memory exploiting Semantic Web technologies; (c) the design and implementation of the sub-societies of agents dedicated to the management of the annotations and of the ontology, and of the protocols underlying these groups of agents, in particular techniques for distributing annotations and queries among the agents.
3

Bate, Andrew. "Consequence-based reasoning for SRIQ ontologies." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:6b35e7d0-199c-4db9-ac8a-7f78256e5fb8.

Abstract:
Description logics (DLs) are knowledge representation formalisms with numerous applications and well-understood model-theoretic semantics and computational properties. SRIQ is a DL that provides the logical underpinning for the semantic web language OWL 2, which is the W3C standard for knowledge representation on the web. A central component of most DL applications is an efficient and scalable reasoner, which provides services such as consistency testing and classification. Despite major advances in DL reasoning algorithms over the last decade, however, ontologies are still encountered in practice that cannot be handled by existing DL reasoners. Consequence-based calculi are a family of reasoning techniques for DLs. Such calculi have proved very effective in practice and enjoy a number of desirable theoretical properties. Up to now, however, they were proposed for either Horn DLs (which do not support disjunctive reasoning), or for DLs without cardinality constraints. In this thesis we present a novel consequence-based algorithm for TBox reasoning in SRIQ - a DL that supports both disjunctions and cardinality constraints. Combining the two features is non-trivial since the intermediate consequences that need to be derived during reasoning cannot be captured using DLs themselves. Furthermore, cardinality constraints require reasoning over equality, which we handle using the framework of ordered paramodulation - a state-of-the-art method for equational theorem proving. We thus obtain a calculus that can handle an expressive DL, while still enjoying all the favourable properties of existing consequence-based algorithms, namely optimal worst-case complexity, one-pass classification, and pay-as-you-go behaviour. To evaluate the practicability of our calculus, we implemented it in Sequoia - a new DL reasoning system. Empirical results show substantial robustness improvements over well-established algorithms and implementations, and performance competitive with closely related work.
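
The saturation idea behind consequence-based calculi can be illustrated far below SRIQ's expressivity. The sketch below is a minimal Python fixpoint classifier for the much simpler EL fragment (normal-form axioms, four completion rules); the encoding and the family example are our own illustration, not the thesis's calculus.

```python
# Minimal consequence-based (saturation) classifier for the EL fragment.
# Normal-form TBox axioms:
#   ("sub", A, B)           : A ⊑ B
#   ("and", A1, A2, B)      : A1 ⊓ A2 ⊑ B
#   ("exists-rhs", A, r, B) : A ⊑ ∃r.B
#   ("exists-lhs", r, A, B) : ∃r.A ⊑ B

def classify(concepts, axioms):
    S = {A: {A} for A in concepts}      # S[A] = derived superclasses of A
    R = set()                           # derived role edges (role, A, B)
    changed = True

    def add(A, B):                      # record A ⊑ B, noting any change
        nonlocal changed
        if B not in S[A]:
            S[A].add(B)
            changed = True

    while changed:                      # saturate under the completion rules
        changed = False
        for ax in axioms:
            if ax[0] == "sub":
                _, X, B = ax
                for A in concepts:
                    if X in S[A]:
                        add(A, B)
            elif ax[0] == "and":
                _, X1, X2, B = ax
                for A in concepts:
                    if X1 in S[A] and X2 in S[A]:
                        add(A, B)
            elif ax[0] == "exists-rhs":
                _, X, r, B = ax
                for A in concepts:
                    if X in S[A] and (r, A, B) not in R:
                        R.add((r, A, B))
                        changed = True
            elif ax[0] == "exists-lhs":
                _, r, X, C = ax
                for (r2, A, B) in list(R):
                    if r2 == r and X in S[B]:
                        add(A, C)
    return S

# Father ⊑ Parent, Parent ⊑ ∃hasChild.Person, ∃hasChild.Person ⊑ Parent
axioms = [("sub", "Father", "Parent"),
          ("exists-rhs", "Parent", "hasChild", "Person"),
          ("exists-lhs", "hasChild", "Person", "Parent")]
print(classify({"Father", "Parent", "Person"}, axioms)["Father"])
# -> {'Father', 'Parent'} (set order may vary)
```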
4

Marinica, Claudia. "Association Rule Interactive Post-processing using Rule Schemas and Ontologies - ARIPSO." PhD thesis, Université de Nantes, 2010. http://tel.archives-ouvertes.fr/tel-00912580.

Abstract:
This thesis is concerned with the merging of two active research domains: Knowledge Discovery in Databases (KDD), more precisely the Association Rule Mining technique, and Knowledge Engineering (KE), with a main interest in the knowledge representation languages developed around the Semantic Web. In Data Mining, the usefulness of the association rule technique is strongly limited by the huge number and the low quality of the delivered rules. Experiments show that rules become almost impossible to use when their number exceeds 100. At the same time, nuggets are often represented by those rare (low-support) unexpected association rules which surprise the user. Unfortunately, the lower the support is, the larger the volume of rules becomes. Thus, it is crucial to help the decision maker with an efficient technique for reducing the number of rules. To overcome this drawback, several methods have been proposed in the literature, such as itemset concise representations, redundancy reduction, filtering, ranking, and post-processing. Even though rule interestingness strongly depends on user knowledge and goals, most of the existing methods are generally based on the data structure. For instance, if the user looks for unexpected rules, all the already-known rules should be pruned; or, if the user wants to focus on a specific family of rules, only this subset of rules should be selected. In this context, we address two main issues: the integration of user knowledge into the discovery process, and interactivity with the user. The first issue requires defining a formalism adapted to expressing user knowledge with accuracy and flexibility, such as ontologies in the Semantic Web. Second, interactivity with the user allows a more iterative mining process in which the user can successively test different hypotheses or preferences and focus on interesting rules. The main contributions of this work can be summarized as follows: (i) A model to represent user knowledge. First, we propose a new rule-like formalism, called Rule Schema, which allows the user to define his/her expectations regarding the rules through ontology concepts. Second, ontologies allow the user to express his/her domain knowledge by means of a highly semantic model. Last, the user can choose, among a set of operators for interactive processing, the one to be applied over each Rule Schema (i.e. pruning, conforming, unexpectedness, ...). (ii) A new post-processing approach, called ARIPSO (Association Rule Interactive Post-processing using rule Schemas and Ontologies), which helps the user to reduce the volume of the discovered rules and to improve their quality. It consists in an interactive process integrating user knowledge and expectations by means of the proposed model. At each step of ARIPSO, the interactive loop allows the user to change the provided information and to reiterate the post-processing phase, which produces new results. (iii) The implementation in post-processing of the proposed approach. The developed tool is complete and operational, and it implements all the functionalities described in the approach. It also makes the connection between different elements, like the set of rules and rule schemas stored in PMML/XML files, and the ontologies stored in OWL files and reasoned over by the Pellet reasoner. (iv) An adapted implementation without post-processing, called ARLIUS (Association Rule Local mining Interactive Using rule Schemas), consisting in an interactive local mining process guided by the user. It allows the user to focus on interesting rules without the need to extract all of them, and without a minimum support limit. In this way, the user may explore the rule space incrementally, a small amount at each step, starting from his/her own expectations and discovering the related rules. (v) An experimental study analysing the efficiency of the approach and the quality of the discovered rules. For this purpose, we used a large real-life questionnaire database concerning customer satisfaction. For ARIPSO, the experimentation was carried out in complete cooperation with the domain expert. For different scenarios, from an input set of nearly 400 thousand association rules, ARIPSO filtered between 3 and 200 rules validated by the expert. Clearly, ARIPSO allows the user to significantly and efficiently reduce the input rule set. For ARLIUS, we experimented with different scenarios over the same questionnaire database and obtained reduced sets of rules (fewer than 100) with very low support.
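
As a rough illustration of the Rule Schema filtering described in (i) and (ii), the toy sketch below applies a 'conforming' and an 'unexpectedness' operator to association rules, using an ontology that maps concepts to the items they cover. The grocery ontology, the rules, and the operator semantics are simplified stand-ins, not the actual ARIPSO implementation (which works over PMML/XML rule files and OWL ontologies reasoned over by Pellet).

```python
# A toy version of rule-schema filtering in the spirit of ARIPSO.
# Ontology: each concept maps to the set of database items it covers.
ONTO = {
    "DairyProduct": {"milk", "butter", "cheese"},
    "Bread":        {"baguette", "rye_bread"},
}

def covers(concept, item):
    return item in ONTO.get(concept, ())

def matches(schema_part, rule_part):
    # every concept in the schema must cover at least one item of the rule
    return all(any(covers(c, i) for i in rule_part) for c in schema_part)

def conforming(rules, schema):
    """Keep rules that instantiate the schema on both sides."""
    ante_c, cons_c = schema
    return [r for r in rules if matches(ante_c, r[0]) and matches(cons_c, r[1])]

def unexpected_consequent(rules, schema):
    """Keep rules whose antecedent fits the schema but whose consequent does not."""
    ante_c, cons_c = schema
    return [r for r in rules if matches(ante_c, r[0]) and not matches(cons_c, r[1])]

rules = [({"milk"}, {"baguette"}),      # dairy -> bread
         ({"milk"}, {"beer"})]          # dairy -> something else
schema = ({"DairyProduct"}, {"Bread"})
print(conforming(rules, schema))            # [({'milk'}, {'baguette'})]
print(unexpected_consequent(rules, schema)) # [({'milk'}, {'beer'})]
```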
5

Marinica, Claudia. "Association Rule Interactive Post-processing using Rule Schemas and Ontologies : aripso." PhD thesis, Nantes, 2010. https://archive.bu.univ-nantes.fr/pollux/show/show?id=90a57cc4-245f-420d-ac2b-f9ad7929e0f7.

Abstract:
This thesis is concerned with the merging of two active research domains: Knowledge Discovery in Databases, more precisely the Association Rule Mining technique, and Knowledge Engineering, with its representation languages for the Semantic Web. The usefulness of the association rule technique is strongly limited by the huge number and the low quality of the delivered rules. To overcome this drawback, several methods have been proposed in the literature, such as itemset concise representations, redundancy reduction, filtering, ranking, and post-processing, and most of them are based on the data structure. However, rule interestingness strongly depends on user knowledge and goals. In this context, it is crucial to help the user with an efficient technique to reduce the number of rules while keeping the interesting ones. This work addresses two main issues: the integration of user knowledge into the discovery process and interactivity with the user. The first issue requires an accurate and flexible formalism to express user knowledge, such as ontologies in the Semantic Web. The second one calls for a more iterative mining process allowing the user to explore the rule space incrementally, focusing on interesting rules. The main contributions of this work can be summarized as follows: (i) A model to represent user knowledge. First, we propose to represent user domain knowledge by means of ontologies. Second, we develop a new formalism, called "Rule Schema", which allows the user to define his/her expectations through ontology concepts. Last, we offer the user a set of "mining operators" to be applied over Rule Schemas. (ii) A new post-processing approach, ARIPSO. It allows the user to reduce the volume of the discovered rules by keeping only the interesting ones. ARIPSO is an interactive process integrating user knowledge by means of the proposed model. At each step, the interactive loop allows the user to change the provided information and to reiterate the post-processing phase. (iii) The implementation in post-processing of ARIPSO. The developed tool is complete and operational, and it implements all the functionalities described in the approach. An alternative implementation, without post-processing, was also proposed (ARLIUS). It consists in an interactive local mining process. (iv) An experimental study analysing the efficiency of the approach and the quality of the discovered rules. For this purpose, we used a large real-life database; for ARIPSO, the experimentation was carried out in complete cooperation with the domain expert. From an input set of nearly 400 thousand rules, for different scenarios, ARIPSO filtered between 3 and 200 rules validated by the expert.
6

Gherasim, Toader. "Détection de problèmes de qualité dans les ontologies construites automatiquement à partir de textes." PhD thesis, Université de Nantes, 2013. http://tel.archives-ouvertes.fr/tel-00982126.

Abstract:
The democratisation of the use of ontologies in a wide variety of domains has stimulated the development of approaches offering various degrees of automation of the ontology construction process. However, despite the real interest of these approaches, the results obtained are sometimes of poor quality. The goal of the work presented in this thesis is to contribute to improving the quality of ontologies built automatically from texts. Our main contributions are: (1) a procedure for comparing construction approaches, (2) a typology of the problems that affect ontology quality, and (3) a first study of the automation of problem detection. Our procedure for comparing approaches comprises three complementary steps: (1) comparison based on their degree of completeness and automation; (2) comparison based on their technical and functional characteristics; and (3) experimental comparison of their results against a manually built ontology. The proposed typology organises quality problems along two dimensions: errors versus undesirable situations, and logical versus social aspects. It contains 24 classes of problems that cover, and complete, the problems described in the literature. Regarding automatic detection, we surveyed some of the existing methods for each problem of our typology, highlighted the problems that still seem open, and proposed a heuristic for a problem that appears frequently in our experiments (polysemous labels).
7

Fortineau, Virginie. "Contribution à une modélisation ontologique des informations tout au long du cycle de vie du produit." PhD thesis, Ecole nationale supérieure d'arts et métiers - ENSAM, 2013. http://pastel.archives-ouvertes.fr/pastel-01064598.

Abstract:
This thesis deals with the semantic modelling of industrial information in a "lifecycle" approach to information management. In this kind of approach, removing the lock of information-model interoperability is a sine qua non condition for exchanging information without loss of semantics. Until now, unification methods have been considered, relying on the mutual use of standard models such as the STEP standard. However, unification faces limits in terms of model expressiveness, rigidity, and loss of semantics. To overcome these limits, modelling paradigms are evolving towards inference ontologies, tools of the Semantic Web. In this thesis, we propose a general semantic modelling framework and a methodology for deploying it, both based on the use of inference ontologies. Applying the framework to an industrial case study from nuclear engineering (more precisely, the expression and execution of business rules) then makes it possible to assess the benefits and limits of ontologies as a modelling paradigm. The most important limits we identify concern the Open World Assumption, the lack of an efficient rule language, and the lack of robust implementation tools for large-scale applications. The development of a demonstrator for the industrial case study finally leads towards a hybrid solution, in which ontologies are used locally so as to exploit their various advantages optimally.
8

Aimé, Xavier. "Gradients de prototypicalité, mesures de similarité et de proximité sémantique : une contribution à l'Ingénierie des Ontologies." PhD thesis, Université de Nantes, 2011. http://tel.archives-ouvertes.fr/tel-00660916.

Abstract:
In cognitive psychology, the notion of prototype is central to conceptual representations. In this work, we propose to introduce this notion into Ontology Engineering activities and representation models. The semiotic approach we developed is founded on the three dimensions of a conceptualisation: intension (properties), expression (terms), and extension (instances). In addition to the ontology, it integrates further user-specific knowledge (property weights, corpora, instances). In practice, the "is-a" links, terms, and instances of a concept hierarchy are weighted by means of conceptual, lexical, and extensional prototypicality gradients respectively. Our approach was implemented in an industrial document-management and information-retrieval system for Tennaxia, a legal-watch company in the environmental domain. It led to the development of an ontology of the Health-Safety-Environment domain and of two software applications: TooPrag, dedicated to computing the various prototypicality gradients, and the semantic information retrieval engine Theseus, which exploits them. Finally, we extended our approach to the definition of two new semantic measures inspired by the similarity and proximity laws of perception theory: Semiosem, a similarity measure, and Proxem, a proximity measure.
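
As a hedged illustration of what a conceptual prototypicality gradient can look like, the sketch below scores sub-concepts by the weighted share of the parent concept's properties they exhibit. The bird example and the property weights are invented for illustration; the thesis's actual gradients also cover the lexical and extensional dimensions.

```python
# A toy conceptual-prototypicality gradient: how prototypical is a
# sub-concept of "Bird", given user-supplied property weights?
# (Weights and properties are illustrative, not from the thesis.)
PROPERTY_WEIGHTS = {"flies": 3.0, "has_feathers": 2.0, "sings": 1.0}

BIRD_PROPERTIES = set(PROPERTY_WEIGHTS)            # properties of the concept
SUBCONCEPTS = {
    "Sparrow": {"flies", "has_feathers", "sings"},
    "Penguin": {"has_feathers"},
}

def prototypicality(props):
    # weighted proportion of the parent concept's properties exhibited
    total = sum(PROPERTY_WEIGHTS.values())
    return sum(PROPERTY_WEIGHTS[p] for p in props & BIRD_PROPERTIES) / total

for name, props in SUBCONCEPTS.items():
    print(name, round(prototypicality(props), 2))
# Sparrow 1.0  (carries every weighted property)
# Penguin 0.33 (only has_feathers: 2.0 / 6.0)
```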
9

Chniti, Amina. "Gestion des dépendances et des interactions entre Ontologies et Règles Métier." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00820671.

Abstract:
Given how quickly domain knowledge evolves, the maintenance of information systems has become increasingly difficult to manage. To keep these systems flexible, we propose an approach in which domain knowledge is represented in knowledge representation models rather than being coded, in a programming language, inside the domain application. This ensures greater flexibility of information systems, eases their maintenance, and allows business experts to manage the evolution of their domain knowledge themselves. To this end, we propose an approach that integrates ontologies and business rules. Ontologies model the knowledge of a domain. Rules allow business experts to define and automate, in a controlled natural language, business decisions founded on the knowledge represented in the ontology. The rules therefore depend on the entities modelled in the ontology, and it becomes necessary to study the impact of ontology evolution on the rules. We propose the MDR approach (Model - Detect - Repair), which models ontology changes, detects the consistency problems they may cause in the business rules, and proposes solutions to repair these problems. The proposed approach is oriented towards business experts and is founded on business rule management systems.
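
A toy rendition of the "Detect" step of MDR: after a modelled ontology change, flag the business rules that now reference missing entities. All entity and rule names below are hypothetical.

```python
# A toy "Detect" step in the spirit of the MDR approach: after an
# ontology change, flag business rules that reference entities that
# no longer exist. All names are illustrative.
ontology_before = {"Customer", "GoldCustomer", "Order"}
change = ("delete", "GoldCustomer")              # modelled ontology change

rules = {
    "discount": {"GoldCustomer", "Order"},       # entities each rule uses
    "shipping": {"Customer", "Order"},
}

def apply_change(entities, change):
    op, concept = change
    return entities - {concept} if op == "delete" else entities

def detect_broken(rules, entities_after):
    return {name: used - entities_after          # dangling references
            for name, used in rules.items() if used - entities_after}

after = apply_change(ontology_before, change)
print(detect_broken(rules, after))   # {'discount': {'GoldCustomer'}}
```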
10

Gong, Jian, and 龔劍. "Managing uncertainty in schema matchings." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B46076116.

11

Stefanoni, Giorgio. "Evaluating conjunctive and graph queries over the EL profile of OWL 2." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:232978e9-90a2-41cc-afd5-319518296894.

Abstract:
OWL 2 EL is a popular ontology language that is based on the EL family of description logics and supports regular role inclusions, axioms that can capture compositional properties of roles such as role transitivity and reflexivity. In this thesis, we present several novel complexity results and algorithms for answering expressive queries over OWL 2 EL knowledge bases (KBs) with regular role inclusions. We first focus on the complexity of conjunctive query (CQ) answering in OWL 2 EL and show that the problem is PSpace-complete in combined complexity, the complexity measured in the total size of the input. All the previously known approaches encode the regular role inclusions using finite automata that can be worst-case exponential in size, and thus are not optimal. In our PSpace procedure, we address this problem by using a novel, succinct encoding of regular role inclusions based on pushdown automata with a bounded stack. Moreover, we strengthen the known PSpace lower complexity bound and show that the problem is PSpace-hard even if we consider only the regular role inclusions as part of the input and the query is acyclic; thus, our algorithm is optimal in knowledge base complexity, the complexity measured in the size of the KB, as well as for acyclic queries. We then study graph queries for OWL 2 EL and show that answering positive, converse-free conjunctive graph queries is PSpace-complete. Thus, from a theoretical perspective, we can add navigational features to CQs over OWL 2 EL without an increase in complexity. Finally, we present a practicable algorithm for answering CQs over OWL 2 EL KBs with only transitive and reflexive composite roles. None of the previously known approaches target transitive and reflexive roles specifically, and so they all run in PSpace and do not provide a tight upper complexity bound. In contrast, our algorithm is optimal: it runs in NP in combined complexity and in PTime in KB complexity. We also show that answering CQs is NP-hard in combined complexity if the query is acyclic and the KB contains one transitive role, one reflexive role, or nominals (concepts containing precisely one individual).
12

Silva, Eunice Palmeira da. "Classificação de informação usando ontologias." Universidade Federal de Alagoas, 2006. http://repositorio.ufal.br/handle/riufal/852.

Abstract:
Despite the positive aspects of the Internet and the potential it offers, a problem remains: finding the needed pieces of information among the deluge of documents available on the web. Tools are still lacking that can semantically treat the information contained in documents whose structure is concerned only with data presentation. The MASTER-Web system solves the problem of integrated extraction of content-pages that belong to classes forming a cluster. In this context, we propose the extension of this tool to the classification of scientific articles based on ontologies. To achieve this goal, an ontology for the Artificial Intelligence domain was constructed and rule-based classification strategies were adopted. The approach presented here employs this ontology and textual classification techniques to extract useful pieces of information from the articles in order to infer which themes they address. This combination led to significant results: for example, the system is able to identify the specific subdivisions of AI addressed in a text and draw conclusions, correctly distinguishing the themes of an article from those that are only briefly mentioned. The application of simple techniques together with a detailed ontology leads to promising classification results, independently of the document structure, offering an efficient and plausible solution.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
13

Armas, Romero Ana. "Ontology module extraction and applications to ontology classification." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:4ec888f4-b7c0-4080-9d9a-3c46c91f67e3.

Abstract:
Module extraction is the task of computing a (preferably small) fragment M of an ontology O that preserves a class of entailments over a signature of interest ∑. Existing practical approaches ensure that M preserves all second-order entailments of O over ∑, which is a stronger condition than is required in many applications. In the first part of this thesis, we propose a novel approach to module extraction which, based on a reduction to a datalog reasoning problem, makes it possible to compute modules that are tailored to preserve only specific kinds of entailments. This leads to obtaining modules that are often significantly smaller than those produced by other practical approaches, as shown in an empirical evaluation. In the second part of this thesis, we consider the application of module extraction to the optimisation of ontology classification. Classification is a fundamental reasoning task in ontology design, and there is currently a wide range of reasoners that provide this service. Reasoners aimed at so-called lightweight ontology languages are much more efficient than those aimed at more expressive ones, but they do not offer completeness guarantees for ontologies containing axioms outside the relevant language. We propose an original approach to classification based on exploiting module extraction techniques to divide the workload between a general purpose reasoner and a more efficient reasoner for a lightweight language in such a way that the bulk of the workload is assigned to the latter. We show how the proposed approach can be realised using two particular module extraction techniques, including the one presented in the first part of the thesis. Furthermore, we present the results of an empirical evaluation that shows that this approach can lead to a significant performance improvement in many cases.
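
For intuition only, the sketch below implements a coarse syntactic module extractor: it repeatedly pulls in any axiom whose signature overlaps the growing signature of interest. This over-approximates locality-based modules and is much weaker than the datalog-based technique proposed in the thesis.

```python
# A coarse syntactic module extractor: pull in any axiom that mentions
# a symbol already in the signature, then grow the signature with that
# axiom's symbols, until a fixpoint is reached.
def extract_module(axioms, seed_signature):
    module, sig = set(), set(seed_signature)
    changed = True
    while changed:
        changed = False
        for i, symbols in enumerate(axioms):
            if i not in module and symbols & sig:
                module.add(i)
                sig |= symbols
                changed = True
    return module

# Each axiom is abstracted to its signature (set of symbols).
axioms = [{"Heart", "Organ"},          # 0: Heart ⊑ Organ
          {"Organ", "BodyPart"},       # 1: Organ ⊑ BodyPart
          {"Car", "Vehicle"}]          # 2: Car ⊑ Vehicle
print(extract_module(axioms, {"Heart"}))   # {0, 1}
```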
14

Rickels, Christopher A. "Inherited Ontologies and the Relations between Philosophy of Mind and the Empirical Cognitive Sciences." University of Toledo / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1365012314.

15

Lemaignan, Séverin. "Ancrer l'interaction: Gestion des connaissances pour la robotique interactive." PhD thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00728775.

Abstract:
Grounding interaction: knowledge management for interactive robotics. With the development of cognitive robotics, the need for advanced tools to represent, manipulate, and reason over the knowledge acquired by a robot has clearly emerged. But storing and manipulating knowledge first requires clarifying what knowledge means for a robot, and how it can be represented in a machine-intelligible way. This work first endeavours to identify, in a systematic way, the knowledge-representation needs of modern robotic applications, in the specific context of service robotics and human-robot interaction. We propose an original typology of the desirable characteristics of knowledge representation systems, supported by a detailed state of the art of the existing tools in our community. We then present in depth ORO, a particular instantiation of a knowledge representation and manipulation system, designed and implemented during the preparation of this thesis. We detail the internal workings of the system as well as its integration into several complete robotic architectures. Particular attention is given to the modelling of perspective-taking in the context of interaction, and to its interpretation in terms of theory of mind. The third part of the study focuses on an important application of knowledge representation systems in this human-robot interaction context: the processing of situated dialogue. Our approach and the algorithms leading to the interactive grounding of unconstrained verbal communication are presented, followed by several experiments conducted at the Laboratoire d'Analyse et d'Architecture des Systèmes (CNRS, Toulouse) and at the Intelligent Autonomous Systems group of the Technical University of Munich. We conclude this thesis with a number of considerations on the viability and importance of explicitly managing agents' knowledge, and with a reflection on the elements still missing to achieve "human-level" robotics.
16

Coursey, Kino High. "An Approach Towards Self-Supervised Classification Using Cyc." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5470/.

Abstract:
Due to the long duration required to perform manual knowledge entry by human knowledge engineers it is desirable to find methods to automatically acquire knowledge about the world by accessing online information. In this work I examine using the Cyc ontology to guide the creation of Naïve Bayes classifiers to provide knowledge about items described in Wikipedia articles. Given an initial set of Wikipedia articles the system uses the ontology to create positive and negative training sets for the classifiers in each category. The order in which classifiers are generated and used to test articles is also guided by the ontology. The research conducted shows that a system can be created that utilizes statistical text classification methods to extract information from an ad-hoc generated information source like Wikipedia for use in a formal semantic ontology like Cyc. Benefits and limitations of the system are discussed along with future work.
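
A small sketch of the training-set construction this describes, using scikit-learn's Naïve Bayes: for a target ontology category, positives come from articles about the category and negatives from sibling categories. The categories and snippet texts below are invented stand-ins for Cyc terms and Wikipedia articles.

```python
# A toy version of ontology-guided classifier training: for a target
# category, the ontology supplies positive examples (articles about
# the category) and negatives (articles from sibling categories).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# ontology-derived training sets for the category "Musician"
positives = ["guitarist released a new album",
             "the singer toured with her band"]
negatives = ["the senator passed a new bill",        # sibling: Politician
             "parliament debated the budget"]

vec = CountVectorizer()
X = vec.fit_transform(positives + negatives)
y = [1, 1, 0, 0]                                     # 1 = Musician
clf = MultinomialNB().fit(X, y)

test = vec.transform(["album review of the band's tour"])
print(clf.predict(test))        # [1] -> classified under Musician
```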
17

Chahuara, Pedro. "Contrôle intelligent de la domotique à partir d'informations temporelles multisources imprécises et incertaines." PhD thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00956372.

Abstract:
A smart home is a residence equipped with information technology that assists its inhabitants in the various situations of domestic life, trying to manage their comfort and safety optimally by acting on the house. The detection of abnormal situations is one of the essential points of a home monitoring system. Such situations can be detected by analysing the primitives generated by the audio processing stages and by the sensors of the flat. For example, detecting cries and dull sounds (a heavy object falling) within a short time interval makes it possible to infer the occurrence of a fall. The goal of this thesis is the realisation of an intelligent controller connected to all the devices of the house, able to react to the inhabitant's requests (through voice commands) and to recognise risk or distress situations. To achieve this goal, it is necessary to formally represent and reason over information, most often temporal, at different levels of abstraction. The main challenge is handling the uncertainty, imprecision, and incompleteness that characterise information in this application domain. Moreover, the decisions taken by the controller must take into account the context in which an order is given, which places us in context-aware computing. The context is composed of high-level information such as location, the activity being carried out, and the period of the day. The research presented in this manuscript can be divided into three main axes: the realisation of inference methods to acquire context information (in particular, the inhabitant's location and current activity) from uncertain information; the representation of knowledge about the environment and risk situations; and finally decision-making from contextual information. The last part of the manuscript reports the results of the validation of the proposed methods through evaluations conducted on the Domus experimental platform.
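
The cry-plus-thud example above reduces to a small temporal-window rule. The sketch below is illustrative only (the threshold and event labels are invented) and ignores the uncertainty handling that is central to the thesis.

```python
# A toy version of the fall-detection inference mentioned above:
# a cry and a thud arriving within a short window suggest a fall.
WINDOW_S = 5.0

def detect_fall(events):
    """events: list of (timestamp_seconds, label) from audio analysis."""
    cries = [t for t, lab in events if lab == "cry"]
    thuds = [t for t, lab in events if lab == "thud"]
    return any(abs(tc - tt) <= WINDOW_S for tc in cries for tt in thuds)

stream = [(12.0, "door"), (40.2, "thud"), (42.9, "cry"), (80.0, "tv")]
print(detect_fall(stream))   # True: thud and cry 2.7 s apart
```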
18

Rocher, Swan. "Querying existential rule knowledge bases : decidability and complexity." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT291/document.

Abstract:
In this thesis we investigate the issue of querying knowledge bases composed of data and general background knowledge, called an ontology. Ontological knowledge can be represented under different formalisms, and we consider here a fragment of first-order logic called existential rules (also known as tuple-generating dependencies and Datalog+/-). The fundamental entailment problem at the core of this thesis asks whether a conjunctive query is entailed by an existential rule knowledge base. General existential rules are highly expressive, however at the cost of undecidability. Various restrictions on sets of rules have been proposed to regain the decidability of the entailment problem. Our contribution is two-fold. First, we propose a new tool that allows us to unify and extend most of the known existential rule classes that rely on acyclicity conditions to tame infinite forward chaining, without increasing the complexity of acyclicity recognition. Second, we study the compatibility of known decidable rule classes with a frequently required modelling construct, namely transitivity of binary relations. We help clarify the picture of negative and positive results on this question, and provide a technique to safely combine transitivity with one of the simplest, yet useful, decidable rule classes, namely linear rules.
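
The forward chaining ("chase") that these acyclicity conditions are meant to tame can be sketched directly: rules with existential head variables invent fresh nulls, so chaining need not terminate. Below is a minimal bounded semi-oblivious chase in Python; the uppercase-variable convention and the example rule are our own.

```python
# A naive bounded "chase" for existential rules. Variables are
# uppercase by convention; existential head variables get fresh nulls.
import itertools

fresh = itertools.count()

def match(body, facts, subst=None):
    """Yield substitutions mapping the body's variables to constants."""
    subst = subst or {}
    if not body:
        yield dict(subst)
        return
    pred, args = body[0]
    for fpred, fargs in facts:
        if fpred != pred or len(fargs) != len(args):
            continue
        s, ok = dict(subst), True
        for a, f in zip(args, fargs):
            if a.isupper():                     # variable
                if s.setdefault(a, f) != f:
                    ok = False
                    break
            elif a != f:                        # constant mismatch
                ok = False
                break
        if ok:
            yield from match(body[1:], facts, s)

def chase(facts, rules, max_rounds=3):
    facts, fired = set(facts), set()
    for _ in range(max_rounds):                 # bounded: may not terminate
        new = set()
        for ri, (body, head) in enumerate(rules):
            for s in match(body, facts):
                key = (ri, tuple(sorted(s.items())))
                if key in fired:                # fire each trigger once
                    continue
                fired.add(key)
                for pred, args in head:
                    for a in args:              # invent nulls for existentials
                        if a.isupper() and a not in s:
                            s[a] = f"_n{next(fresh)}"
                    new.add((pred, tuple(s.get(a, a) for a in args)))
        if new <= facts:
            break
        facts |= new
    return facts

# Person(X) -> hasParent(X, Y), Person(Y)   with Y existential
rules = [([("Person", ("X",))],
          [("hasParent", ("X", "Y")), ("Person", ("Y",))])]
print(sorted(chase({("Person", ("alice",))}, rules, max_rounds=2)))
```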
19

Ferré, Arnaud. "Représentations vectorielles et apprentissage automatique pour l’alignement d’entités textuelles et de concepts d’ontologie : application à la biologie." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS117/document.

Abstract:
The considerable increase in the quantity of textual data makes it difficult today to analyse them without the assistance of tools. A text written in natural language is unstructured data, i.e. it cannot be interpreted by a specialized computer program, without which the information in texts remains largely under-exploited. Among the tools for automatic information extraction, we are interested in automatic text interpretation methods for the entity normalization task, which consists in automatically matching entity mentions in text to concepts in a reference terminology. To accomplish this task, we propose a new approach that aligns two types of vector representations of entities, each capturing part of their meaning: word embeddings for textual mentions, and concept embeddings for concepts, designed specifically for this work. The alignment between the two is learned by supervised learning. The developed methods have been evaluated on a reference dataset from the biological domain, for which they now represent the state of the art. These methods are integrated into a natural language processing software suite, and the code is freely shared.
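
A minimal sketch of the supervised alignment idea, assuming a linear map trained by least squares from mention embeddings to concept embeddings, followed by nearest-neighbour normalization. The 2-D vectors and concept names are toy values; the thesis's actual model may differ.

```python
# Learn a linear map W sending mention embeddings onto concept
# embeddings, then normalize a new mention to its nearest concept.
import numpy as np

# training pairs: mention embedding -> embedding of its gold concept
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])   # mention vectors
C = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # gold concept vectors

W, *_ = np.linalg.lstsq(X, C, rcond=None)            # least-squares map

concepts = {"Bacterium": np.array([1.0, 0.0]),
            "Habitat":   np.array([0.0, 1.0])}

def normalize(mention_vec):
    projected = mention_vec @ W
    return min(concepts, key=lambda c: np.linalg.norm(concepts[c] - projected))

print(normalize(np.array([0.85, 0.15])))   # Bacterium
```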
20

Ben, Messaoud Montassar. "SemCaDo : une approche pour la découverte de connaissances fortuites et l'évolution ontologique." PhD thesis, Université de Nantes, 2012. http://tel.archives-ouvertes.fr/tel-00716128.

Abstract:
In response to the growing need to reuse existing knowledge when learning causal Bayesian networks, the semantic knowledge contained in domain ontologies is an excellent alternative for assisting the causal discovery process with minimum cost and effort. In this context, this thesis focuses on the crossing-over between causal Bayesian networks and ontologies and lays the theoretical foundations of a cyclic approach integrating the two formalisms in an interchangeable way. First, the semantic knowledge contained in domain ontologies is integrated to anticipate the best experiments through a serendipitous strategy (which, as its name suggests, relies on the unexpected to produce the most impressive results). Indeed, semantic knowledge may include causal relations in addition to the hierarchical structure. So instead of repeating efforts already made by ontology designers and editors, we propose to reuse the (semantically) causal relations by adopting them as prior knowledge. These relations are then integrated into the process of learning a (partially) causal structure from observational data. To complete the orientation of the causal graph, we are able to actively intervene on the system under study. We also present a decision strategy based on the computation of semantic distances to guide the causal discovery process and commit to unexplored paths. The idea stems mainly from the fact that the closest concepts are often the most studied ones. We therefore propose to strengthen the ability of computers to provide flashes of insight by favouring experiments on the most distant concepts with respect to the hierarchical structure. The second, complementary direction concerns an enrichment process by which it becomes possible to reuse these causal discoveries and support the evolving character of the ontology. An experimental study was conducted using genomic data concerning Saccharomyces cerevisiae and the Gene Ontology to show the potential of the SemCaDo approach in domains where experiments are generally very expensive, complex, and tedious.
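
The distance-based experiment-selection heuristic can be illustrated with a simple edge-counting distance through the lowest common ancestor of two concepts; experiments on the most distant pair are favoured. The small hierarchy below is invented for illustration.

```python
# Favour experiments on concept pairs that are far apart in the
# hierarchy (edge counting through the lowest common ancestor).
PARENT = {"Apoptosis": "CellDeath", "CellDeath": "BioProcess",
          "Transcription": "GeneExpression", "GeneExpression": "BioProcess",
          "BioProcess": None}

def ancestors(c):
    out = []
    while c is not None:
        out.append(c)
        c = PARENT[c]
    return out

def distance(a, b):
    pa, pb = ancestors(a), ancestors(b)
    common = next(x for x in pa if x in pb)      # lowest common ancestor
    return pa.index(common) + pb.index(common)

pairs = [("Apoptosis", "CellDeath"), ("Apoptosis", "Transcription")]
print(max(pairs, key=lambda p: distance(*p)))   # the most distant pair
```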
21

Flycht-Eriksson, Annika. "Design and use of ontologies in information-providing dialogue systems /." Linköping : Univ, 2004. http://www.ep.liu.se/diss/science_technology/08/74/index.html.

22

Zarebski, David. "Ontologie naturalisée et ingénierie des connaissances." Thesis, Paris 1, 2018. http://www.theses.fr/2018PA01H232/document.

Abstract:
"What do I need to know about something, minimally, in order to know it?" It is no wonder that such a general, hard-to-grasp, riddle-like question remained the exclusive domain of a single discipline for centuries: Philosophy. In this context, stating criteria able to distinguish the primitive components of reality (the so-called "furniture of the world") and their relations amounts to producing an Ontology. This work investigates the curious, seemingly innocuous historical turn constituted by the emergence of similar questions in two related fields: Artificial Intelligence and Knowledge Engineering. We show here that the way these disciplines apply an ontological methodology to cognition or knowledge representation is not a mere analogy but raises a number of relevant questions and challenges, from both an applied and a speculative point of view. More specifically, we suggest that some of the technical answers to the issues raised by Big Data (the multiplication and diversification of online data) invite us to revisit many traditional philosophical questions concerning the role of language and common-sense reasoning in thought, or the existence of a mind-independent structure of reality.
23

Weinert, Luciana Vieira Castilho. "Ontologias e técnicas de inteligência artificial aplicadas ao diagnóstico em fisioterapia neuropediátrica." Universidade Tecnológica Federal do Paraná, 2010. http://repositorio.utfpr.edu.br/jspui/handle/1/1331.

Abstract:
CAPES
This thesis proposes a new methodology based on ontologies and artificial intelligence techniques to support the diagnosis and the teaching-learning process in neuropediatric physiotherapy. In this area, standardized and objective measurements to quantify a diagnosis are hard to find. The diagnosis is limited to stating in which months of normal motor development a patient can be classified, based only on the subjective experience of the physiotherapist. In this work, formal methods for knowledge acquisition and representation were used. Divergences of opinion between experts were systematically treated, and the acquired knowledge was represented as an ontology. This ontology generated a set of classification rules from which three different approaches for diagnosis were developed: a crisp expert system, a fuzzy system, and an approach based on deterministic models. The crisp expert system did not perform adequately on the problem, and the fuzzy approach was not adequate either. The last approach proved adequate for classifying a given patient with different degrees of membership to several months of motor development. Results suggest that it is capable of objectively simulating the diagnosis of human experts on real-world cases in 90% of cases. An extension of this work is the use of the developed ontology in a tool to support the teaching-learning process in neuropediatric physiotherapy. This approach produced satisfactory results: it was tested by both professionals and students, who found it promising as a multimedia educational resource, and 85% of the professionals interviewed strongly agreed on the ontology's potential as a tool for the teaching-learning process. Overall, the main contributions of this thesis are: efficient knowledge management in a domain characterised by weak standardization and highly subjective expert knowledge; methodologies for supporting the quantification of the diagnosis of a neuropediatric patient; and the development of an ontology-based multimedia tool for educational purposes.
24

Lisena, Pasquale. "Knowledge-based music recommendation : models, algorithms and exploratory search." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS614.

Abstract:
Representing the information that describes music is a complex activity involving different sub-tasks. This thesis manuscript mostly focuses on classical music, researching how to represent and exploit its information. The main goal is the investigation of strategies of knowledge representation and discovery applied to classical music, involving subjects such as knowledge-base population, metadata prediction, and recommender systems. We propose a complete workflow for the management of music metadata using Semantic Web technologies. We introduce a specialised ontology and a set of controlled vocabularies for the different concepts specific to music. Then, we present an approach for converting data, in order to go beyond the librarian practice currently in use, relying on mapping rules and on interlinking with controlled vocabularies. Finally, we show how these data can be exploited. In particular, we study approaches based on embeddings computed on structured metadata, titles, and symbolic music for ranking and recommending music. Several demo applications have been realised for testing the previous approaches and resources.
25

Marroquín, Cortez Roberto Enrique. "Context-aware intelligent video analysis for the management of smart buildings." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCK040/document.

Abstract:
To date, computer vision systems are limited to extracting digital data from what the cameras "see". However, the meaning of what they observe could be greatly enhanced by knowledge of the environment and human skills. In this work, we propose a new approach that cross-fertilizes computer vision with contextual information, based on a semantic modelization defined by an expert. This approach extracts knowledge from images and uses it to perform real-time reasoning according to the contextual information, events of interest, and logic rules. Reasoning with image knowledge allows the system to overcome some problems of computer vision, such as occlusion and missed detections, and to offer services such as people guidance and people counting. The proposed approach is the first step towards developing an "all-seeing" smart building that can automatically react according to its evolving information, i.e., a context-aware smart building. The proposed framework, named WiseNET, is an artificial intelligence (AI) in charge of taking decisions in a smart building (which can be extended to a group of buildings or even a smart city). This AI enables communication between the building itself and its users in a language understandable by humans.
APA, Harvard, Vancouver, ISO, and other styles
26

Carvalheira, Luiz Carlos da Cruz. "Método semi-automático de construção de ontologias parciais de domínio com base em textos." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-10012008-094436/.

Full text
Abstract:
The recent developments related to knowledge management, the semantic web, and the exchange of electronic information through agents have increased the need for ontologies to describe, in a formal way, shared conceptualisations of the most varied domains. For computers and people to work in cooperation, the information they use must have well-defined, shared meanings. Ontologies are enablers of that cooperation. However, ontology construction remains a complex and lengthy knowledge acquisition process, which has hindered the use of this kind of solution on a wider scale. This work presents a method for the semi-automatic construction of ontologies from texts of any domain, extracting the concepts and relations present in those texts. By comparing the relative frequency of the extracted terms with their typical frequency in the language, and by extracting specific linguistic patterns, the method identifies candidate terms for concepts and the relations between them, presents them to an ontologist for validation and, finally, makes the ratified ontology available for publication and use, specifying it in OWL.
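The frequency-based filtering step can be illustrated with a minimal sketch (toy corpora, hypothetical threshold): terms whose relative frequency in the domain texts greatly exceeds their frequency in a reference corpus become concept candidates for the ontologist to validate.

```python
# Minimal sketch of frequency-based candidate term extraction: terms much
# more frequent in domain texts than in a reference corpus become concept
# candidates. Corpora here are toy data.
from collections import Counter

def relative_freq(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

domain = "ontology concept relation ontology domain concept axiom".split()
reference = "the of and a to in ontology is that it".split()

dom_rf, ref_rf = relative_freq(domain), relative_freq(reference)
ratios = {
    t: dom_rf[t] / ref_rf.get(t, 1e-6)   # smoothing for unseen terms
    for t in dom_rf
}
# Keep terms that are, say, 10x more frequent in the domain corpus.
print(sorted(t for t, ratio in ratios.items() if ratio > 10))
```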
APA, Harvard, Vancouver, ISO, and other styles
27

Flycht-Eriksson, (Silvervarg) Annika. "Design and use of ontologies in information-providing dialogue systems." Doctoral thesis, Linköpings universitet, NLPLAB - Laboratoriet för databehandling av naturligt språk, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5007.

Full text
Abstract:
In this thesis, the design and use of ontologies as domain knowledge sources in information-providing dialogue systems are investigated. The research is divided into two parts: theoretical investigations, which have resulted in a requirements specification for the design of ontologies to be used in information-providing dialogue systems, and empirical work on the development of a framework for the use of ontologies in such systems. The framework includes three models: a model for ontology-based semantic analysis of questions; a model for ontology-based dialogue management, specifically focus management and clarifications; and a model for ontology-based domain knowledge management, specifically the transformation of user requests into system-oriented concepts used for information retrieval. It is shown that using ontologies to represent and reason on domain knowledge in dialogue systems has several advantages: a deeper semantic analysis is possible in several modules, and a more natural and efficient dialogue can be achieved. Another important aspect is that it facilitates portability, i.e., the ability to reuse and adapt the dialogue system to new tasks and domains, since the domain-specific knowledge is separated from the generic features of the dialogue system architecture. It also reduces the complexity of the linguistic analysis required across domains.
APA, Harvard, Vancouver, ISO, and other styles
28

Tsatcha, Dieudonné. "Contribution à l'extraction et à la représentation des connaissances de l'environnement maritime : proposition d'une architecture dédiée aux applications de navigation." Thesis, Brest, 2014. http://www.theses.fr/2014BRES0118/document.

Full text
Abstract:
Nowadays, autonomous computer applications are a major concern of scientific research. They were initially intended for decision-support systems in constrained, dynamic environments, commonly called complex environments. Thanks to advances in research, they can now build and derive their own knowledge in order to interact in real time with their environment. However, they face the difficulty of obtaining a faithful model of the real world and of the entities that compose it. One of the main objectives of our research is to capture and model the semantics associated with spatio-temporal entities in order to enrich their expressiveness in GIS and decision-support systems. A dynamic maritime routing service was deployed on the basis of this model, with an algorithm shown to be optimal in terms of memory space and computation time. The captured semantics consist of the affordance and the visual salience of the spatial entity. The knowledge associated with these semantics is then represented by a computational ontology that integrates spatio-temporal approaches. This knowledge is either derived from domain expert know-how or extracted from large volumes of textual data using natural language processing techniques. The proposed computational ontology allowed us to define a dynamic maritime routing algorithm (a function of the events or objects present in the environment) based on an iterative, single-criterion, bidirectional shortest-distance heuristic. The proposed BIDA* algorithm operates on an iterative graph, a conceptualisation of an iterative hexagonal grid covering the navigation area, and also handles several levels of resolution. Still aiming to produce a model as close as possible to the real world, BIDA* was enriched with multi-criteria strategies in order to take the various constraints of maritime navigation into account. The global and local constraints we considered are water depth, navigation distance, and navigation direction. The proposed model thus enriches the cognitive capabilities of users operating in maritime environments and can also be used to build fully autonomous systems exploring these environments. An experimental prototype of intelligent navigation implementing this model and offering a maritime routing service was developed as part of this thesis.
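To give a flavour of the bidirectional search underlying BIDA*, here is a minimal sketch on an axial-coordinate hexagonal grid with unit step costs; the real algorithm adds A* heuristics, multiple resolutions, and multi-criteria costs (depth, distance, heading), and the grid bounds here are hypothetical.

```python
# Minimal sketch, not the thesis's BIDA* itself: bidirectional breadth-first
# search on an axial-coordinate hexagonal grid with unit step costs.
from collections import deque

HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbors(cell, radius=10):
    q, r = cell
    for dq, dr in HEX_DIRS:
        nq, nr = q + dq, r + dr
        if abs(nq) <= radius and abs(nr) <= radius and abs(nq + nr) <= radius:
            yield (nq, nr)

def bidirectional_search(start, goal):
    """Expand frontiers from both ends; stop when they meet."""
    if start == goal:
        return 0
    dist = {start: (0, "fwd"), goal: (0, "bwd")}
    frontier = deque([start, goal])
    while frontier:
        cell = frontier.popleft()
        d, side = dist[cell]
        for nxt in neighbors(cell):
            if nxt not in dist:
                dist[nxt] = (d + 1, side)
                frontier.append(nxt)
            elif dist[nxt][1] != side:       # the two frontiers met
                return d + 1 + dist[nxt][0]
    return None

print(bidirectional_search((-5, 0), (5, 0)))  # -> 10
```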
APA, Harvard, Vancouver, ISO, and other styles
29

Sadoun, Driss. "Des spécifications en langage naturel aux spécifications formelles via une ontologie comme modèle pivot." Phd thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-01060540.

Full text
Abstract:
The development of a system aims to meet requirements. Hence, the success of its realisation rests largely on the requirements specification phase, whose purpose is to describe precisely and unambiguously all the characteristics of the system to be developed. Requirements specifications are the result of a needs analysis involving several parties. They are generally written in natural language (NL) for broader understanding, which can lead to diverse interpretations, since NL texts may contain semantic ambiguities or implicit information. It is therefore not easy to specify a complete and coherent set of requirements, hence the need for formal verification of the resulting specifications. NL specifications are not considered formal and do not allow the direct application of formal verification methods. This observation leads to the need to transform NL specifications into formal specifications, which is the context of this thesis. The main difficulty of such a transformation lies in the size of the gap between NL specifications and formal specifications. The objective of my thesis work is to propose an approach for automatically verifying user requirements specifications, written in natural language and describing the behaviour of a system. To this end, we explored the possibilities offered by a representation model based on a logical formalism. Our contributions consist essentially of three proposals: (1) an ontology in OWL-DL, grounded in description logics, as a pivot representation model linking natural-language specifications and formal specifications; (2) an approach for instantiating the pivot representation model, based on an analysis driven by the semantics of the ontology, which automatically moves from natural-language specifications to their conceptual representation; and (3) an approach exploiting the logical formalism of the ontology to move automatically from the pivot representation model to a formal specification language named Maude.
APA, Harvard, Vancouver, ISO, and other styles
30

Ponciano, Jean-Jacques. "Object detection in unstructured 3D data sets using explicit semantics." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSES059.

Full text
Abstract:
With the evolution of technology and robotics, the possibilities offered by 3D acquisition systems have increased. Nowadays, these systems are used in different domains, for example for autonomous vehicles, rescue robots, and cultural heritage. These application fields often require object recognition from the acquired data, and various methodologies have therefore been investigated to automatically process 3D point cloud data and detect the objects it contains. The best methodologies depend on the context: they are specific to the data to be processed and to the objects to be recognised, which lets them produce the efficient recognition that is essential whatever the application field. However, adapting a methodology to a particular application field or use case limits the flexibility to extend it to other fields. These observations highlight the importance of developing object recognition methodologies specific to a detection context, but also the limitation of existing methods in preserving their capabilities when the detection context changes. An excellent example of a high degree of flexibility in the face of changing contexts is human intelligence and the human ability to design ad hoc methodologies: humans can analyse the context according to their knowledge and combine different features or strategies according to the objective to be achieved. It would therefore be helpful for computer vision tools to integrate elements of artificial intelligence, allowing them to adapt to the context of an application field and to guide the detection process accordingly. This Ph.D. thesis presents a knowledge-based approach to object recognition that can be used whatever the application field. Its architecture is based on semantic technologies, allowing a knowledge-management module to guide the object detection process through a step-by-step procedure that selects, parameterises, and executes algorithms. The detection process is carried out by an artificial intelligence approach that uses explicit knowledge to design a context-dependent object recognition solution. Its strength lies in its adaptability to the context, but also in its ability to analyse and understand a scene and the objects it contains, as well as the specificities of the data to be processed. This understanding is achieved through a self-learning process able to define and validate hypotheses concerning the context, which also enriches the knowledge base and improves the object recognition process. The efficiency of this adaptation capability is demonstrated in four use cases from different application fields. The first use case is the indoor space of a building, used for monitoring purposes. The second lies in the field of archaeology and is represented by ancient ruins containing a terrace house with a watermill. The third is an outdoor scene representing a part of the city of Freiburg in Germany, used for an industrial purpose. Finally, the last use case is an indoor scene acquired with Microsoft's Kinect, used for a robotic purpose.
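The idea of explicit knowledge selecting and parameterising algorithms can be sketched minimally as follows; the context facts, algorithm names, and parameters are hypothetical, not those of the thesis.

```python
# Minimal sketch: context facts from a knowledge base select and
# parameterize a processing algorithm. Facts, algorithms, and parameters
# are illustrative only.
CONTEXT = {"scene": "indoor", "sensor": "kinect", "noise": "high"}

RULES = [
    # (condition on context, algorithm name, parameters)
    (lambda c: c["scene"] == "indoor" and c["noise"] == "high",
     "region_growing", {"smoothness_deg": 5.0, "min_cluster": 500}),
    (lambda c: c["scene"] == "outdoor",
     "ransac_plane", {"distance_threshold": 0.05}),
]

def select_algorithm(context):
    """Return the first algorithm whose knowledge-base condition holds."""
    for condition, name, params in RULES:
        if condition(context):
            return name, params
    raise LookupError("no applicable algorithm for this context")

print(select_algorithm(CONTEXT))  # ('region_growing', {...})
```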
APA, Harvard, Vancouver, ISO, and other styles
31

Oliverio, Vinicius. "Detecção de contradições em um sistema de aprendizado sem fim." Universidade Federal de São Carlos, 2012. https://repositorio.ufscar.br/handle/ufscar/505.

Full text
Abstract:
NELL (Never-Ending Language Learning) is a system that seeks to learn continuously, extracting structured information from unstructured web pages and using the semi-supervised learning paradigm as one of its basic principles. NELL is part of the Read the Web (RTW) project and currently consists of five modules, all working independently; one of these modules is the Rule Learner (RL). The RL is responsible for inducing first-order rules, which the system uses to identify patterns in the knowledge generated by the other four components. These rules are induced and then represented in a syntax based on Horn clauses. Such rules can present contradictions, and in this context the present work proposes to investigate, develop, and implement methods to detect and resolve these contradictions so that the system can learn more efficiently.
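One simple kind of contradiction check over learned facts can be sketched as follows (illustrative categories and facts, not NELL's actual ones): assertions that violate declared mutual exclusivity between categories are flagged for resolution.

```python
# Minimal sketch of one kind of contradiction detection: category
# assertions that violate declared mutual exclusivity.
MUTUALLY_EXCLUSIVE = {("city", "person"), ("animal", "company")}

facts = [
    ("Pittsburgh", "city"),
    ("Pittsburgh", "person"),   # contradicts the assertion above
    ("Jaguar", "animal"),
]

def find_contradictions(facts):
    by_entity = {}
    for entity, category in facts:
        by_entity.setdefault(entity, set()).add(category)
    clashes = []
    for entity, cats in by_entity.items():
        for a, b in MUTUALLY_EXCLUSIVE:
            if a in cats and b in cats:
                clashes.append((entity, a, b))
    return clashes

print(find_contradictions(facts))  # [('Pittsburgh', 'city', 'person')]
```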
APA, Harvard, Vancouver, ISO, and other styles
32

Araújo, Cláudia Josimar Abrão de. "Um modelo para interoperabilidade entre instituições heterogêneas." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-08022013-111002/.

Full text
Abstract:
Interactions between heterogeneous institutions are increasingly required in order to obtain and provide information and services to their internal and external users. This interaction has been sustained mainly by the use of new information and communication technologies. Interoperability between heterogeneous institutions ensures this interaction and provides various benefits, such as using the institutions' legacy platforms while still allowing their systems to interact. However, to make this interoperability possible it is necessary to define common concepts that standardise and guide the interactions between institutions. Through these common concepts, institutions can exchange information with each other while maintaining the independence and particularities of their internal systems. In this work, we propose a Model for Interoperability between Heterogeneous Institutions (MIIH). The interaction rules and, specifically, the interoperability protocols between institutions are specified using JamSession, a platform for the coordination of heterogeneous and distributed software services. The model also defines an architecture based on Institutional Knowledge Artifacts to handle the connections to the institutions' systems. These Institutional Knowledge Artifacts are based on the general concept of Knowledge Artifacts, i.e., "objects that convey and hold usable representation of knowledge". They are recurring architectural patterns observed in the design of interoperability mechanisms connecting heterogeneous institutions and are used as a high-level description of the architecture for a system design. They function as pre-designed architectural patterns that guide and standardise the interactions and, therefore, the organisational and semantic interoperability between institutions. The Institutional Knowledge Artifacts are grounded on an ontology of concepts relevant to the services of these institutions, whose level of abstraction can vary depending on the level of integration the institutions need: the more sophisticated the interaction, the more details must be represented explicitly in the ontology. The implemented Institutional Knowledge Artifacts also communicate with a user interaction layer, based on virtual worlds, to ensure proper communication with these users. Besides the proposed conceptual model, we present, as a result of this work, an example of the use of MIIH in the context of institutions related to cultural heritage (museums, galleries, collectors, etc.). Having recognised that the context of museums is important to society as a whole, we studied in more depth how museums operate and interact with each other and with their users. We identified in this scenario a direct application of our project, since interoperability among museums is vital to the performance of their functions, and interoperability with their users defines the reason for their existence, as reflected in the definition of museum given by UNESCO. This example of use is constructed following the methodology proposed in this work and serves to show the use of our model in the development of a practical application for art institutions and their users.
APA, Harvard, Vancouver, ISO, and other styles
33

Gonçalves, Bernardo Nunes. "An ontological theory of the electrocardiogram with applications." Universidade Federal do Espírito Santo, 2009. http://repositorio.ufes.br/handle/10/6409.

Full text
Abstract:
The fields of medical and bio-informatics are witnessing the application of the discipline of Formal Ontology to the representation of biomedical entities and the (re-)organisation of medical terminologies, also in view of advancing electronic health records (EHR). In this context, the electrocardiogram (ECG) is one of the prominent kinds of biomedical data. As a vital sign, it is an important piece in the composition of today's EHR, as it will likely be in the EHR of the future. This thesis introduces an ontological analysis of the ECG grounded in the Unified Foundational Ontology (UFO) and axiomatised in first-order logic (FOL). With the goal of investigating the phenomena underlying this cardiological exam, we deal with the sub-domains of human heart electrophysiology and anatomy. We then outline an ECG ontology meant to represent what the ECG is on both the patient's and the physician's side. The ontology is implemented in the semantic web technology OWL with its SWRL extension. The ECG Ontology makes use of basic relations standardised in the OBO Relation Ontology for the biomedical domain; in addition, it takes inspiration from the Foundational Model of Anatomy (FMA) and applies the Ontology of Functions (OF). Besides the ECG ontological theory itself, two applications of the ECG Ontology are also presented. The first concerns the off-line integration of ECG data standards, a relevant endeavour for the progress of medical informatics. The second comprises a reasoning-based web system that can be used to support interactive learning in electrocardiography and heart electrophysiology. Overall, we also reflect on the ECG Ontology and its two applications to provide evidence for the benefits achieved by employing methodological principles, in terms of both ontological foundations and ontology engineering, in building a domain ontology.
APA, Harvard, Vancouver, ISO, and other styles
34

Ceccaroni, Luigi. "OntoWEDSS - An Ontology-based Environmental Decision-Support System for the management of Wastewater treatment plants." Doctoral thesis, Universitat Politècnica de Catalunya, 2001. http://hdl.handle.net/10803/6639.

Full text
Abstract:
The contributions of this thesis bridge two disciplines: environmental science (specifically, wastewater management) and computer science (specifically, artificial intelligence). Wastewater management as a discipline operates using a range of approaches and methods, including manual control, on-line automatic control, numerical and non-numerical models, statistical models, and simulation models. The thesis describes interdisciplinary research on artificial intelligence techniques (rule-based reasoning, case-based reasoning, ontologies, and planning) applied to environmental decision-support systems. The architecture of the resulting application, the OntoWEDSS system, augments classic reasoning systems (rule-based and case-based reasoning) with a domain ontology for the management of wastewater treatment plants. The integration of the newly created WaWO ontology gives OntoWEDSS a more flexible management capability. The construction of the OntoWEDSS decision-support system is based on a specific case study, but the system is also of general interest, given that its ontology-underpinned architecture can be applied to any wastewater treatment plant and, at an appropriate level of abstraction, to other environmental domains. OntoWEDSS improves the diagnosis of the state of a treatment plant, provides support for complex wastewater-related problem solving, and facilitates knowledge modelling and reuse by means of the WaWO ontology.
In particular, the following research targets have been achieved: (1) the improvement of the modelling of information about wastewater treatment processes and the clarification of part of the existing terminological confusion in the domain; (2) the incorporation of ontology-modelled microbiological knowledge about the treatment process into the reasoning process; (3) the creation of a decision-support system with three layers (perception, diagnosis, and decision support) that combines knowledge through a novel integration of knowledge-based systems and ontologies, providing better results; (4) the resolution of existing reasoning impasses, achieved using the new microbiological knowledge encoded in the hierarchical structure and relations of the ontology; and (5) the representation of cause-effect relations, through the implementation of a set of relations that enable the ontology to automatically deduce the answers to questions about the wastewater domain.
OntoWEDSS is implemented in the LISP programming language, using Allegro Common LISP. A focused evaluation of the system, based on assessing its capacity to respond to specific problematic situations, has been carried out with good results.
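The hybrid reasoning pattern described above (rules first, cases when the rules reach an impasse) can be sketched minimally as follows; the plant variables, thresholds, and actions are hypothetical, not OntoWEDSS's actual knowledge.

```python
# Minimal sketch of hybrid reasoning: a rule base proposes an action for
# the plant state, and a case base of past situations is consulted when no
# rule fires. Values and actions are illustrative only.
def rule_based(state):
    if state["influent_load"] > 0.8 and state["dissolved_oxygen"] < 2.0:
        return "increase aeration"
    return None  # impasse: no rule applies

CASES = [  # (past state, action that worked)
    ({"influent_load": 0.5, "dissolved_oxygen": 3.1}, "no action"),
    ({"influent_load": 0.9, "dissolved_oxygen": 2.5}, "reduce inflow"),
]

def case_based(state):
    def dist(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a)
    return min(CASES, key=lambda c: dist(c[0], state))[1]

def diagnose(state):
    return rule_based(state) or case_based(state)

print(diagnose({"influent_load": 0.85, "dissolved_oxygen": 2.4}))
```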
APA, Harvard, Vancouver, ISO, and other styles
35

Toscano, Wagner. "Minerador WEB: um estudo sobre mecanismos de descoberta de informações na WEB." Universidade de São Paulo, 2003. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-17122003-150851/.

Full text
Abstract:
The World Wide Web (Web) holds a huge amount and a large diversity of information, which makes it very appealing for people to search it for the information they need. On the other hand, this huge amount of data poses the fundamental problems of how to discover, in an effective way, whether the desired information is present on the Web and how to reach it. Without effective search mechanisms, the use of the Web as a useful source of information becomes very restricted. Another important problem to overcome is the lack of structure of the information on the Web, which makes standard information search methods difficult to apply. This work presents a study of alternative information search techniques, applying several concepts from information retrieval and knowledge representation. More specifically, the objectives are to analyse the efficiency of complementary information search techniques, in particular mechanisms that extract information from explicit fragments of HTML documents and the use of the Naive Bayes method to classify sites into a set of predefined classes, and to analyse the effectiveness of storing information extracted from the Web in a knowledge base (described in first-order logic) that, together with background knowledge, allows answering queries more complex than those possible with expressions based on keywords and logical connectives.
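The Naive Bayes site-classification step can be sketched with scikit-learn as follows; the training data is toy data, and the thesis's actual features and classes differ.

```python
# Minimal sketch: classifying pages into predefined site classes from
# their text with Naive Bayes. Training data is toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

pages = [
    "buy shoes discount shipping cart",
    "theorem proof lemma logic inference",
    "cheap offer sale price order",
    "knowledge representation first order logic",
]
labels = ["shop", "academic", "shop", "academic"]

vec = CountVectorizer()
X = vec.fit_transform(pages)
model = MultinomialNB().fit(X, labels)

print(model.predict(vec.transform(["logic proof of a theorem"])))  # ['academic']
```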
APA, Harvard, Vancouver, ISO, and other styles
36

Pinto, Ig Ibert Bittencourt Santana. "Plataforma para construção de ambientes interativos de aprendizagem baseados em agentes." Universidade Federal de Alagoas, 2006. http://repositorio.ufal.br/handle/riufal/807.

Full text
Abstract:
This dissertation presents the design and development of an agent-based interactive learning environment based on the Mathema model. This model adopts problem-based learning as its pedagogical method, realised in an interaction environment populated by software agents and humans (students and teachers) working together in favour of the human learners. The interaction aims at helping human learners to solve problems in a given knowledge domain, supporting them through the several phases of building a solution, including problem analysis. To this end, the interaction between three agent categories was modelled: the human learner, the teacher, and the artificial tutoring agent. This work proposes a platform for building agent-based interactive learning environments, serving both as a framework for developers and software engineers and as an authoring tool for non-programmer users such as teachers and knowledge engineers. In addition, the platform provides collaboration tools, an ontology-based infrastructure, and a society of agents that assist in domain modelling and problem solving (through artificial intelligence techniques). Two case studies (in the legal and health domains) and initial work in the mathematics domain demonstrate that the proposed platform is feasible.
APA, Harvard, Vancouver, ISO, and other styles
37

Albuquerque, Andréa Corrêa Flôres. "Um Framework conceitual para integrar conhecimento tácito científico." Universidade Federal do Amazonas, 2016. http://tede.ufam.edu.br/handle/tede/5302.

Full text
Abstract:
During the development of OntoBio, a formal biodiversity ontology, it was observed that much of the expert knowledge that was not contained in structured databases, and that would make the ontology more expressive (tacit knowledge), was not represented and was thus ignored. Empirical evidence indicates that this knowledge is essential in helping to generate new scientific knowledge and, consequently, in decision-making processes. In a highly connected environment with massive data availability, the use of ontologies is a recommended solution for enabling knowledge acquisition and generation. More specific issues, such as the representation of tacit scientific knowledge, have not yet been satisfactorily elucidated. To contribute solutions to such questions, it is necessary to investigate critical aspects of knowledge acquisition and representation, the modelling and formalisation of tacit knowledge, and different views on the domain. This research proposes a method to aggregate tacit knowledge to formal ontologies, incorporating semantics and expressivity to support the generation of scientific knowledge. The method comprises the elicitation and formalisation of tacit scientific knowledge about biodiversity and the integration of this knowledge into the structure described in OntoBio.
APA, Harvard, Vancouver, ISO, and other styles
38

Dalmau, Espert J. Luis. "Sistema multiagente para el diseño, ejecución y seguimiento del proceso de planificación estratégica ágil en las organizaciones inteligentes." Doctoral thesis, Universidad de Alicante, 2015. http://hdl.handle.net/10045/54217.

Full text
Abstract:
Since the late 1980s and the beginning of the 21st century, a series of events have together drawn a new panorama in the world of organisations. These new times are characterised by the uncertainty and complexity of the environment in which organisations must develop and interact. Under these conditions, organisations have seen the need to change their model and structure, evolving towards models and structures that guarantee greater participation of the organisation's stakeholder groups, that enable distributed decision-making and that, in addition, give the organisation more flexibility and agility to adapt quickly to the constant changes in the surrounding environment. Knowledge and learning are today the cornerstones of this new model, being the key means through which it is possible to reduce the complexity and uncertainty that characterise this organisational environment. This change entails the need to revise the other processes linked to the traditional organisation in order to adapt them to the new approach. Among these processes, the Organisational Learning process and the Strategic Planning process stand out because of their relation to knowledge and its use for strategic management. This thesis proposes a revision of the strategic planning process that is in line with the new organisational model and that solves the current problems organisations face in carrying it out effectively and efficiently. This revision proposes a model that makes it possible to design the strategic planning process, to execute it in order to obtain the strategic plan and, subsequently, to monitor that plan. Given the link and importance of organisational learning within strategic planning, it is also integrated into the model. Because the strategic plan must nowadays be constantly adjusted to changes in the environment, and because doing so is difficult, the model allows an agile and flexible redesign of the strategic planning process so as to obtain new strategic plans over shorter periods, thus keeping the organisation better synchronised with its environment. At the design level, the proposed model is based on multi-agent technology and uses a multilayer blackboard system to store the information generated during the design and execution of the process and the subsequent monitoring of the strategic plan. This information is described using an ontology that formalises both the steps of the strategic planning process to be carried out and their dependencies, and the information handled in each of them. The combination of the two elements enables participants in the strategic planning process to take part, interact, and generate information and knowledge that serves to carry out each step of the process and to build up experience that can be exploited in future strategic planning processes. This experience is also the key element that, according to the model, can be used to automate the steps of the process and improve decision-making.
In short, the proposed model is a formal, comprehensive, agile, and flexible solution for carrying out the strategic planning process in today's organisations and under the conditions that surround them.
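The multilayer blackboard pattern at the heart of the design can be sketched minimally as follows; the layer names and agents are hypothetical.

```python
# Minimal sketch of a multilayer blackboard: agents read from and post to
# shared layers during strategic planning. Layers and agents are
# illustrative only.
class Blackboard:
    def __init__(self, layers):
        self.layers = {name: [] for name in layers}

    def post(self, layer, item):
        self.layers[layer].append(item)

    def read(self, layer):
        return list(self.layers[layer])

def swot_agent(bb):
    bb.post("analysis", {"strength": "skilled staff", "threat": "new entrant"})

def strategy_agent(bb):
    for finding in bb.read("analysis"):
        if "threat" in finding:
            bb.post("plan", f"mitigate: {finding['threat']}")

bb = Blackboard(["analysis", "plan", "monitoring"])
for agent in (swot_agent, strategy_agent):   # the control cycle
    agent(bb)
print(bb.read("plan"))  # ['mitigate: new entrant']
```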
APA, Harvard, Vancouver, ISO, and other styles
39

Thovex, Christophe. "Réseaux de Compétences : de l'Analyse des Réseaux Sociaux à l'Analyse Prédictive de Connaissances." Phd thesis, Université de Nantes, 2012. http://tel.archives-ouvertes.fr/tel-00697798.

Full text
Abstract:
In 1977, Freeman formalised the first generic measures for Social Network Analysis (SNA). Since then, the social networks of the Web "2.0" have become global (e.g., Facebook, MSN). This thesis defines a semantic, non-probabilistic, and predictive model for the decision-oriented analysis of professional and institutional social networks. In parallel with Galam's sociophysics, this model integrates methods for semantic natural language processing and knowledge engineering, statistical sociology measures, and electrodynamic laws, applied to the optimisation of economic performance and social climate. It was developed and tested within the Socioprise project, funded by the French State Secretariat for forecasting and the development of the digital economy.
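As a pointer to the classic measures the abstract starts from, here is a minimal sketch computing Freeman-style centralities on a toy network with networkx; the network itself is hypothetical.

```python
# Minimal sketch: degree and betweenness centrality (Freeman's classic
# measures) on a toy collaboration network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dan"), ("dan", "eve"),
])

print(nx.degree_centrality(G))       # normalized degree per member
print(nx.betweenness_centrality(G))  # brokers between groups score highest
```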
APA, Harvard, Vancouver, ISO, and other styles
40

Benabderrahmane, Sidahmed. "Prise en compte des connaissances du domaine dans l'analyse transcriptomique : Similarité sémantique, classification fonctionnelle et profils flous : application au cancer colorectal." Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00653169.

Full text
Abstract:
The bioinformatic analysis of transcriptomic data aims to identify genes whose expression varies between different situations, for example between samples of healthy and diseased tissue, and to characterise these genes through their functional annotations. In this thesis, I propose four contributions for taking domain knowledge into account in these methods. First, I define a new semantic and functional similarity measure (IntelliGO) between genes, which makes the best use of the functional annotations from the Gene Ontology (GO). I then show, through a rigorous evaluation methodology, that the IntelliGO measure performs well for the functional classification of genes. As a third contribution, I propose a differential approach with fuzzy assignment for building differential expression profiles (DEPs). I then define an algorithm that analyses the overlap between functional classes and a reference set, here the DEPs, in order to highlight genes that have both the same expression variations and similar functional annotations. This method is applied to experimental data produced from samples of healthy tissue, colorectal tumour, and a cancerous cell line. Finally, the IntelliGO similarity measure is generalised to other vocabularies structured as rooted directed acyclic graphs (rDAGs), as the GO ontology is, with an example application concerning semantic attribute reduction before mining.
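A minimal sketch of ancestor-based similarity over a rooted DAG follows; this Jaccard-style measure is only a stand-in for IntelliGO, and the toy hierarchy is hypothetical.

```python
# Minimal sketch, not IntelliGO itself: a Jaccard-style similarity between
# two terms of a rooted DAG (such as GO), based on their shared ancestors.
PARENTS = {
    "root": [],
    "metabolism": ["root"],
    "catabolism": ["metabolism"],
    "anabolism": ["metabolism"],
    "signaling": ["root"],
}

def ancestors(term):
    seen, stack = set(), [term]
    while stack:
        for p in PARENTS[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen | {term}

def similarity(a, b):
    A, B = ancestors(a), ancestors(b)
    return len(A & B) / len(A | B)

print(similarity("catabolism", "anabolism"))  # shares 'metabolism' and 'root'
```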
APA, Harvard, Vancouver, ISO, and other styles
41

Ribeiro, Manuel António de Melo Chinopa de Sousa. "Neural and Symbolic AI - mind the gap! Aligning Artificial Neural Networks and Ontologies." Master's thesis, 2020. http://hdl.handle.net/10362/113651.

Full text
Abstract:
Artificial neural networks have been the key to solving a variety of different problems. However, neural network models are still essentially regarded as black boxes, since they do not provide any human-interpretable evidence as to why they output a certain result. In this dissertation, we address this issue by leveraging ontologies and building small classifiers that map a neural network's internal representations to concepts from an ontology, enabling the generation of symbolic justifications for the output of neural networks. Using two image classification problems as testing ground, we discuss how to map the internal representations of a neural network to the concepts of an ontology, examine whether the results obtained by the established mappings match our understanding of the mapped concepts, and analyze the justifications obtained through this method.
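A minimal sketch of the mapping idea: a small "probe" classifier trained to predict an ontology concept from a hidden layer's activations. It assumes scikit-learn and uses synthetic data; the concept and all names are invented, not taken from the dissertation.

```python
# Sketch of the probing idea: a small linear classifier that predicts an
# ontology concept (e.g. "HasWings") from a hidden layer's activations.
# Assumes scikit-learn; the activation matrix and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
activations = rng.normal(size=(500, 64))         # stand-in for hidden features
has_wings = (activations[:, 3] > 0).astype(int)  # stand-in concept labels

X_tr, X_te, y_tr, y_te = train_test_split(activations, has_wings, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("concept probe accuracy:", probe.score(X_te, y_te))
# A high score suggests the network internally encodes the concept,
# which can then feed symbolic, ontology-based justifications.
```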
APA, Harvard, Vancouver, ISO, and other styles
42

Paiva, Luis Miguel Sintra Salvo. "Semantic relations extraction from unstructured information for domain ontologies enrichment." Master's thesis, 2015. http://hdl.handle.net/10362/16550.

Full text
Abstract:
With the growth of the internet and the semantic web, together with improvements in communication speed and the rapid development of storage capacity, the volume of data and information rises considerably every day. Because of this, recent years have seen growing interest in structures for formal representation with suitable characteristics, such as the ability to organise data and information and to reuse their contents for the generation of new knowledge. Controlled vocabularies, and specifically ontologies, stand out as one such representation structure with high potential: they allow not only the representation of data but also its reuse for knowledge extraction, coupled with subsequent storage through relatively simple formalisms. However, to ensure that the knowledge in an ontology is always up to date, ontologies need maintenance. Ontology Learning is the area that studies how ontologies are updated and maintained. The relevant literature already presents first results on automatic ontology maintenance, but these are still at a very early stage; human-driven processes remain the usual way to update and maintain an ontology, which makes this a cumbersome task. New knowledge for ontology growth can be generated using Data Mining techniques, an area that studies data processing, pattern discovery and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources, using Data Mining techniques, namely pattern discovery, aimed at improving the precision of the concepts and semantic relations present in an ontology. To verify the applicability of the proposed method, a proof of concept was developed and its results are presented, applied in the building and construction sector.
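One well-known pattern-discovery ingredient for extracting semantic relations from unstructured text is Hearst-style lexico-syntactic patterns ("X such as Y"). The toy matcher below illustrates the general idea only; it is not the method proposed in the thesis.

```python
# One classic pattern-discovery ingredient: Hearst-style lexico-syntactic
# patterns for hyponymy ("X such as Y"). A toy stand-in for the thesis's
# data-mining pipeline, not its actual method.
import re

PATTERN = re.compile(r"(\w+(?: \w+)?) such as ((?:\w+(?:, )?)+(?: and \w+)?)")

def extract_isa(text):
    pairs = []
    for m in PATTERN.finditer(text):
        hypernym = m.group(1)
        for hyponym in re.split(r", | and ", m.group(2)):
            pairs.append((hyponym, "is-a", hypernym))
    return pairs

print(extract_isa("binders such as cement, lime and gypsum are common."))
# [('cement', 'is-a', 'binders'), ('lime', 'is-a', 'binders'),
#  ('gypsum', 'is-a', 'binders')]
```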
APA, Harvard, Vancouver, ISO, and other styles
43

Remolona, Miguel Francisco Miravite. "HOLMES: A Hybrid Ontology-Learning Materials Engineering System." Thesis, 2018. https://doi.org/10.7916/D8WH46P7.

Full text
Abstract:
Designing and discovering novel materials is a challenging problem in many domains such as fuel additives, composites, pharmaceuticals, and so on. At the core of all this are models that capture how the different domain-specific data, information, and knowledge regarding the structures and properties of the materials are related to one another. This dissertation explores the difficult task of developing an artificial intelligence-based knowledge modeling environment, called the Hybrid Ontology-Learning Materials Engineering System (HOLMES), that can assist humans in populating a materials science and engineering ontology through automatic information extraction from journal article abstracts. While what we propose may be adapted for a generic materials engineering application, our focus in this thesis is on the needs of the pharmaceutical industry. We develop the Columbia Ontology for Pharmaceutical Engineering (COPE), a modification of the Purdue Ontology for Pharmaceutical Engineering; COPE serves as the basis for HOLMES. The HOLMES framework starts with journal articles in Portable Document Format (PDF) and ends with the assignment of the entries in the journal articles to ontologies. While this might seem a simple information extraction task, fully extracting the information so that the ontology is filled as completely and correctly as possible is not easy for a fully developed ontology. In developing the information extraction tasks, we note two problems that have not arisen in previous information extraction work in the literature. The first is the need to extract auxiliary information in the form of concepts such as actions, ideas, problem specifications, properties, etc. The second is the existence of multiple labels for a single token, due to the existence of the aforementioned concepts. These two problems are the focus of this dissertation. In this work, the HOLMES framework is presented as a whole, describing our successful progress as well as unsolved problems, which may help future research on this topic. The ontology is then presented to help identify the relevant information that needs to be retrieved. Annotations are developed next, to create the data sets the machine learning algorithms need. Then, the current level of information extraction for these concepts is explored and expanded through the introduction of entity feature sets based on entities previously extracted in the entity recognition task. Finally, the new task of handling multiple labels for a single entity is explored using multiple-label algorithms employed primarily in image processing.
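The second problem the abstract raises, a single token carrying several labels at once, is the standard multi-label setting. A minimal sketch with scikit-learn's one-vs-rest wrapper (tokens, labels and features are all invented for illustration):

```python
# The multiple-label problem in miniature: one token may need several tags
# (e.g. both PROPERTY and CONCEPT). Sketch with scikit-learn's
# one-vs-rest wrapper; tokens, labels and features are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

tokens = ["lactose", "solubility", "granulation", "dissolution rate"]
labels = [{"MATERIAL"}, {"PROPERTY"}, {"ACTION"}, {"PROPERTY", "CONCEPT"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                      # one binary column per tag
X = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit_transform(tokens)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X)))       # tags per token, possibly several
```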
APA, Harvard, Vancouver, ISO, and other styles
44

"Bridging the Gap between Classical Logic Based Formalisms and Logic Programs." Doctoral diss., 2012. http://hdl.handle.net/2286/R.I.14557.

Full text
Abstract:
Different logic-based knowledge representation formalisms have different limitations either with respect to expressivity or with respect to computational efficiency. First-order logic, which is the basis of Description Logics (DLs), is not suitable for defeasible reasoning due to its monotonic nature. The nonmonotonic formalisms that extend first-order logic, such as circumscription and default logic, are expressive but lack efficient implementations. The nonmonotonic formalisms that are based on the declarative logic programming approach, such as Answer Set Programming (ASP), have efficient implementations but are not expressive enough for representing and reasoning with open domains. This dissertation uses the first-order stable model semantics, which extends both first-order logic and ASP, to relate circumscription to ASP, and to integrate DLs and ASP, thereby partially overcoming the limitations of the formalisms. By exploiting the relationship between circumscription and ASP, well-known action formalisms, such as the situation calculus, the event calculus, and Temporal Action Logics, are reformulated in ASP. The advantages of these reformulations are shown with respect to the generality of the reasoning tasks that can be handled and with respect to the computational efficiency. The integration of DLs and ASP presented in this dissertation provides a framework for integrating rules and ontologies for the semantic web. This framework enables us to perform nonmonotonic reasoning with DL knowledge bases. Observing the need to integrate action theories and ontologies, the above results are used to reformulate the problem of integrating action theories and ontologies as a problem of integrating rules and ontologies, thus enabling us to use the computational tools developed in the context of the latter for the former.
Dissertation/Thesis
Ph.D. Computer Science 2012
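The flavour of defeasible reasoning in ASP that the abstract refers to can be shown in a few lines. A sketch using clingo's Python API, assuming the clingo package is installed (the program is the textbook birds example, not taken from the dissertation):

```python
# A tiny answer-set program run through clingo's Python API (assuming the
# clingo package is installed). "flies" holds by default and is defeated
# when an exception is known -- the kind of defeasible reasoning that
# monotonic first-order logic cannot express.
import clingo

program = """
bird(tweety). bird(tux). penguin(tux).
flies(X) :- bird(X), not ab(X).   % default rule
ab(X) :- penguin(X).              % exception blocks the default
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("answer set:", m))
# Prints an answer set containing flies(tweety) but not flies(tux).
```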
APA, Harvard, Vancouver, ISO, and other styles
45

Lombard, Orpha Cornelia. "The construction and use of an ontology to support a simulation environment performing countermeasure evaluation for military aircraft." Diss., 2014. http://hdl.handle.net/10500/14411.

Full text
Abstract:
This dissertation describes a research study conducted to determine the benefits and use of ontology technologies to support a simulation environment that evaluates countermeasures employed to protect military aircraft. Within the military, aircraft represent a significant investment, and these valuable assets need to be protected against various threats, such as man-portable air-defence systems. To counter attacks from these threats, countermeasures are deployed, developed and evaluated utilising modelling and simulation techniques. The system described in this research simulates real-world scenarios of aircraft, missiles and countermeasures in order to assist in the evaluation of infra-red countermeasures against missiles in specified scenarios. Traditional ontology has its origin in philosophy, describing what exists and how objects relate to each other. The use of formal ontologies in Computer Science has brought new possibilities for the modelling and representation of information and knowledge in several domains. These advantages also apply to military information systems, where ontologies support the complex nature of military information. After considering ontologies and their advantages against the requirements for enhancing the simulation system, an ontology was constructed following a formal development methodology. Design research, combined with an adaptive development methodology, was conducted in a unique way, thereby contributing to establishing design research as a formal research methodology. The ontology was constructed to capture the knowledge of the simulation system environment, and its use supports the functions of the simulation system in the domain. The research study contributes to better communication among the people involved in the simulation studies, accomplished through a shared vocabulary and a knowledge base for the domain. These contributions affirm that ontologies can successfully be used to support military simulation systems.
Computing
M. Tech. (Information Technology)
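For readers unfamiliar with formal ontology construction, a few lines with the owlready2 library show what capturing a slice of such a domain in OWL can look like. The class and property names below are our invention, not the dissertation's ontology.

```python
# Sketch of capturing a slice of the simulation domain in OWL, assuming
# the owlready2 library; classes and property names are invented here.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/countermeasures.owl")

with onto:
    class Aircraft(Thing): pass
    class Threat(Thing): pass
    class Countermeasure(Thing): pass
    class protectsAgainst(ObjectProperty):
        domain = [Countermeasure]
        range = [Threat]

flare = onto.Countermeasure("ir_flare")     # an individual
manpads = onto.Threat("manpads_1")
flare.protectsAgainst = [manpads]
print(list(onto.individuals()))
```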
APA, Harvard, Vancouver, ISO, and other styles
46

Горбуляк, Юстина Іванівна, and Yustyna Ivanivna Horbuliak. "Огляд методів аналітичної обробки текстових даних з Web-джерел для технологій Web 3.0." Master's thesis, 2021. http://elartu.tntu.edu.ua/handle/lib/36848.

Full text
Abstract:
Semantic Web Mining aims to combine two areas of rapidly evolving research – Semantic Web and Web Mining. This work analyses the convergence of trends in both areas: more and more researchers are working to improve web-mining outcomes by using semantic structures on the Internet. Importantly, these methods can also be used to develop the Semantic Web itself. These technologies make possible the transition to Web 3.0, a new level of document organisation on the World Wide Web that makes documents suitable for machine processing.
Contents: Introduction; 1 Foundations of the Semantic Web and web mining (1.1 Layers of the Semantic Web; 1.2 Ontologies: languages and tools; 1.3 Related research areas and fields of application; 1.4 Web mining; 1.5 Web-page content/text; 1.6 Link structure between web pages; 1.7 Web-page usage); 2 Extracting semantics from the Internet (2.1 Semantics created by content and structure: 2.1.1 Ontology learning; 2.1.2 Ontology mapping and merging; 2.1.3 Learning by example; 2.1.4 Using existing conceptualisations as ontologies and for automatic annotation; 2.1.5 Semantics created by structure; 2.2 Semantics created by usage); 3 Using semantics for web mining (3.1 The Semantic Web; 3.1 Content and structure mining; 3.2 Web usage mining: 3.2.1 Application events; 3.2.2 Using knowledge of application events for mining; 3.3 Combined use of Semantic Web and Web Mining; 3.4 Semantic Web Mining and other feedback loops); 4 Occupational health and safety in emergency situations (4.1 Factors behind professional burnout and its prevention for IT workers; 4.2 Creating and operating an environmental monitoring system to integrate ecological information systems covering particular territories); Conclusions; References; Appendices
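The machine-processable document descriptions at the heart of this Web 3.0 transition are RDF triples. A minimal illustration with the rdflib library (the triples are invented):

```python
# Machine-processable document metadata, the building block of Web 3.0
# that the thesis surveys. Assumes the rdflib library; triples are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.page1, RDF.type, EX.Article))
g.add((EX.page1, RDFS.label, Literal("Semantic Web Mining overview")))
g.add((EX.page1, EX.cites, EX.page2))

# SPARQL over the graph: which documents does page1 cite?
query = """
SELECT ?o WHERE { <http://example.org/page1> <http://example.org/cites> ?o }
"""
for row in g.query(query):
    print(row.o)
```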
APA, Harvard, Vancouver, ISO, and other styles
47

Barlatier, Patrick. "Conception et implantation d'un modèle de raisonnement sur les contextes basée sur une théorie des types et utilisant une ontologie de domaine." Phd thesis, 2009. http://tel.archives-ouvertes.fr/tel-00678447.

Full text
Abstract:
In this thesis we propose a possible solution to the following question: how can the environments associated with an (arbitrary) process be formalised, and how can the information they contain be used to produce relevant actions? This question led us to introduce the notion of context, together with a reusable knowledge representation to formalise it. We are therefore concerned with the notions of ontologies, contexts and actions. For representing and reasoning about contexts and actions, we propose a solution called DTF. It extends an existing constructive type theory, thereby providing great expressiveness, decidable type checking and an efficient subtyping mechanism. We show how to model contexts and actions as dependent types, starting from the data supplied about a problem and the actions to be undertaken to solve it. Finally, to test the feasibility and judge the complexity of such a solution, a "context demonstrator" was built with a functional language, and a test application called the "Wumpus world", in which a software agent moves through an unknown environment, was then implemented in LISP.
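Since the key technical device is dependent typing, a toy version of the idea, a context whose data type depends on its kind, can be written in a few lines of Lean 4. This is our illustration of the general mechanism, not DTF itself.

```lean
-- Toy version of a dependently typed context (Lean 4). The kinds and
-- payloads are invented; DTF itself is a much richer theory.
inductive Kind where
  | navigation
  | dialogue

def Payload : Kind → Type
  | .navigation => Int × Int   -- a position in the environment
  | .dialogue   => String     -- an utterance

structure Context where
  kind : Kind
  data : Payload kind          -- the data's type depends on the kind

def sample : Context := { kind := .navigation, data := (3, 4) }
```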
APA, Harvard, Vancouver, ISO, and other styles
48

Zemmouri, El Moukhtar. "Représentation et gestion des connaissances dans un processus d'Extraction de Connaissances à partir de Données multi-points de vue." Phd thesis, 2013. http://tel.archives-ouvertes.fr/tel-00940780.

Full text
Abstract:
The information systems of today's companies are increasingly "flooded" with data of all kinds: structured (databases, data warehouses), semi-structured (XML documents, log files) and unstructured (text and multimedia). This has created new challenges for companies and for the scientific community, among them how to understand and analyse such masses of data in order to extract knowledge. Moreover, within an organisation, a Knowledge Discovery from Data (KDD) project is most often carried out by several experts (domain experts, KDD experts, data experts...), each with their own preferences, area of expertise, objectives and view of the data and of KDD methods. This is what we call a multi-view KDD process (or multi-viewpoint process). Our objective in this thesis is to ease the task of the KDD analyst and to improve coordination and mutual understanding between the different actors of a multi-view analysis, as well as the reuse of the KDD process in terms of viewpoints. We therefore propose a definition that makes the notion of viewpoint in KDD explicit and that takes into account domain knowledge (both the analysed domain and the analyst's domain) and the analysis context. From this definition, we propose a set of semantic models, structured in a Conceptual Model, for representing and managing the knowledge brought into play during a multi-view analysis. Our approach rests on a multi-criteria characterisation of the viewpoint in KDD: a characterisation that first aims to capture the expert's objectives and analysis context, then to guide the execution of the KDD process, and finally to keep, in the form of annotations, a trace of the reasoning carried out during multi-expert work. These annotations are shared, compared and reused through a set of semantic relations between viewpoints.
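The multi-criteria characterisation of a viewpoint can be pictured as a small data structure grouping analyst, objective, domain knowledge, context and reasoning annotations. A bare-bones Python sketch (field names are ours, not the thesis's conceptual model):

```python
# Bare-bones picture of a multi-criteria "point of view" in a KDD project.
# Field names are illustrative only, not the thesis's semantic models.
from dataclasses import dataclass, field

@dataclass
class ViewPoint:
    analyst: str
    objective: str                   # what the expert wants out of the data
    domain_knowledge: list = field(default_factory=list)
    context: dict = field(default_factory=dict)
    annotations: list = field(default_factory=list)   # trace of reasoning

vp = ViewPoint("data-expert", "segment customers",
               domain_knowledge=["retail ontology"],
               context={"dataset": "sales-2012"})
vp.annotations.append("chose k-means after comparing silhouettes")
print(vp)
```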
APA, Harvard, Vancouver, ISO, and other styles
49

Venter, Jade Anthony. "ADLOA : an architecture description language for artificial ontogenetic architectures." Thesis, 2014. http://hdl.handle.net/10210/12433.

Full text
Abstract:
M.Com. (Information Technology)
ADLOA is an Architecture Description Language (ADL) proposed to describe biologically inspired complex adaptive architectures such as ontogenetic architectures. The need for an ontogenetic ADL stems from the lack of support in existing ADLs. This dissertation further investigates the similarities between existing intelligent architectures and ontogenetic architectures. The research conducted on current ADLs, artificial ontogeny and intelligent architectures reveals that there are similarities between ontogenetic architectures and other intelligent architectures; however, the dynamism of artificial ontogeny is not supported by existing approaches to architecture description. Therefore, the dissertation proposes two core mechanisms to address ontogenetic architecture description. Firstly, the ADLOA process is defined as a systematisation of artificial ontogeny; the process specifies a uniform approach to defining ontogenetic architectures. Secondly, a demonstration of the implemented ADLOA process is used, in conjunction with the ADLOA model, mechanisms and Graphical User Interface (GUI), to present a workable description environment for software architects. The result of the dissertation is a standalone ADL that can describe ontogenetic architectures and produce language-dependent code frameworks using the Extensible Markup Language (XML) and the Microsoft Visual Studio platform.
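To picture what an XML-backed description of an ontogenetic architecture might look like, here is a miniature, invented example built with Python's standard library; the element names are not ADLOA's actual schema.

```python
# A miniature, XML-backed architecture description built with Python's
# standard library. Element names are invented, not ADLOA's schema.
import xml.etree.ElementTree as ET

arch = ET.Element("architecture", name="ontogenetic-demo")
genome = ET.SubElement(arch, "genome")
ET.SubElement(genome, "gene", id="g1", expresses="SensorComponent")
ET.SubElement(genome, "gene", id="g2", expresses="PlannerComponent")
growth = ET.SubElement(arch, "growthRule")
growth.text = "g1 -> g2 when stimulus > threshold"

print(ET.tostring(arch, encoding="unicode"))
```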
APA, Harvard, Vancouver, ISO, and other styles
50

Shields, Philip John. "Nurse-led ontology construction: A design science approach." Thesis, 2016. https://vuir.vu.edu.au/32620/.

Full text
Abstract:
Most nursing quality studies based on the structure-process-outcome paradigm have concentrated on structure-outcome associations and have not explained the nursing process domain. This thesis turns the spotlight on the process domain and visualises nursing processes or ‘what nurses do’ by using ‘semantics’ which underpin Linking Of Data (LOD) technologies such as ontologies. Ontology construction has considerable limitations that make direct input of nursing process semantics difficult. Consequently, nursing ontologies being constructed to date use nursing process semantics collected by non-clinicians. These ontologies may have undesirable clinical implications when they are used to map nurse processes to patient outcomes. To address this issue, this thesis places nurses at the centre of semantic collection and ontology construction.
APA, Harvard, Vancouver, ISO, and other styles