To see other types of publications on this topic, follow this link: Formal ontologies.

Theses on the topic "Formal ontologies"

Create an accurate citation in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the 50 best theses for your research on the topic "Formal ontologies".

Next to each source in the list of references there is an "Add to bibliography" button. Click on this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever this information is included in the metadata.

Browse theses on a wide range of disciplines and organize your bibliography correctly.

1

Lieto, Antonio. « Non classical concept representation and reasoning in formal ontologies ». Doctoral thesis, Universita degli studi di Salerno, 2012. http://hdl.handle.net/10556/346.

Full text
Abstract:
2010 - 2011
Formal ontologies are nowadays widely considered a standard tool for knowledge representation and reasoning in the Semantic Web. In this context, they are expected to play an important role in helping automated processes access information: they provide a formal structure that makes the relationships between different concepts/terms explicit, allowing intelligent agents to interpret the semantics of web resources correctly and improving the performance of search technologies. Here we consider a problem regarding knowledge representation in general, and ontology-based representation in particular: knowledge modelling seems to be constrained by conflicting requirements, such as compositionality on the one hand and the need to represent prototypical information on the other. In particular, most common-sense concepts do not seem to be captured by the stringent semantics of formalisms such as Description Logics (the formalisms on which ontology languages have been built). The aim of this work is to analyse this problem and to suggest a possible solution suitable for formal ontologies and Semantic Web representations. The questions guiding this research have been: is it possible to provide a formal representational framework which, for the same concept, combines both the classical modelling view (accounting for compositional information) and defeasible, prototypical knowledge? Is it possible to propose a modelling architecture able to support different types of reasoning (e.g. classical deductive reasoning for the compositional component and non-monotonic reasoning for the prototypical one)? We suggest a possible answer to these questions by proposing a modelling framework able to represent, within the Semantic Web languages, a multilevel representation of conceptual information, integrating both classical and non-classical (typicality-based) information.
Within this framework we hypothesise, at least in principle, the co-existence of multiple reasoning processes involving the different levels of representation. This work is organized as follows: in chapter 1 the Semantic Web languages and the Description Logic formalisms on which they are based are briefly presented. In chapter 2, the problem on which this work focuses (i.e. conceptual representation) is illustrated and the general idea of the proposed multi-layer framework is sketched. In chapter 3 the psychological theories of concepts based on prototypes and exemplars are surveyed. We argue that this distinction can be useful in our approach because it allows us (i) to have a more complete representation of concepts and (ii) to hypothesise different types of non-monotonic reasoning processes (e.g. non-monotonic categorization). In chapter 4 the proposed modelling architecture is presented and, in chapter 5, it is evaluated on particular information retrieval tasks. Chapter 6 is dedicated to the conclusions. [edited by author]
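The multilevel, typicality-based representation described in this abstract can be loosely illustrated in code. The sketch below is a hypothetical toy model, not the thesis's actual formalism: a concept pairs a classical, compositional core (checked deductively) with a defeasible prototypical layer (matched by degree, tolerating exceptions).

```python
# Hypothetical toy model (invented names and traits, not the thesis's actual
# formalism): a concept pairs a classical, compositional core with a
# defeasible prototypical layer.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    necessary: set = field(default_factory=set)  # classical definition: must hold
    typical: set = field(default_factory=set)    # prototypical traits: hold by default

    def classical_member(self, traits):
        # Deductive check: every necessary condition must be satisfied.
        return self.necessary <= traits

    def typicality(self, traits):
        # Non-monotonic layer: degree of match with the prototype;
        # exceptions lower the score but do not exclude membership.
        if not self.typical:
            return 0.0
        return len(self.typical & traits) / len(self.typical)

bird = Concept("Bird", necessary={"animal", "has_feathers"},
               typical={"flies", "sings", "builds_nests"})

penguin = {"animal", "has_feathers", "swims"}
print(bird.classical_member(penguin))  # True: meets the classical definition
print(bird.typicality(penguin))        # 0.0: an atypical bird
```

A penguin satisfies the classical definition of Bird while scoring zero on typicality, which is exactly the kind of distinction a purely classical Description Logic definition cannot express.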
APA, Harvard, Vancouver, ISO, etc. styles
2

Linck, Ricardo Ramos. « Conceptual modeling of formal and material relations applied to ontologies ». reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/108626.

Full text
Abstract:
Ontologies represent a shared conceptualization of a knowledge community. They are built from descriptions of the meaning of concepts, expressed through their attributes and their relationships. Concepts refer to the object of conceptualization, the universe of discourse; they are characterized by their attributes and domains of possible values. Relationships are used to describe how concepts are structured in the world. In ontologies all concepts are hierarchically defined, but there are other relationships that are definitional, giving identity to the concepts and meaning to the world. In addition to the subsumption relationships that build taxonomies of concepts, other formal and material relations assist in structuring the domain and in conceptual definition. Modelling tools, however, are still deficient in differentiating the various types of formal and material relationships so as to enable automated reasoning. In particular, mereological and partonomic relationships lack implementation options that allow their semantic potential to be exploited in modelling. This research project takes as its starting point a study of the literature on ontologies and relations, especially formal and material relations, including mereological and partonomic relations, reviewing the principles found in ontologies. Furthermore, we identify the theoretical foundations of these relations and analyze how they are applied in the main foundational ontologies in use today. Building on the proposals surveyed, this work then proposes an alternative for the conceptual modeling of these relations in a visual domain ontology. This alternative has been made available in the ontology-building tool of the Obaitá Project, which is under development by the Intelligent Databases Research Group (BDI) at UFRGS.
APA, Harvard, Vancouver, ISO, etc. styles
3

Venugopal, Manu. « Formal specification of industry foundation class concepts using engineering ontologies ». Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42868.

Full text
Abstract:
Architecture, Engineering, Construction (AEC) and Facilities Management (FM) involve domains that require a very diverse set of information and model exchanges to fully realize the potential of Building Information Modeling (BIM). Industry Foundation Classes (IFC) provide a neutral and open schema for interoperability. Model View Definitions (MVDs) provide a common subset for specifying exchanges using IFC, but are expensive to build, test and maintain. A semantic analysis of the IFC data schema illustrates the complexities of embedding semantics in model views. This research adopts a software engineering methodology, based on the formal specification of shared resources, reusable components and standards applicable to the AEC-FM industry, for the development of a Semantic Exchange Module (SEM) structure for the IFC schema. This SEM structure is based on engineering ontologies that are capable of supporting more consistent MVDs. Here, an ontology is understood as a machine-readable set of definitions that create a taxonomy of classes and subclasses, and relationships between them. Typically, the ontology contains a hierarchical description of the important entities used in IFC, along with their properties and business rules. This ontological framework, similar to that of the Semantic Web, makes the IFC more formal and consistent, as it is capable of providing precise definitions of terms and vocabulary. The outcome of this research, a formal classification structure for IFC implementations in the Precast/Prestressed Concrete industry, provides software developers with the mechanism for applications such as modular MVDs, smart and complex querying of product models, and transaction-based services, based on the idea of testable and reusable SEMs.
It can be extended, and it also helps in the consistent implementation of rule languages across different domains within AEC-FM, making data sharing across applications simpler, with limited rework. This research is expected to impact the overall interoperability of applications in the BIM realm.
APA, Harvard, Vancouver, ISO, etc. styles
4

Hacid, Kahina. « Handling domain knowledge in system design models. An ontology based approach ». Phd thesis, Toulouse, INPT, 2018. http://oatao.univ-toulouse.fr/20157/7/HACID_kahina.pdf.

Full text
Abstract:
Complex systems models are designed in heterogeneous domains, and this heterogeneity is rarely considered explicitly when describing and validating processes. Moreover, these systems usually involve several domain experts and several design models corresponding to different analyses (views) of the same system. However, no explicit information regarding the characteristics of either the domain or the performed system analyses is given. In this thesis, we propose a general framework offering, first, the formalization of domain knowledge using ontologies and, second, the capability to strengthen design models by making explicit references to the domain knowledge formalized in these ontologies. The framework also provides resources for making the features of an analysis explicit by formalizing them within models qualified as "points of view". We have set up two deployments of our approach: one based on Model Driven Engineering (MDE) and one based on formal methods, using proof and refinement. This general framework has been validated on several non-trivial case studies from systems engineering.
APA, Harvard, Vancouver, ISO, etc. styles
5

Leshi, Olumide. « An Approach to Extending Ontologies in the Nanomaterials Domain ». Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170255.

Full text
Abstract:
As recently as the last decade or two, data-driven science workflows have become increasingly popular, and semantic technology has been relied on to help align often parallel research efforts in different domains and to foster interoperability and data sharing. A key challenge, however, is the size of the data and the pace at which it is being generated, so much so that manual procedures lag behind, calling for the automation of most workflows. This study continues the investigation of ways in which some tasks performed by experts in the nanotechnology domain, specifically in ontology engineering, could benefit from automation. An approach featuring phrase-based topic modelling and formal topical concept analysis, together with formal implication rules, is motivated as a way to uncover new concepts and axioms relevant to two nanotechnology-related ontologies. A corpus of 2,715 nanotechnology research articles helps showcase that the approach can scale, as seen in a number of experiments. The usefulness of document text ranking as an alternative form of input to topic models is highlighted, as is the benefit of implication rules for the task of concept discovery. In all, a total of 203 new concepts are uncovered by the approach to extend the referenced ontologies.
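The formal concept analysis step mentioned in the abstract rests on a pair of derivation operators over an object-attribute context; a formal concept is a pair (extent, intent) in which each determines the other. A minimal sketch, with an invented toy context:

```python
# Illustrative sketch (hypothetical data): formal concept analysis derives
# (extent, intent) pairs from an object-attribute context, the kind of
# structure used to suggest new ontology concepts.
context = {
    "nanotube":  {"carbon", "cylindrical"},
    "fullerene": {"carbon", "spherical"},
    "nanowire":  {"cylindrical", "metallic"},
}

def intent(objects):
    """Attributes shared by every object in the set."""
    attrs = [context[o] for o in objects]
    return set.intersection(*attrs) if attrs else set()

def extent(attributes):
    """Objects possessing every attribute in the set."""
    return {o for o, a in context.items() if attributes <= a}

# A formal concept is a pair where extent and intent determine each other.
objs = extent({"carbon"})
print(sorted(objs))          # ['fullerene', 'nanotube']
print(sorted(intent(objs)))  # ['carbon']
```

Each closed pair such as ({nanotube, fullerene}, {carbon}) is a candidate class; pairs not already present in the target ontology are the "new concepts" the approach proposes.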
APA, Harvard, Vancouver, ISO, etc. styles
6

Hassan, Mohsen. « Knowledge Discovery Considering Domain Literature and Ontologies : Application to Rare Diseases ». Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0092/document.

Full text
Abstract:
Even though they are individually uncommon, Rare Diseases (RDs) are numerous and generally severe, which makes their study important from a health-care point of view. A few databases provide information about RDs, such as Orphanet and Orphadata. Despite their laudable efforts, they are incomplete and usually not up to date in comparison with what exists in the literature. Indeed, there are millions of scientific publications about these diseases, and their number is continuously increasing. This makes the manual extraction of this information painful and time-consuming, and thus motivates the development of semi-automatic approaches to extract information from texts and represent it in a format suitable for further applications. This thesis aims at extracting information from texts and using the result of the extraction to enrich existing ontologies of the considered domain. We studied three research directions: (1) extracting relationships from text, i.e., extracting Disease-Phenotype (D-P) relationships; (2) identifying new complex entities, i.e., identifying phenotypes of an RD; and (3) enriching an existing ontology on the basis of the relationships previously extracted, i.e., enriching an RD ontology. First, we mined a collection of abstracts of scientific articles, represented as a collection of graphs, to discover relevant pieces of biomedical knowledge. We focused on completing RD descriptions by extracting D-P relationships. This could find applications in automating the update process of RD databases such as Orphanet. Accordingly, we developed an automatic approach named SPARE* for extracting D-P relationships from PubMed abstracts, where phenotypes and RDs are annotated by a Named Entity Recognizer. SPARE* is a hybrid approach that combines a pattern-based method, called SPARE, and a machine learning method (an SVM). It benefits both from the relatively good precision of SPARE and from the good recall of the SVM.
Second, SPARE* has been used for identifying phenotype candidates from texts. We selected high-quality syntactic patterns that are specific to D-P relationships. These patterns are then relaxed on the phenotype constraint to enable the extraction of phenotype candidates that are not referenced in databases or ontologies. These candidates are verified and validated by comparison with phenotype classes in a well-known phenotype ontology (e.g., HPO). This comparison relies on a compositional semantic model and a set of manually defined mapping rules for mapping an extracted phenotype candidate to a phenotype term in the ontology. This shows the ability of SPARE* to identify both existing and potentially new RD phenotypes. We applied SPARE* to PubMed abstracts to extract RD phenotypes, which we either map to the content of the Orphanet encyclopedia and Orphadata, or suggest as novel to experts for completing these two resources. Finally, we applied pattern structures for classifying RDs and enriching an existing ontology. First, we used SPARE* to compute the phenotype descriptions of RDs available in Orphadata. We then propose comparing and grouping RDs with regard to their phenotypic descriptions using pattern structures, which make it possible to consider both domain knowledge, consisting of an RD ontology and a phenotype ontology, and D-P relationships from various origins. The lattice generated from these pattern structures suggests a new classification of RDs, which in turn suggests new RD classes that do not exist in the original RD ontology. As their number is large, we propose different selection methods to select a reduced set of interesting RD classes that we suggest to experts for further analysis.
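The hybrid design described for SPARE* (high-precision syntactic patterns backed by a higher-recall learned classifier) can be sketched roughly as follows. The patterns and names here are illustrative inventions, and the SVM component is reduced to a stub:

```python
# Rough sketch of the hybrid pattern-plus-classifier idea (not SPARE*'s
# actual patterns or model).
import re

PATTERNS = [
    # High-precision syntactic patterns for disease-phenotype relations.
    re.compile(r"(?P<d>[A-Z][\w ]+?) is characterized by (?P<p>[\w ]+)"),
    re.compile(r"patients with (?P<d>[A-Z][\w ]+?) (?:often )?present (?P<p>[\w ]+)"),
]

def ml_fallback(sentence):
    # Placeholder for the SVM component: in the real system a trained
    # classifier decides whether the sentence expresses a D-P relation.
    return None

def extract_dp(sentence):
    # Try the precise patterns first; fall back to the learned model.
    for pat in PATTERNS:
        m = pat.search(sentence)
        if m:
            return (m.group("d").strip(), m.group("p").strip())
    return ml_fallback(sentence)

print(extract_dp("Marfan syndrome is characterized by aortic dilatation"))
# ('Marfan syndrome', 'aortic dilatation')
```

Relaxing the phenotype constraint, as the abstract describes, would amount to loosening the `p` group of each pattern so that strings not yet listed in any ontology can still surface as candidates.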
APA, Harvard, Vancouver, ISO, etc. styles
7

Kriegel, Francesco [Verfasser], Franz [Akademischer Betreuer] Baader, Franz [Gutachter] Baader et Sergei O. [Gutachter] Kuznetsov. « Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis / Francesco Kriegel ; Gutachter : Franz Baader, Sergei O. Kuznetsov ; Betreuer : Franz Baader ». Dresden : Technische Universität Dresden, 2019. http://d-nb.info/1226942601/34.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
8

Tsatsaronis, George, Yue Ma, Alina Petrova, Maria Kissa, Felix Distel, Franz Baader et Michael Schroeder. « Formalizing biomedical concepts from textual definitions ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-192186.

Full text
Abstract:
Background: Ontologies play a major role in the life sciences, enabling a number of applications, from data integration to knowledge verification. SNOMED CT is a large medical ontology that is formally defined so as to ensure global consistency and support complex reasoning tasks. Most biomedical ontologies and taxonomies, on the other hand, define concepts only textually, without the use of logic. Here, we investigate how to automatically generate formal concept definitions from textual ones. We develop a method that uses machine learning in combination with several types of lexical and semantic features, and outputs formal definitions that follow the structure of SNOMED CT concept definitions. Results: We evaluate our method on three benchmarks and test both the underlying relation extraction component and the overall quality of the output concept definitions. In addition, we analyse the following aspects: (1) How do definitions mined from the Web and the literature differ from those mined from manually created definitions, e.g., MeSH? (2) How do different feature representations, e.g., the restrictions on relations' domain and range, impact the quality of the generated definitions? (3) How do different machine learning algorithms compare on the task of formal definition generation? (4) What is the influence of the size of the learning data on the task? We discuss all of these settings in detail and show that the suggested approach can achieve success rates of over 90%. The results also show that the choice of corpora, lexical features, learning algorithm and data size does not impact performance as strongly as semantic types do. Semantic types limit the domain and range of a predicted relation, and as long as relations' domain and range pairs do not overlap, this information is the most valuable in formalizing textual definitions.
Conclusions: The analysis presented in this manuscript implies that automated methods can make a valuable contribution to the formalization of biomedical knowledge, paving the way for future applications that go beyond retrieval into complex reasoning. The method is implemented and publicly accessible at https://github.com/alifahsyamsiyah/learningDL.
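The closing observation about semantic types can be made concrete with a small sketch: a predicted relation is accepted only if its arguments' semantic types match the relation's declared domain and range. The signatures and entity types below are invented examples, not SNOMED CT content:

```python
# Hedged illustration of semantic-type filtering (invented signatures and
# entity types, not SNOMED CT data): a candidate relation is kept only if
# its arguments' types match the relation's declared domain and range.
RELATION_SIGNATURES = {
    "finding_site":    ("disorder", "body_structure"),
    "causative_agent": ("disorder", "organism"),
}

ENTITY_TYPES = {
    "pneumonia":     "disorder",
    "lung":          "body_structure",
    "S. pneumoniae": "organism",
}

def admissible(relation, arg1, arg2):
    # Check the candidate triple against the relation's type signature.
    dom, rng = RELATION_SIGNATURES[relation]
    return ENTITY_TYPES.get(arg1) == dom and ENTITY_TYPES.get(arg2) == rng

print(admissible("finding_site", "pneumonia", "lung"))           # True
print(admissible("finding_site", "pneumonia", "S. pneumoniae"))  # False
```

As the abstract notes, this filter is most informative when relations' domain/range pairs do not overlap, since the types then determine which relation a candidate triple can plausibly instantiate.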
APA, Harvard, Vancouver, ISO, etc. styles
9

Petrova, Alina, Yue Ma, George Tsatsaronis, Maria Kissa, Felix Distel, Franz Baader et Michael Schroeder. « Formalizing biomedical concepts from textual definitions ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-191181.

Full text
Abstract:
BACKGROUND: Ontologies play a major role in the life sciences, enabling a number of applications, from data integration to knowledge verification. SNOMED CT is a large medical ontology that is formally defined so as to ensure global consistency and support complex reasoning tasks. Most biomedical ontologies and taxonomies, on the other hand, define concepts only textually, without the use of logic. Here, we investigate how to automatically generate formal concept definitions from textual ones. We develop a method that uses machine learning in combination with several types of lexical and semantic features, and outputs formal definitions that follow the structure of SNOMED CT concept definitions. RESULTS: We evaluate our method on three benchmarks and test both the underlying relation extraction component and the overall quality of the output concept definitions. In addition, we analyse the following aspects: (1) How do definitions mined from the Web and the literature differ from those mined from manually created definitions, e.g., MeSH? (2) How do different feature representations, e.g., the restrictions on relations' domain and range, impact the quality of the generated definitions? (3) How do different machine learning algorithms compare on the task of formal definition generation? (4) What is the influence of the size of the learning data on the task? We discuss all of these settings in detail and show that the suggested approach can achieve success rates of over 90%. The results also show that the choice of corpora, lexical features, learning algorithm and data size does not impact performance as strongly as semantic types do. Semantic types limit the domain and range of a predicted relation, and as long as relations' domain and range pairs do not overlap, this information is the most valuable in formalizing textual definitions.
CONCLUSIONS: The analysis presented in this manuscript implies that automated methods can make a valuable contribution to the formalization of biomedical knowledge, paving the way for future applications that go beyond retrieval into complex reasoning. The method is implemented and publicly accessible at https://github.com/alifahsyamsiyah/learningDL.
APA, Harvard, Vancouver, ISO, etc. styles
10

Nasiri, Khoozani Ehsan. « An ontological framework for the formal representation and management of human stress knowledge ». Thesis, Curtin University, 2011. http://hdl.handle.net/20.500.11937/2220.

Full text
Abstract:
There is a great deal of information on the topic of human stress embedded within numerous papers across various databases. However, this information is often stored, retrieved, and used discretely and dispersedly. As a result, discovering and identifying the links and interrelatedness between different aspects of knowledge on stress is difficult, which restricts the effective search and retrieval of desired information. There is a need to organize this knowledge under a unifying framework, linking and analysing it in mutual combinations so that an inclusive view of the related phenomena can be obtained and new knowledge can emerge. Furthermore, there is a need to establish evidence-based and evolving relationships between ontology concepts. Previous efforts to classify and organize stress-related phenomena have not been sufficiently inclusive, and none of them has considered the use of an ontology as an effective facilitating tool for the abovementioned issues. There have also been some research works on the evolution and refinement of ontology concepts and relationships; however, these fail to provide any proposal for an automatic and systematic methodology with the capacity to establish evidence-based, evolving ontology relationships. In response to these needs, we have developed the Human Stress Ontology (HSO), a formal framework which specifies, organizes, and represents the domain knowledge of human stress. This machine-readable knowledge model is likely to help researchers and clinicians find theoretical relationships between different concepts, resulting in a better understanding of the human stress domain and its related areas.
The HSO is formalized using the OWL language and the Protégé tool. With respect to the evolution and evidentiality of ontology relationships in the HSO and other scientific ontologies, we have proposed the Evidence-Based Evolving Ontology (EBEO), a methodology for the refinement and evolution of ontology relationships based on evidence gleaned from the scientific literature. The EBEO is based on the implementation of a Fuzzy Inference System (FIS). Our evaluation results showed that almost all stress-related concepts in the sample articles can be placed under one or more categories of the HSO. Nevertheless, there were a number of limitations in this work which need to be addressed in future undertakings. The developed ontology has the potential to be used for different data integration and interoperation purposes in the domain of human stress. It can also be regarded as a foundation for the future development of semantic search engines in the stress domain.
APA, Harvard, Vancouver, ISO, etc. styles
11

Petrova, Alina, Yue Ma, George Tsatsaronis, Maria Kissa, Felix Distel, Franz Baader et Michael Schroeder. « Formalizing biomedical concepts from textual definitions ». BioMed Central, 2015. https://tud.qucosa.de/id/qucosa%3A29123.

Full text
Abstract:
BACKGROUND: Ontologies play a major role in the life sciences, enabling a number of applications, from data integration to knowledge verification. SNOMED CT is a large medical ontology that is formally defined so as to ensure global consistency and support complex reasoning tasks. Most biomedical ontologies and taxonomies, on the other hand, define concepts only textually, without the use of logic. Here, we investigate how to automatically generate formal concept definitions from textual ones. We develop a method that uses machine learning in combination with several types of lexical and semantic features, and outputs formal definitions that follow the structure of SNOMED CT concept definitions. RESULTS: We evaluate our method on three benchmarks and test both the underlying relation extraction component and the overall quality of the output concept definitions. In addition, we analyse the following aspects: (1) How do definitions mined from the Web and the literature differ from those mined from manually created definitions, e.g., MeSH? (2) How do different feature representations, e.g., the restrictions on relations' domain and range, impact the quality of the generated definitions? (3) How do different machine learning algorithms compare on the task of formal definition generation? (4) What is the influence of the size of the learning data on the task? We discuss all of these settings in detail and show that the suggested approach can achieve success rates of over 90%. The results also show that the choice of corpora, lexical features, learning algorithm and data size does not impact performance as strongly as semantic types do. Semantic types limit the domain and range of a predicted relation, and as long as relations' domain and range pairs do not overlap, this information is the most valuable in formalizing textual definitions.
CONCLUSIONS: The analysis presented in this manuscript implies that automated methods can provide a valuable contribution to the formalization of biomedical knowledge, thus paving the way for future applications that go beyond retrieval and into complex reasoning. The method is implemented and accessible to the public from: https://github.com/alifahsyamsiyah/learningDL.
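The abstract's point about semantic types can be illustrated with a small sketch. The relation names and semantic types below are hypothetical placeholders, not SNOMED CT's actual concept model: a predicted relation is kept only when its subject and object types match the relation's declared domain/range signature.

```python
# Illustrative sketch; relation names and semantic types are invented,
# not taken from SNOMED CT. A semantic-type signature restricts the
# admissible domain and range of a predicted relation.
RELATION_SIGNATURES = {
    "causative_agent": ("Disorder", "Organism"),
    "finding_site": ("Disorder", "BodyStructure"),
}

def is_admissible(relation: str, subj_type: str, obj_type: str) -> bool:
    """Keep a predicted relation only if its argument types match its signature."""
    sig = RELATION_SIGNATURES.get(relation)
    return sig == (subj_type, obj_type)
```

As long as the domain/range pairs of different relations do not overlap, such a check alone disambiguates which relation a predicted pair of arguments can instantiate, which matches the abstract's finding that semantic types carry most of the signal.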
Styles APA, Harvard, Vancouver, ISO, etc.
12

Tsatsaronis, George, Yue Ma, Alina Petrova, Maria Kissa, Felix Distel, Franz Baader et Michael Schroeder. « Formalizing biomedical concepts from textual definitions : Research Article ». Journal of Biomedical Semantics, 2010. https://tud.qucosa.de/id/qucosa%3A29146.

Texte intégral
Résumé :
Background Ontologies play a major role in life sciences, enabling a number of applications, from new data integration to knowledge verification. SNOMED CT is a large medical ontology that is formally defined so that it ensures global consistency and supports complex reasoning tasks. Most biomedical ontologies and taxonomies, on the other hand, define concepts only textually, without the use of logic. Here, we investigate how to automatically generate formal concept definitions from textual ones. We develop a method that uses machine learning in combination with several types of lexical and semantic features and outputs formal definitions that follow the structure of SNOMED CT concept definitions. Results We evaluate our method on three benchmarks, testing both the underlying relation extraction component and the overall quality of the output concept definitions. In addition, we provide an analysis of the following aspects: (1) How do definitions mined from the Web and literature differ from ones mined from manually created definitions, e.g., MeSH? (2) How do different feature representations, e.g., the restrictions of relations’ domain and range, affect the quality of the generated definitions? (3) How do different machine learning algorithms compare for the task of formal definition generation? (4) What is the influence of the size of the learning data on the task? We discuss all of these settings in detail and show that the suggested approach can achieve success rates of over 90%. In addition, the results show that the choice of corpora, lexical features, learning algorithm and data size do not impact performance as strongly as semantic types do. Semantic types limit the domain and range of a predicted relation, and as long as relations’ domain and range pairs do not overlap, this information is the most valuable in formalizing textual definitions.
Conclusions The analysis presented in this manuscript implies that automated methods can provide a valuable contribution to the formalization of biomedical knowledge, thus paving the way for future applications that go beyond retrieval and into complex reasoning. The method is implemented and accessible to the public from: https://github.com/alifahsyamsiyah/learningDL.
Styles APA, Harvard, Vancouver, ISO, etc.
13

Gallina, Leandro Zulian. « Extração e representação semântica de fatos temporais ». reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/55443.

Texte intégral
Résumé :
Este trabalho descreve EXTIO (Extraction of Temporal Information Using Ontologies), uma abordagem que permite a normalização de expressões temporais e a organização em ontologia de fatos temporais extraídos de texto em linguagem natural. Isto permite que motores de busca possam aproveitar melhor a informação temporal de páginas da Web, realizando inferências sobre fatos temporais. EXTIO propõe: a normalização de expressões temporais relativas através de uma gramática formal para a língua inglesa; e a organização de fatos temporais extraídos do texto normalizado em uma ontologia. Expressões temporais relativas são construções textuais de tempo que se referem a uma data absoluta cujo valor é relativo a outra data. Por exemplo, a expressão “three months ago” (três meses atrás) é uma expressão temporal relativa, pois seu surgimento no texto se refere a uma data três meses antes da data de publicação do documento. Experimentos demonstram que a gramática formal proposta para a normalização de expressões temporais relativas supera o baseline na eficácia da normalização e no tempo de processamento de documentos em linguagem natural. A principal contribuição deste trabalho é a gramática formal para normalização de expressões temporais relativas de texto na língua inglesa. Também é contribuição deste trabalho o processamento semântico da informação temporal disponível em formato texto em documentos, para que possa ser melhor aproveitada por motores de busca.
This work describes EXTIO, an approach for the normalization of temporal expressions and the semantic organization of temporal facts extracted from natural language text. This approach allows search engines to benefit from temporal information in Web pages, performing inferences on temporal facts. EXTIO proposes: the normalization of relative temporal expressions through a formal grammar for the English language; and the organization of temporal facts extracted from normalized text in an ontology. Relative temporal expressions are textual time structures that refer to an absolute date whose value is relative to another date. For instance, “three months ago” is a relative temporal expression because its appearance in the text refers to a date three months before the document publication date. Experiments show that the proposed formal grammar for the normalization of relative temporal expressions has a better performance than the baseline in effectiveness and processing time. The main contribution of this work is the formal grammar for the normalization of temporal expressions in natural language text in English. Another contribution of this work is the semantic processing of temporal information available in documents, so that search engines may benefit from this information.
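The kind of normalization described above can be sketched in a few lines. This is an illustrative fragment, not EXTIO's actual formal grammar: it handles only digit-based "<n> <unit> ago" expressions and resolves them against an assumed reference date such as the document's publication date.

```python
import re
from datetime import date, timedelta

# Hypothetical sketch (not the thesis's grammar): normalize simple English
# relative temporal expressions of the form "<n> <unit>(s) ago" against a
# reference date, e.g. the publication date of the document.
_DAY_UNITS = {"day": 1, "week": 7}

def normalize_relative(expr: str, ref: date) -> date:
    m = re.fullmatch(r"(\d+)\s+(day|week|month|year)s?\s+ago", expr.strip())
    if not m:
        raise ValueError(f"unsupported expression: {expr!r}")
    n, unit = int(m.group(1)), m.group(2)
    if unit in _DAY_UNITS:
        return ref - timedelta(days=n * _DAY_UNITS[unit])
    # month/year arithmetic: step back whole months, clamp the day to stay valid
    months_back = n if unit == "month" else 0
    years_back = n if unit == "year" else 0
    month = ref.month - months_back
    year = ref.year - years_back + (month - 1) // 12
    month = (month - 1) % 12 + 1
    return date(year, month, min(ref.day, 28))
```

For example, "3 months ago" relative to a document published on 2012-06-15 resolves to 2012-03-15, mirroring the "three months ago" case in the abstract.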
Styles APA, Harvard, Vancouver, ISO, etc.
14

Mary, Melissa. « Intéropérabilité sémantique dans le domaine du diagnostic in vitro : Représentation des Connaissances et Alignement ». Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR033.

Texte intégral
Résumé :
La centralisation des données patients au sein de répertoires numériques soulève des problématiques d’interopérabilité avec les différents systèmes d’information médicaux tels que ceux utilisés en clinique, à la pharmacie ou dans les laboratoires d’analyse. Les instances de santé publique, en charge de développer et de déployer ces dossiers, recommandent l’utilisation de standards pour structurer (syntaxe) et coder l’information (sémantique). Pour les données du diagnostic in vitro (DIV) deux standards sémantiques sont largement préconisés : - la terminologie LOINC® (Logical Observation Identifier Names and Codes) pour représenter les tests de laboratoire ; - l’ontologie SNOMED CT® (Systematized Nomenclature Of MEDicine Clinical Terms) pour exprimer les résultats observés. Ce travail de thèse s’articule autour des problématiques d’interopérabilité sémantique en microbiologie clinique avec deux axes principaux : Comment aligner un Système Organisé de Connaissances du DIV en microbiologie avec l’ontologie SNOMED CT® ? Pour répondre à cet objectif j’ai pris le parti dans mon travail de thèse de développer des méthodologies d’alignement adaptées aux données du diagnostic in vitro plutôt que de proposer une méthode spécifique à l’ontologie SNOMED CT®. Les méthodes usuelles pour l’alignement d’ontologies ont été évaluées sur un alignement de référence entre LOINC® et SNOMED CT®. Les plus pertinentes sont implémentées dans une librairie R, qui sert de point de départ pour créer de nouveaux alignements au sein de bioMérieux. Quels sont les bénéfices et limites d’une représentation formelle des connaissances du DIV ? Pour répondre à cet objectif je me suis intéressée à la formalisation du couple « test-résultat » (Observation) au sein d’un compte-rendu de laboratoire. J’ai proposé un formalisme logique pour représenter les tests de la terminologie LOINC® qui a permis de montrer les bénéfices d’une représentation ontologique pour classer et requêter les tests.
Dans un second temps, j’ai formalisé un patron d’observations compatible avec l’ontologie SNOMED CT® et aligné sur les concepts de la top-ontologie BioTopLite2. Enfin, le patron d’observation a été évalué afin d’être utilisé au sein des systèmes d’aide à la décision en microbiologie clinique. Pour résumer, ma thèse s’inscrit dans une dynamique de partage et de réutilisation des données patients. Les problématiques d’interopérabilité sémantique et de formalisation des connaissances dans le domaine du diagnostic in vitro freinent aujourd’hui encore le développement de systèmes experts. Mes travaux de recherche ont permis de lever certains de ces verrous et pourront être réutilisés dans de nouveaux systèmes intelligents en microbiologie clinique afin de surveiller par exemple l’émergence de bactéries multi-résistantes, et d’adapter en conséquence les thérapies antibiotiques.
The centralization of patient data in different digital repositories raises issues of interoperability with the different medical information systems, such as those used in clinics, pharmacies or medical laboratories. The public health authorities, charged with developing and implementing these repositories, recommend the use of standards to structure (syntax) and encode (semantics) health information. For data from in vitro diagnostics (IVD) two standards are recommended: - the LOINC® terminology (Logical Observation Identifier Names and Codes) to represent laboratory tests; - the SNOMED CT® ontology (Systematized Nomenclature Of MEDicine Clinical Terms) to express the observed results. This thesis focuses on semantic interoperability problems in clinical microbiology, with two major axes: How can an IVD Knowledge Organization System be aligned with SNOMED CT®? To answer this, I opted for the development of alignment methodologies adapted to in vitro diagnostic data rather than proposing a method specific to SNOMED CT®. The common alignment methods are evaluated on a gold-standard alignment between LOINC® and SNOMED CT®. The most appropriate are implemented in an R library, which serves as a starting point to create new alignments at bioMérieux. What are the advantages and limits of a formal representation of IVD knowledge? To answer this, I looked into the formalization of the couple ‘test-result’ (observation) in a laboratory report. I proposed a logical formalization to represent the LOINC® terminology and demonstrated the advantages of an ontological representation to sort and query laboratory tests. As a second step, I formalized an observation pattern compatible with the SNOMED CT® ontology and aligned with the concepts of the top-ontology BioTopLite2. Finally, the observation pattern was evaluated in order to be used within clinical microbiology expert systems. To summarize, my thesis addresses issues of IVD patient data sharing and reuse.
At present, the problems of semantic interoperability and knowledge formalization in the field of in vitro diagnostics hamper the development of expert systems. My research has lifted some of these obstacles, and could be reused in new intelligent clinical microbiology systems, for example to monitor the emergence of multi-resistant bacteria and adapt antibiotic therapies accordingly.
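The lexical end of the alignment methods the abstract mentions can be sketched roughly as follows. The labels below are invented, and this is not the thesis's R implementation: candidate term pairs are simply scored by Jaccard overlap of their normalized label tokens.

```python
# Illustrative lexical-alignment sketch; labels are invented and the actual
# LOINC/SNOMED CT records are not reproduced. Candidate pairs are scored by
# Jaccard overlap of normalized label tokens.
def tokens(label: str) -> set:
    return set(label.lower().replace("-", " ").split())

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def align(source_labels, target_labels, threshold=0.5):
    """Pair each source label with its best-scoring target above the threshold."""
    pairs = []
    for s in source_labels:
        best = max(target_labels, key=lambda t: jaccard(s, t))
        if jaccard(s, best) >= threshold:
            pairs.append((s, best))
    return pairs
```

Real alignment pipelines layer structural and semantic evidence on top of such a lexical baseline; evaluating candidates against a gold-standard alignment, as described above, is what separates useful matchers from noise.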
Styles APA, Harvard, Vancouver, ISO, etc.
15

Loebe, Frank. « Ontological Semantics ». Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-166326.

Texte intégral
Résumé :
The original and still a major purpose of ontologies in computer and information sciences is to serve for the semantic integration of represented content, facilitating information system interoperability. Content can be data, information, and knowledge, and it can be distributed within or across these categories. A myriad of languages is available for representation. Ontologies themselves are artifacts which are expressed in various languages. Different such languages are utilized today, including, as well-known representatives, predicate logic, subsuming first-order (predicate) logic (FOL), in particular, and higher-order (predicate) logic (HOL); the Web Ontology Language (OWL) on the basis of description logics (DL); and the Unified Modeling Language (UML). We focus primarily on languages with formally defined syntax and semantics. This overall picture immediately suggests questions of the following kinds: What is the relationship between an ontology and the language in which it is formalized? Especially, what is the impact of the formal semantics of the language on the formalized ontology? How well understood is the role of ontologies in semantic integration? Can the same ontology be represented in multiple languages and/or in distinct ways within one language? Is there an adequate understanding of whether two expressions are intensionally/conceptually equivalent and whether two ontologies furnish the same ontological commitments? One may assume that these questions are resolved. Indeed, the development and adoption of ontologies is widespread today. Ontologies are authored in a broad range of different languages, including offering equally named ontologies in distinct languages. Much research is devoted to techniques and technologies that orbit ontologies, for example, ontology matching, modularization, learning, and evolution, to name a few. 
Ontologies have found numerous beneficial applications, and hundreds of ontologies have been created, considering solely the context of biomedical research. For us, these observations increase the relevance of the stated questions and close relatives thereof, and raise the desire for solid theoretical underpinnings. In the literature of computer and information sciences, we have found only a few approaches that tackle the foundations of ontologies and their representation so as to allow for answering such questions, or that actually answer them. As the first of the central contributions of this thesis, we elaborate an analysis of the subject. It mainly results in the identification of a vicious circularity in (i) the intended use of ontologies to mediate between formal representations and (ii) solely exploiting formal semantic notions in representing ontologies and defining ontology-based equivalence as a form of intensional/conceptual equivalence. On this basis, and in order to overcome its identified limitations, we contribute a general model-theoretic semantic account, named "ontological semantics". This kind of semantics takes the approach of assigning arbitrary entities as referents of atomic symbols and of linking syntactic constructions with corresponding ontological claims and commitments. In particular, ontological semantics aims to avoid encoding effects in its definition. We therefore argue that this semantic account is well suited for interpreting formalized ontologies and for defining languages for the representation of ontologies. It is further proposed as a foundation for envisioned novel definitions of the intensional equivalence of expressions, in potential deviation from mere formal equivalence under set-theoretic semantics.
The thesis defended is that a particular usage of a formalism and its respective vocabulary should be accompanied by an ontological semantics tailored to that use of the formalism, in parallel to the formal semantics of the language, in order to capture the ontological content of the formal representation for adequate reuse in other formalisms. Accordingly, we advocate ontological semantics as a useful framework for justifying translations on an intensional basis. Despite all deviations of ontological semantics from its set-theoretic blueprint, close relationships between the two can be shown, which allow for using established FOL and DL reasoners while assuming ontological semantics.
Styles APA, Harvard, Vancouver, ISO, etc.
16

Ferrari, Francesco Maria. « Questioni di semantica formale e logica plurale ». Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3426765.

Texte intégral
Résumé :
The present research is a logico-philosophical analysis of the issues concerning the semantics for plural logic, with particular attention to the recent work by A. Oliver and T. Smiley, Plural Logic (OUP). The first chapter introduces the model-theoretic semantics for second-order languages. Three versions are presented: standard, Henkin and multi-sorted; they differ in the definition of the assignment function for the second-order variables. The second chapter analyzes the relationship between model-theoretic semantics and ontology, in particular realism and nominalism. On the one hand, realism relies on so-called referential (or objectual) semantics; on the other hand, nominalism must rely on so-called substitutional semantics for second-order variables, in order to avoid any ontological commitment with respect to such variables. The third chapter introduces plural semantics. W.O. Quine (in 1970) argued that second-order logic is a ‘set theory in sheep’s clothing’ and, so, not a pure logic. Quine’s approach was strongly criticized by G. Boolos in a series of articles in the 70s and 80s of the last century. He proposed a new sort of referential semantics for (monadic) second-order variables and quantifiers, the so-called plural interpretation, based on a (one-many) relation assignment. The fourth chapter presents an outline of the main ideas of the work by Oliver and Smiley. In particular, their system 1) recasts predication in terms of plural predication and 2) attempts to capture plural denotation phenomena. In order to extend the category of terms to the plural case, the Authors propose a theory of definite descriptions that contrasts with the Russellian one. Plural functional terms, obtained by means of the descriptive apparatus, denote so-called multivalued functions, to be added to the usual, now single-valued, functions. The fifth chapter provides an analysis of such functions with respect to mathematics and logic.
Multivalued functions also play a key role in the semantics of plural logic, modeling the assignment function for plural variables. Some semantic consequences of their assumption are also considered. The final chapter concludes the analysis of plural logic. Ø. Linnebo (2003) presented a criterion of logicality; its application shows that there are no compelling reasons not to regard plural logic as a pure logic. The main point against such plural logics is the modal rigidity of the notion of plurality. Such rigidity reveals that the alleged formalization of some typical features of the fragment of natural language related to plural phenomena is not fully adequate in this sort of plural logic.
La presente ricerca consiste in un’analisi logico-filosofica delle questioni inerenti alla semantica per la logica plurale, con particolare attenzione al recente lavoro di A. Oliver e T. Smiley, Plural Logic (OUP). Il primo capitolo introduce la semantica modellistica per i linguaggi del secondo ordine, presentando tre varianti, standard, di Henkin e multi-sorted, le quali si distinguono per la definizione della funzione di assegnazione di valori alle variabili del secondo ordine. Il secondo capitolo analizza le relazioni fra la semantica modellistica e l’ontologia, in particolare il realismo e il nominalismo. Da un lato, il realismo si affida alla semantica c.d. referenziale, dall’altro il nominalismo, che deve evitare il c.d. impegno ontologico delle variabili del secondo ordine pur permettendone l’uso linguistico, si deve affidare alla semantica c.d. sostituzionale rispetto a tali variabili. Il terzo capitolo, introduce alle questioni inerenti alla semantica plurale. W.O. Quine (nel 1970) sostenne che la logica del secondo ordine non è una logica pura. Tale approccio fu intensamente criticato da G. Boolos, in una serie di articoli negli anni ’70-’80 del secolo scorso in cui Boolos giunse a proporre un nuovo tipo di semantica referenziale per le variabili (monadiche) del secondo ordine quantificate, la c.d. interpretazione plurale basata su una relazione (uno-molti) di assegnazione di valori a tali variabili. Il quarto capitolo presenta una sintesi delle maggiori idee su cui si basa il sistema di logica plurale di Oliver e Smiley. Tale logica plurale riformula la predicazione nei termini della predicazione plurale. La preoccupazione principale degli Autori è quella di catturare il fenomeno della denotazione plurale. A tal fine presentano una teoria delle descrizioni definite che contrasta con quella russelliana. I termini funzionali plurali, ottenuti mediante tale apparato descrittivo, denotano le c.d. funzioni polidrome. 
Nel quinto capitolo si fornisce una analisi estensionale di tali funzioni. Le funzioni polidrome sono fondamentali anche nella semantica della logica plurale, in quanto codificano la funzione di assegnazione plurale. Alcune conseguenze semantiche dovute alla loro assunzione sono così evidenziate. Il capitolo conclusivo, il sesto, termina l’analisi della logica plurale. Ø. Linnebo (2003) presentò un criterio di logicità, dalla cui applicazione emerge che non ci sono ragioni stringenti per non considerare la logica plurale una pura logica. Una critica, però, su tutte alla logica plurale: la rigidità della nozione di pluralità. Emerge, così, che alcuni tratti del linguaggio naturale legati a fenomeni plurali (quantificazione e denotazione) non sono catturati adeguatamente nei sistemi di logica plurale come quello di Oliver e Smiley.
Styles APA, Harvard, Vancouver, ISO, etc.
17

Monnin, Pierre. « Matching and mining in knowledge graphs of the Web of data : Applications in pharmacogenomics ». Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0212.

Texte intégral
Résumé :
Dans le Web des données, des graphes de connaissances de plus en plus nombreux sont simultanément publiés, édités, et utilisés par des agents humains et logiciels. Cette large adoption rend essentielles les tâches d'appariement et de fouille. L'appariement identifie des unités de connaissances équivalentes, plus spécifiques ou similaires au sein et entre graphes de connaissances. Cette tâche est cruciale car la publication et l'édition parallèles peuvent mener à des graphes de connaissances co-existants et complémentaires. Cependant, l'hétérogénéité inhérente aux graphes de connaissances (e.g., granularité, vocabulaires, ou complétude) rend cette tâche difficile. Motivés par une application en pharmacogénomique, nous proposons deux approches pour apparier des relations n-aires représentées au sein de graphes de connaissances : une méthode symbolique à base de règles et une méthode numérique basée sur le plongement de graphe. Nous les expérimentons sur PGxLOD, un graphe de connaissances que nous avons construit de manière semi-automatique en intégrant des relations pharmacogénomiques de trois sources du domaine. La tâche de fouille permet quant à elle de découvrir de nouvelles unités de connaissances à partir des graphes de connaissances. Leur taille croissante et leur nature combinatoire entraînent des problèmes de passage à l'échelle que nous étudions dans le cadre de la fouille de patrons de chemins. Nous proposons également l'annotation de concepts, une méthode d'amélioration des graphes de connaissances qui étend l'Analyse Formelle de Concepts, un cadre mathématique groupant des entités en fonction de leurs attributs communs. Au cours de tous nos travaux, nous nous sommes particulièrement intéressés à tirer parti des connaissances de domaines formalisées au sein d'ontologies qui peuvent être associées aux graphes de connaissances. 
Nous montrons notamment que, lorsqu'elles sont prises en compte, ces connaissances permettent de réduire l'impact des problèmes d'hétérogénéité et de passage à l'échelle dans les tâches d'appariement et de fouille.
In the Web of data, an increasing number of knowledge graphs are concurrently published, edited, and accessed by human and software agents. Their wide adoption makes key the two tasks of matching and mining. First, matching consists in identifying equivalent, more specific, or somewhat similar units within and across knowledge graphs. This task is crucial since concurrent publication and edition may result in coexisting and complementary knowledge graphs. However, this task is challenging because of the inherent heterogeneity of knowledge graphs, e.g., in terms of granularities, vocabularies, and completeness. Motivated by an application in pharmacogenomics, we propose two approaches to match n-ary relationships represented in knowledge graphs: a symbolic rule-based approach and a numeric approach using graph embedding. We experiment on PGxLOD, a knowledge graph that we semi-automatically built by integrating pharmacogenomic relationships from three distinct sources of this domain. Second, mining consists in discovering new and useful knowledge units from knowledge graphs. Their increasing size and combinatorial nature entail scalability issues, which we address in the mining of path patterns. We also propose Concept Annotation, a refinement approach extending Formal Concept Analysis, a mathematical framework that groups entities based on their common attributes. Throughout all our works, we particularly focus on taking advantage of domain knowledge in the form of ontologies that can be associated with knowledge graphs. We show that, when considered, such domain knowledge alleviates heterogeneity and scalability issues in matching and mining approaches.
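The grouping idea behind Formal Concept Analysis, which the Concept Annotation approach extends, can be sketched in a few lines. The toy context below is invented for illustration (it is not PGxLOD data): a formal concept pairs a set of objects (its extent) with the set of attributes those objects share (its intent).

```python
from itertools import combinations

# Toy formal context, invented for illustration: objects mapped to the
# attributes they possess.
CONTEXT = {
    "aspirin": {"drug", "analgesic"},
    "ibuprofen": {"drug", "analgesic", "anti-inflammatory"},
    "CYP2D6": {"gene"},
}

def intent(objs):
    """Attributes shared by all the given objects."""
    sets = [CONTEXT[o] for o in objs]
    return set.intersection(*sets) if sets else set()

def extent(attrs):
    """Objects possessing all the given attributes."""
    return {o for o, a in CONTEXT.items() if set(attrs) <= a}

def concepts():
    """Enumerate formal concepts as closed (extent, intent) pairs."""
    found = set()
    for r in range(len(CONTEXT) + 1):
        for objs in combinations(sorted(CONTEXT), r):
            i = frozenset(intent(objs))
            e = frozenset(extent(i))  # closing the intent yields the extent
            found.add((e, i))
    return found
```

Here {aspirin, ibuprofen} with {drug, analgesic} forms one concept; the brute-force enumeration is exponential, which is exactly the scalability concern the abstract raises for large knowledge graphs.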
Styles APA, Harvard, Vancouver, ISO, etc.
18

Dias, Luiz Gustavo. « Análise formal no gerenciamento de competências : o emprego de ontologias e lógica de descrição ». Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/8182.

Texte intégral
Résumé :
With the need to manage knowledge arising from corporate activities, competency management has proven increasingly effective in the process of defining organizational strategies. Accordingly, the present research used formal methods to verify whether the functions attached to positions, and the competencies of the employees who occupy them, can interfere in the functioning of organizations. Along these lines, an ontology was developed using Methodology 101, with the educational sector as its domain, and served as an information base for the execution of queries formulated in description logic, in order to find possible inconsistencies as well as solutions. During the case study, inconsistencies were found relating to qualification, training, staff allocation, and the compatibility of information from different sources representing the same domain. As a result, the research enabled the creation of correct knowledge that can be used by managers at different hierarchical levels, helping to improve processes and decision making.
Com a necessidade de gerenciar o conhecimento advindo das atividades corporativas, a gerência de competências se mostra cada vez mais efetiva no processo de definição de estratégias Organizacionais. Desta forma a presente pesquisa objetivou utilizar métodos formais para verificar se funções advindas de cargos, e competências advindas de colaboradores que ocupam os mesmos, podem interferir no funcionamento de organizações. Tendo em vista essa necessidade, foi produzida nesta pesquisa uma ontologia utilizando a metodologia 101, que teve como domínio o setor educacional, e serviu de base de informações para a execução de consultas elaboradas a partir da lógica de descrição, a fim de encontrar possíveis inconsistências bem como soluções. Durante a realização do estudo de caso, foram encontradas inconsistências relacionadas a qualificação, capacitação, lotação e compatibilidade de informações advindas de fontes diferentes que representavam o mesmo domínio. Como resultado a pesquisa possibilitou a criação de conhecimento correto e que pode ser empregado por gestores, em diferentes níveis hierárquicos, auxiliando na melhoria de processos e tomadas de decisões.
Styles APA, Harvard, Vancouver, ISO, etc.
19

Trapp, Rogério Vaz. « Antropologia e semântica formal : fenomenologia e linguagem ». Pontifícia Universidade Católica do Rio Grande do Sul, 2011. http://hdl.handle.net/10923/3413.

Texte intégral
Résumé :
The aim of this thesis is to demonstrate that formal semantics, as a field of articulation between logic and ontology, requires an Anthropology as its grounding. For this it will be necessary to demonstrate that the distinction between relative and absolute grounding leads formal semantics to the same mode of grounding as Heidegger’s phenomenology. This means that, instead of just providing analytical control of Heidegger’s phenomenological method, formal semantics also reverses the relation of implication between the two methods, in such a way that its own analytical method ends up supplemented by the phenomenological method. What the philosopher Ernst Tugendhat would not have noticed, then, is that the distinction between relative and absolute grounding (between semantic-ontological and phenomenological grounding) introduces, at the core of his own conception of philosophy, the Heideggerian distinction between ontic and ontological grounds, that is, the ontological difference. Thus, to demonstrate this thesis, we take the circularity between the grounding in attunements and the grounding in the formal-semantic rules for assertoric sentences as the field of articulation of logic and ontology with Anthropology. For this, we need to take the set of rules drawn up by Tugendhat for the verification of assertoric statements and demonstrate that, just as a stable relation between a subject and an object in space or time allows the construction of a system of references from which objectuality can be established, so a stable behavioral relation between a subject and a system of objective rules allows the emergence of the system of practical-behavioral references: consciousness.
O objetivo do texto consiste em demonstrar que a Semântica formal, enquanto campo de articulação entre Lógica e Ontologia, exige sua fundamentação em uma Antropologia. Para isto será necessário demonstrar que a distinção entre fundamento relativo e absoluto conduz a Semântica formal ao modo de fundamentação da fenomenologia de Heidegger. Isto significa que, ao invés de apenas fornecer controle analítico ao método fenomenológico de Heidegger, a Semântica formal também inverte a relação de implicação entre ambos os métodos, de tal modo que o próprio método analítico é que acaba suplementado pelo método fenomenológico. Portanto, o que Tugendhat não teria percebido é que a diferenciação entre fundamento relativo e absoluto, entre fundamento semântico-ontológico e fundamento fenomenológico, introduz no cerne de sua filosofia a distinção heideggeriana entre fundamento ôntico e ontológico, isto é, a diferença ontológica. Assim, para demonstrar nossa tese, deveremos tomar a circularidade entre o fundamento nos estados-de-ânimo e o fundamento nas regras semântico-formais para sentenças assertóricos como campo de articulação entre Lógica e Ontologia com a Antropologia. Para isto, precisaremos tomar o conjunto de regras elaboradas por Tugendhat para a verificação de enunciados assertóricos e demonstrar que, tal como a relação estável entre um sujeito e um objeto no espaço ou no tempo permite a construção de um sistema de referências a partir do qual a objetualidade pode ser estabelecida, assim também uma relação comportamental estável entre o sujeito e um sistema de regras objetivas permite o surgimento do sistema de referências práticocomportamental – a consciência.
Styles APA, Harvard, Vancouver, ISO, etc.
20

Szejka, Anderson Luis. « Contribution to interoperable products design and manufacturing information : application to plastic injection products manufacturing ». Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0159/document.

Texte intégral
Résumé :
La compétitivité toujours plus importante et la mondialisation ont mis l'industrie manufacturière au défi de rationaliser les différentes façons de mettre sur le marché de nouveaux produits dans un délai court, avec des prix compétitifs tout en assurant des niveaux de qualité élevés. Le PDP moderne exige simultanément la collaboration de plusieurs groupes de travail qui assurent la création et l’échange d’information avec des points de vue multiples dans et à travers les frontières institutionnelles. Dans ce contexte, des problèmes d’interopérabilité sémantique ont été identifiés en raison de l'hétérogénéité des informations liées à des points de vue différents et leurs relations pour le développement de produits. Le travail présenté dans ce mémoire propose un cadre conceptuel d’interopération pour la conception et la fabrication de produits. Ce cadre est basé sur un ensemble d’ontologies clés, de base d’ingénierie et sur des approches de cartographie sémantique. Le cadre soutient les mécanismes qui permettent la conciliation sémantique en termes de partage, conversion et traduction, tout en améliorant la capacité de partage des connaissances entre les domaines hétérogènes qui doivent interopérer. La recherche a particulièrement porté sur la conception et la fabrication de produits tournants en plastique et explore les points particuliers de la malléabilité - la conception et la fabrication de moules. Un système expérimental a été proposé à l’aide de l'outil Protégé pour modéliser des ontologies de base et d’une plateforme Java intégrée à Jena pour développer l'interface avec l'utilisateur. Le concept et la mise en œuvre de cette recherche ont été testés par des expériences en utilisant des produits tournants en plastiques. Les résultats ont montré que l'information et ses relations rigoureusement définies peuvent assurer l'efficacité de la conception et la fabrication du produit dans un processus de développement de produits moderne et collaboratif
Global competitiveness has challenged manufacturing industry to rationalise different ways of bringing new products to market in a short lead time, at competitive prices, while ensuring higher quality levels. The modern PDP requires the simultaneous collaboration of multiple groups, producing and exchanging information from multiple perspectives within and across institutional boundaries. However, semantic interoperability issues have been identified, owing to the heterogeneity of information from these multiple perspectives and their relationships across product development. This research proposes a conceptual framework for interoperable product design and manufacturing based on a set of core ontological foundations and semantic mapping approaches. The framework has been instantiated in particular for the design and manufacturing of plastic injection-moulded rotational products, exploring the specific viewpoints of mouldability, mould design and manufacturing. The research approach examined particular information structures to support design and manufacture applications; the relationships between these structures were then investigated, and semantic reconciliation was designed through mechanisms to convert, share and translate information across the multiple perspectives. An experimental system was built using the Protégé tool to model the core ontologies and a Java platform integrated with Jena to develop the user interface. The conceptual framework proposed in this research was tested through experiments using rotational plastic products. The research has thus shown that rigorously defined information and well-defined relationships can ensure the effectiveness of product design and manufacturing in a modern, collaborative PDP.
Styles APA, Harvard, Vancouver, ISO, etc.
21

Kohne, Jens. « Drei Betrachtungen zum Problem der Eigenschaften dargestellt anhand der Lehren von H. H. Price, G. F. Stout und N. Kemp Smith / ». [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10806357.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
22

Machado, Alexandre Lopes. « Modelo conceitual formal de relacionamentos do ordenamento jurídico positivo ». Instituto Tecnológico de Aeronáutica, 2013. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=2867.

Texte intégral
Résumé :
Legal information is inherently characterized by relationships. As a general rule, any document in this domain is always embedded in a context, as part of the legal order. A legal order can be defined as a set of norms, so the concepts of one legal norm are important for understanding another. This characteristic of the legal order, however, makes a norm hard to understand, since fully understanding it requires knowledge of the legal dependencies among the norms that govern a given subject, and the approaches found in the literature are not sufficient to make this knowledge explicit. Given this difficulty, this research highlights the importance of modelling the relationships between legal norms in a formal conceptual model, aiming to capture the legal dependencies among norms in a clear, concise and unambiguous way. The role of conceptual modelling is to establish a shared understanding of reality among the human beings who will use the resulting knowledge environment, leading to solutions focused more on people and less on machines. In this context, this work proposes a formal conceptual model, based on ontologies, to systematize and make explicit legal knowledge, with emphasis on the relationships present in the positive legal order, treating the subject matter of a norm textually and thereby enabling a shared understanding of concepts and relationships that benefits semantic interoperability. Given the complexity of the domain, we adopted reference (heavyweight) ontologies to capture the concepts and relationships of the legal domain. To instantiate these ontologies, we implemented several lightweight ontologies with different levels of expressiveness to meet computational requirements.
To evaluate the proposed model, we applied it to several scenarios. We then developed a software prototype implementing the proposed model. Finally, this prototype was instantiated with a subset of norms from the Brazilian legal order. The results obtained indicate that complex relationships, implicit in the norms, could be made explicit.
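The kind of explicit dependency network between norms described above can be pictured with a minimal sketch: norms and their relations stored as triples, with a transitive query that surfaces the implicit dependencies. The norm identifiers and relation names below are hypothetical, not taken from the thesis.

```python
# Relationships between legal norms as (source, relation, target) triples.
# Norm names and relation labels are invented for illustration.
TRIPLES = {
    ("Norm A", "amends", "Norm B"),
    ("Norm B", "regulates", "Norm C"),
    ("Norm D", "revokes", "Norm A"),
}

def dependencies(norm, triples):
    """All norms transitively reachable from `norm` via any relation."""
    seen, stack = set(), [norm]
    while stack:
        current = stack.pop()
        for source, _, target in triples:
            if source == current and target not in seen:
                seen.add(target)
                stack.append(target)
    return seen
```

A query such as `dependencies("Norm A", TRIPLES)` returns `{"Norm B", "Norm C"}`: to fully understand Norm A, a reader also needs Norms B and C, which is precisely the kind of implicit dependency the proposed model aims to make explicit.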
Styles APA, Harvard, Vancouver, ISO, etc.
23

Alan, Yilmaz. « Integrative Geschäftsprozessmodellierung : ein Ansatz auf der Basis von Ontologien und Petri-Netzen / ». Saarbrücken : VDM, Verl. Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2968232&prov=M&dok_var=1&dok_ext=htm.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
24

Torres, Carlos Eduardo Atencio. « Uso de informação linguística e análise de conceitos formais no aprendizado de ontologias ». Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-11022013-152711/.

Texte intégral
Résumé :
Na atualidade, o interesse pelo uso de ontologias tem sido incrementado. No entanto, o processo de construção pode ser custoso em termos de tempo. Para uma ontologia ser construída, precisa-se de um especialista com conhecimentos de um editor de ontologias. Com a finalidade de reduzir tal processo de construção pelo especialista, analisamos e propomos um método para realizar aprendizado de ontologias (AO) de forma supervisionada. O presente trabalho consiste em uma abordagem combinada de diferentes técnicas no AO. Primeiro, usamos uma técnica estatística chamada C/NC-values, acompanhada da ferramenta Cogroo, para extrair os termos mais representativos do texto. Esses termos são considerados por sua vez como conceitos. Projetamos também uma gramática de restrições (GR), com base na informação linguística do Português, com o objetivo de reconhecer e estabelecer relações entre conceitos. Para poder enriquecer a informação na ontologia, usamos a análise de conceitos formais (ACF) com o objetivo de identificar possíveis superconceitos entre dois conceitos. Finalmente, extraímos ontologias para os textos de três temas, submetendo-as à avaliação dos especialistas na área. Um web site foi feito para tornar o processo de avaliação mais amigável para os avaliadores e usamos o questionário de marcos de características proposto pelo método OntoMetrics. Os resultados mostram que nosso método provê um ponto de partida aceitável para a construção de ontologias.
Nowadays, interest in the use of ontologies has increased; nevertheless, the process of ontology construction can be very time-consuming. To build an ontology we need a domain expert familiar with an ontology editor. In order to reduce the time required from the expert, we propose and analyse a supervised ontology learning (OL) method. The present work combines different techniques in OL. First, we use a statistical technique called C/NC-value, with the help of the Cogroo tool, to extract the most significant terms; these terms are then treated as concepts. We also design a constraint grammar (CG), based on linguistic information about Portuguese, to recognize relations between concepts. To enrich the ontology, we use formal concept analysis (FCA) to discover a parent for a set of concepts. To evaluate the method, we extracted ontologies from texts in three different domains and submitted them to the corresponding experts. A web site was built to make the evaluation process friendlier for the experts, and we used an evaluation framework proposed in the OntoMetrics method. The results show that our method provides an acceptable starting point for the construction of ontologies.
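The C-value measure mentioned in this abstract scores a candidate term by its corpus frequency, discounted by the frequencies of the longer candidates that contain it, so that nested fragments of established terms are demoted. A rough sketch follows; it uses the common `log2(length + 1)` variant so single-word terms are not zeroed out, and the candidate data are invented:

```python
import math

def c_value(candidates):
    """C-value termhood scores. `candidates` maps a term, given as a
    tuple of words, to its corpus frequency."""
    scores = {}
    for term, freq in candidates.items():
        # Longer candidates that contain `term` as a contiguous subsequence.
        longer = [b for b in candidates if len(b) > len(term)
                  and any(b[i:i + len(term)] == term
                          for i in range(len(b) - len(term) + 1))]
        weight = math.log2(len(term) + 1)
        if longer:
            # Discount by the mean frequency of the containing candidates.
            scores[term] = weight * (freq - sum(candidates[b] for b in longer)
                                     / len(longer))
        else:
            scores[term] = weight * freq
    return scores
```

For example, with `{("formal", "concept", "analysis"): 4, ("concept", "analysis"): 6, ("analysis",): 10}`, the un-nested three-word term scores `log2(4) * 4 = 8.0`, while the nested `("analysis",)` is discounted to `log2(2) * (10 - 5) = 5.0`.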
Styles APA, Harvard, Vancouver, ISO, etc.
25

Araújo, Lauro César. « Uma linguagem para formalização de discursos com base em ontologias ». reponame:Repositório Institucional da UnB, 2015. http://dx.doi.org/10.26512/2015.11.T.19319.

Texte intégral
Résumé :
Tese (doutorado)—Universidade de Brasília, Faculdade de Ciência da Informação, Programa de Pós-Graduação em Ciência da Informação, 2015.
Esta pesquisa propõe a arquitetura da informação de uma linguagem formal textual para representar discursos sobre entidades ontológicas e obter deduções a respeito de ontologias de domínio. Por meio do paradigma de metamodelagem, a linguagem permite tratamento de ontologias heterogêneas que podem ser descritas como instâncias de uma ou mais ontologias de fundamentação. A linguagem suporta comportamentos clássicos e modais sustentados por noções de prova baseadas no paradigma de Programação em Lógica (Modal). O arcabouço modal desenvolvido possibilita que diferentes interpretações modais sejam introduzidas às especificações das ontologias, e contempla especialmente sistemas baseados em lógicas de múltiplos agentes. Uma sistematização do fragmento endurante da Unified Foundational Ontology (UFO) é realizada com objetivo de compor parte do marco teórico que fundamenta a proposta e de servir de exemplo de instanciação do arcabouço desenvolvido. Como resultados complementares, destacam-se: uma sistematização de um conjunto ampliado de regras para produção de modelos conceituais e um glossário detalhado de termos e conceitos da UFO-A; protótipos funcionais que implementam os sistemas elaborados; traduções das teorias descritas no arcabouço proposto para linguagens visuais, como extensões da representação gráfica da OntoUML; e discussões a respeito da integração de Arquitetura da Informação, Modelagem Conceitual e Programação em Lógica (Modal) no contexto social aplicado.
This research proposes the information architecture of a textual formal language to represent and reason about ontological entities based on foundational ontologies. Through metamodeling, the language can deal with heterogeneous ontologies that can be described as instances of one or more foundational ontologies. The language provides classical and modal inference mechanisms supported by proof notions based on the (Modal) Logic Programming paradigm. The modalities introduced by the modal framework allow a wide range of interpretations, including multi-agent systems. A systematization of the endurant fragment of the Unified Foundational Ontology (UFO) is produced in order to compose part of the theoretical framework underlying the proposal and to serve as an example instantiating the developed framework. As complementary results we highlight: a systematization of an extended set of rules for conceptual modeling and a detailed glossary of terms and concepts of UFO-A; functional prototypes implementing the developed systems; translations of the theories described as instances of the framework into diagrammatic representations, as extensions of the OntoUML visual language; and discussions regarding the integration of Information Architecture, Conceptual Modeling and (Modal) Logic Programming within applied social science.
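The multi-agent modal machinery this abstract alludes to can be pictured with a minimal Kripke-style evaluator: worlds, per-agent accessibility relations, and box/diamond operators quantifying over accessible worlds. This is a generic sketch of modal evaluation, not the thesis's actual proof system; the formula encoding and world names are invented.

```python
# Formulas as nested tuples:
#   ("atom", p)          -- proposition p holds at the world
#   ("not", f), ("and", f, g)
#   ("box", agent, f)    -- f holds in ALL worlds the agent can access
#   ("dia", agent, f)    -- f holds in SOME world the agent can access
def holds(world, formula, valuation, access):
    kind = formula[0]
    if kind == "atom":
        return formula[1] in valuation[world]
    if kind == "not":
        return not holds(world, formula[1], valuation, access)
    if kind == "and":
        return (holds(world, formula[1], valuation, access)
                and holds(world, formula[2], valuation, access))
    if kind == "box":
        return all(holds(w, formula[2], valuation, access)
                   for w in access[formula[1]].get(world, ()))
    if kind == "dia":
        return any(holds(w, formula[2], valuation, access)
                   for w in access[formula[1]].get(world, ()))
    raise ValueError(f"unknown connective: {kind}")
```

With worlds `w0..w2`, valuation `{"w0": {"p"}, "w1": {"p"}, "w2": set()}` and agent `a` accessing `w1` and `w2` from `w0`, the diamond `("dia", "a", ("atom", "p"))` holds at `w0` while the box `("box", "a", ("atom", "p"))` does not, since `w2` lacks `p`. Reading `box` per agent as "the agent knows" gives the multi-agent epistemic interpretation mentioned above.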
Styles APA, Harvard, Vancouver, ISO, etc.
26

Trapp, Rogério Vaz. « Antropologia e semântica formal : fenomenologia e linguagem ». Pontifícia Universidade Católica do Rio Grande do Sul, 2011. http://tede2.pucrs.br/tede2/handle/tede/2873.

Texte intégral
Résumé :
Previous issue date: 2011-08-30
The aim of this text is to demonstrate that formal semantics, as a field of articulation between logic and ontology, requires grounding in an anthropology. To do so, it will be necessary to show that the distinction between relative and absolute grounding leads formal semantics to the mode of grounding of Heidegger's phenomenology. This means that, rather than merely providing analytical control over Heidegger's phenomenological method, formal semantics also reverses the relation of implication between the two methods, so that its own analytical method ends up being supplemented by the phenomenological one. What Tugendhat would not have noticed, therefore, is that the distinction between relative and absolute grounding, between semantic-ontological and phenomenological grounding, introduces into the core of his philosophy the Heideggerian distinction between ontic and ontological grounds, that is, the ontological difference. Thus, to demonstrate our thesis, we take the circularity between the grounding in attunements and the grounding in the formal-semantic rules for assertoric sentences as the field that articulates logic and ontology with anthropology. For this, we take the set of rules drawn up by Tugendhat for the verification of assertoric statements and show that, just as a stable relation between a subject and an object in space or time allows the construction of a system of references from which objectuality can be established, so too a stable behavioural relation between a subject and a system of objective rules allows the emergence of the practical-behavioural system of references: consciousness.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Nieri, Ederaldo Luiz [UNESP]. « Duas formas da recepção das idéias de Lukács no Brasil : estética e ontologia ». Universidade Estadual Paulista (UNESP), 2007. http://hdl.handle.net/11449/88524.

Texte intégral
Résumé :
Previous issue date: 2007-02-27
Este trabalho se propôs a abordar dois momentos da receptividade das idéias de Lukács no Brasil: a das idéias filosófico-estéticas; a das idéias filosófico-ontológicas. Demonstrou-se que o significativo consiste no caráter ídeo-político que se conferiu a ambos momentos – configurando uma unidade de continuidade-descontinuidade. No decurso dos anos 1960, jovens comunistas inspiraram-se nas idéias estéticas do filósofo para a elaboração de um projeto de política cultural como um momento de uma “renovação” política (do PCB). Neste contexto, se enfatizou dois pontos: que a política cultural de extração lukacsiana é incompatível com a tradição cultural do partido, que, no campo específico da arte, além de determinar-se por categoriais não-imanentes à produção estético-artística, caracterizara-se por elementos estéticos de extração stalinistazhadnovista; e, que em razão de conceber dialeticamente as relações entre as revoluções burguesa e proletária, As Teses de Blum se distinguem das Teses (de extração terceiro-internacionalista stalinizada) estratégico-políticas propugnadas pelo PCB após 1958. Mediados pelas idéias ontológicas de Lukács, no contexto do capitalismo contemporâneo, autores marxistas (Sérgio Lessa, Ricardo Antunes, José Chasin, José Paulo Netto, Ivo Tonet), explicitam, primeiro, a falácia das teses que propugnam a descentralidade do trabalho do mundo humano-social, segundo, que a determinação do trabalho como o fundamento ontológico do ser social une-se à imperiosa necessidade de se emancipar a humanidade dos ditames do capital – neste sentido, conferem à sua adoção destas idéias lukacsianas, ainda que no âmbito das atividades acadêmicas, uma dimensão ídeo-política. Enfatizou-se esta dimensão em três momentos: mediante a explicitação de que o trabalho abstrato fundamenta as sociedades contemporâneas...
This work set out to examine two moments in the reception of Lukács's ideas in Brazil: that of his philosophical-aesthetic ideas and that of his philosophical-ontological ideas. It showed that what is significant is the ideological-political character attributed to both moments, forming a unity of continuity and discontinuity. In the 1960s, young communists were inspired by the philosopher's aesthetic ideas to formulate a cultural-policy project as part of a political "renewal" of the PCB. In this context two points were emphasized: that a cultural policy of Lukácsian extraction is incompatible with the cultural tradition of the party, which, in the specific field of art, besides judging works by categories not immanent to aesthetic-artistic production, was characterized by aesthetic elements of Stalinist-Zhdanovist extraction; and that, because they conceive the relations between the bourgeois and proletarian revolutions dialectically, the Blum Theses are distinct from the political-strategic theses (of Stalinized Third-International extraction) advocated by the PCB after 1958. Drawing on Lukács's ontological ideas in the context of contemporary capitalism, Marxist authors (Sérgio Lessa, Ricardo Antunes, José Chasin, José Paulo Netto, Ivo Tonet) show, first, the fallacy of the theses that advocate the decentring of labour in the human-social world and, second, that determining labour as the ontological foundation of social being is bound up with the pressing need to emancipate humanity from the dictates of capital; in this sense they give their adoption of these Lukácsian ideas, even within academic activities, an ideological-political dimension. This dimension was emphasized at three moments: by showing that abstract labour grounds contemporary societies, confirming Marx's social theory in confronting the consequences of capital's offensive against labour; by defending the thesis of the ontological dependence of the other social complexes on labour... (Complete abstract: access the electronic address below)
Styles APA, Harvard, Vancouver, ISO, etc.
28

Nieri, Ederaldo Luiz. « Duas formas da recepção das idéias de Lukács no Brasil : estética e ontologia / ». Marília : [s.n.], 2007. http://hdl.handle.net/11449/88524.

Texte intégral
Résumé :
Orientador: Marcos Tadeu Del Roio
Banca: Antonio Carlos Mazzeo
Banca: Paulo Douglas Barsotti
Resumo: Este trabalho se propôs a abordar dois momentos da receptividade das idéias de Lukács no Brasil: a das idéias filosófico-estéticas; a das idéias filosófico-ontológicas. Demonstrou-se que o significativo consiste no caráter ídeo-político que se conferiu a ambos momentos - configurando uma unidade de continuidade-descontinuidade. No decurso dos anos 1960, jovens comunistas inspiraram-se nas idéias estéticas do filósofo para a elaboração de um projeto de política cultural como um momento de uma "renovação" política (do PCB). Neste contexto, se enfatizou dois pontos: que a política cultural de extração lukacsiana é incompatível com a tradição cultural do partido, que, no campo específico da arte, além de determinar-se por categoriais não-imanentes à produção estético-artística, caracterizara-se por elementos estéticos de extração stalinistazhadnovista; e, que em razão de conceber dialeticamente as relações entre as revoluções burguesa e proletária, As Teses de Blum se distinguem das Teses (de extração terceiro-internacionalista stalinizada) estratégico-políticas propugnadas pelo PCB após 1958. Mediados pelas idéias ontológicas de Lukács, no contexto do capitalismo contemporâneo, autores marxistas (Sérgio Lessa, Ricardo Antunes, José Chasin, José Paulo Netto, Ivo Tonet), explicitam, primeiro, a falácia das teses que propugnam a descentralidade do trabalho do mundo humano-social, segundo, que a determinação do trabalho como o fundamento ontológico do ser social une-se à imperiosa necessidade de se emancipar a humanidade dos ditames do capital - neste sentido, conferem à sua adoção destas idéias lukacsianas, ainda que no âmbito das atividades acadêmicas, uma dimensão ídeo-política. Enfatizou-se esta dimensão em três momentos: mediante a explicitação de que o trabalho abstrato fundamenta as sociedades contemporâneas...(Resumo completo, clicar acesso eletrônico abaixo)
Abstract: This work set out to examine two moments in the reception of Lukács's ideas in Brazil: that of his philosophical-aesthetic ideas and that of his philosophical-ontological ideas. It showed that what is significant is the ideological-political character attributed to both moments, forming a unity of continuity and discontinuity. In the 1960s, young communists were inspired by the philosopher's aesthetic ideas to formulate a cultural-policy project as part of a political "renewal" of the PCB. In this context two points were emphasized: that a cultural policy of Lukácsian extraction is incompatible with the cultural tradition of the party, which, in the specific field of art, besides judging works by categories not immanent to aesthetic-artistic production, was characterized by aesthetic elements of Stalinist-Zhdanovist extraction; and that, because they conceive the relations between the bourgeois and proletarian revolutions dialectically, the Blum Theses are distinct from the political-strategic theses (of Stalinized Third-International extraction) advocated by the PCB after 1958. Drawing on Lukács's ontological ideas in the context of contemporary capitalism, Marxist authors (Sérgio Lessa, Ricardo Antunes, José Chasin, José Paulo Netto, Ivo Tonet) show, first, the fallacy of the theses that advocate the decentring of labour in the human-social world and, second, that determining labour as the ontological foundation of social being is bound up with the pressing need to emancipate humanity from the dictates of capital; in this sense they give their adoption of these Lukácsian ideas, even within academic activities, an ideological-political dimension. This dimension was emphasized at three moments: by showing that abstract labour grounds contemporary societies, confirming Marx's social theory in confronting the consequences of capital's offensive against labour; by defending the thesis of the ontological dependence of the other social complexes on labour... (Complete abstract: access the electronic address below)
Mestre
Styles APA, Harvard, Vancouver, ISO, etc.
29

Distel, Felix. « Learning Description Logic Knowledge Bases from Data Using Methods from Formal Concept Analysis ». Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-70199.

Texte intégral
Résumé :
Description Logics (DLs) are a class of knowledge representation formalisms that can represent terminological and assertional knowledge using a well-defined semantics. Often, knowledge engineers are experts in their own fields, but not in logics, and require assistance in the process of ontology design. This thesis presents three methods that can extract terminological knowledge from existing data and thereby assist in the design process. They are based on similar formalisms from Formal Concept Analysis (FCA), in particular the Next-Closure Algorithm and Attribute-Exploration. The first of the three methods computes terminological knowledge from the data, without any expert interaction. The two other methods use expert interaction where a human expert can confirm each terminological axiom or refute it by providing a counterexample. These two methods differ only in the way counterexamples are provided.
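The Next-Closure algorithm at the heart of the methods described above enumerates all closed attribute sets (concept intents) of a formal context in lectic order. A compact sketch over a toy context follows; the context itself is invented for illustration:

```python
def all_intents(context):
    """Enumerate every concept intent of `context` (a dict mapping each
    object to its attribute set) with Ganter's Next-Closure algorithm."""
    attrs = sorted({a for row in context.values() for a in row})
    n = len(attrs)
    idx = {a: i for i, a in enumerate(attrs)}
    rows = [{idx[a] for a in row} for row in context.values()]

    def close(s):
        # Attributes shared by every object having all attributes in s;
        # if no object satisfies s, the closure is the full attribute set.
        objs = [r for r in rows if s <= r]
        return set.intersection(*objs) if objs else set(range(n))

    intent = close(set())            # the lectically first intent
    intents = [intent]
    while True:
        a, nxt = set(intent), None
        for i in reversed(range(n)):
            if i in a:
                a.discard(i)         # drop attributes >= i as we descend
            else:
                b = close(a | {i})
                if all(j >= i for j in b - a):
                    nxt = b          # b is the next intent in lectic order
                    break
        if nxt is None:              # full attribute set reached: done
            return [{attrs[i] for i in s} for s in intents]
        intents.append(nxt)
        intent = nxt
```

For the context `{"duck": {"bird", "swims"}, "eagle": {"bird", "flies"}, "shark": {"swims"}}` this yields six intents, from the empty set up to the full set `{"bird", "flies", "swims"}`, each corresponding to one formal concept of the context.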
Styles APA, Harvard, Vancouver, ISO, etc.
30

Jurkevičius, Darius. « Formalių konceptų naudojimo informacinėms sistemoms kurti tyrimas ». Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120126_145845-07986.

Texte intégral
Résumé :
Šiuolaikinių informacinių sistemų kūrimas darosi vis sudėtingesnis ir daugiau išteklių reikalaujantis procesas, nes kuriamoms informacinėms sistemoms keliami vis didesni reikalavimai. Šiame darbe pristatomas ontologijos kūrimo būdas formalių konceptų pagrindu. Ontologijos leidžia saugoti žinias apie dalykinę sritį. Kaip žinoma, ontologijų kūrimas yra sudėtingas procesas, reikalaujantis daug pastangų bei ekspertinių žinių. Dauguma šiuolaikinių informacinių sistemų yra pradedamos kurti iš naujo, nepasinaudojus turimomis žiniomis. Veltui gaištamas laikas, susiduriama su tomis pačiomis problemomis, daromos klaidos. Ontologijų panaudojimas leidžia pasinaudoti jau turimomis žiniomis kuriant naujas informacines sistemas. Disertacijoje siūlomas ontologijų sudarymo metodas naudojant formalius konceptus, kurie praplėsti taisyklėmis. Analitinėje darbo dalyje pristatomos ontologijos ir formalių konceptų sąvokos ir reikšmė šiuolaikinėse informacinėse sistemose. Pateikiamas egzistuojančių formalių konceptų panaudojimo būdų bei ontologijų kūrimo metodų tyrimas. Metodui įvertinti siūloma taikyti naudos vertinimo metodą. Paskutiniame disertacijos skyriuje aprašomi eksperimentai, kuriuose dalykinių sričių ontologijos yra kuriamos naudojant formalius konceptus. Šiems eksperimentams įgyvendinti buvo sukurti programiniai įrankiai, kurie praplėtė šiuolaikinius ontologijų kūrimo metodus. Pabaigoje pateikiami eksperimento rezultatai ir išvados.
Knowledge is widely used in the development of modern information systems, and ontologies are one way to represent it. They make it possible to shorten information system development time, reduce costs, and re-use knowledge. The objective of the thesis is to propose a method that partially simplifies and automates the ontology development process. Typically, an ontology development process consists of four phases: collection of terms, analysis of terms, specification and representation. The first stage is to capture the entire domain, identifying terms together with their mutual relations and definitions. During the analysis phase, the collected terms are analysed: different terms describing the same objects or phenomena are identified. The subsequent steps are performed with a chosen ontology development tool, which determines the representation language; enterprise specialists can, for example, compose an ontology using such tools. An ontology development process of this kind is rather slow and requires scrupulous work, and human involvement at every step strongly affects the outcome: different people cannot create identical ontologies even when modelling the same subject area. We believe that the situation can be improved by the qualitative leap which would enable the acceleration of the... [to full text]
Styles APA, Harvard, Vancouver, ISO, etc.
31

Pari, Andrea. « Modellazione e realizzazione di un'ontologia formale per la rappresentazione di informazioni relative ai beni culturali nel Web Semantico ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14237/.

Texte intégral
Résumé :
The main objective of this work is to provide a thorough description of the construction of an ontology for representing information about cultural heritage in the Semantic Web, the extension of the Web that has been gaining ground in recent years as the standard model for attaching meaning to the information contained in documents on the network. The work stems essentially from the need to define an ontological model for the computerized representation of data on cultural heritage assets, in contrast with the general modes of representation on today's Web, which are extremely limiting and ill-suited to a context that should instead foster the worldwide diffusion of knowledge. The application domain from which the whole work develops is the way cultural data are currently catalogued. In Italy, the body responsible for cataloguing is the Istituto Centrale per il Catalogo e la Documentazione (ICCD), which over the years has defined specific normative standards for describing information about the different types of assets. Although the work refers to cataloguing standards used only at the national level, the proposed ontology nevertheless constitutes a model broadly valid for reuse at the international level as well. The description of the work in the present text is structured according to the three phases that characterized the construction of the ontology: a general analysis of the reference application domain; the definition of the ontological concepts, rigorously aligned with the constraints established by the ICCD standards, using conceptual models and schemas already published in the Semantic Web; and the construction of the ontology according to the concepts defined in the previous phase.
Styles APA, Harvard, Vancouver, ISO, etc.
32

Kherroubi, Souad. « Un cadre formel pour l'intégration de connaissances du domaine dans la conception des systèmes : application au formalisme Event-B ». Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0230/document.

Texte intégral
Résumé :
Cette thèse vise à définir des techniques pour mieux exploiter les connaissances du domaine dans l’objectif de rendre compte de la réalité de systèmes qualifiés de complexes et critiques. La modélisation est une étape indispensable pour effectuer des vérifications et exprimer des propriétés qu’un système doit satisfaire. La modélisation est une représentation simplificatrice, mais réductionniste de la réalité d’un système. Or, un système complexe ne peut se réduire à un modèle. Un modèle doit s’intégrer dans sa théorie observationnelle pour rendre compte des anomalies qu’il peut y contenir. Notre étude montre clairement que le contexte est la première problématique à traiter car principale source de conflits dans le processus de conception d’un système. L’approche retenue dans cette thèse est celle d’intégrer des connaissances du domaine en associant le système à concevoir à des formalismes déclaratifs qualifiés de descriptifs appelés ontologies. Notre attention est portée au formalisme Event-B dont l’approche correct-par-construction appelée raffinement est le principal mécanisme dans ce formalisme qui permet de faire des preuves sur des représentations abstraites de systèmes pour exprimer/vérifier des propriétés de sûreté et d’invariance. Le premier problème traité concerne la représentation et la modélisation des connaissances du contexte en V&V de modèles. Suite à l’étude des sources de conflits, nous avons établi de nouvelles règles pour une extraction de connaissances liées au contexte par raffinement pour la V&V. Une étude des formalismes de représentation et d’interprétation logiques du contexte a permis de définir un nouveau mécanisme pour mieux structurer les modèles Event-B. Une deuxième étude concerne l’apport des connaissances du domaine pour la V&V. 
Nous définissons une logique pour le formalisme Event-B avec contraintes du domaine fondées sur les logiques de description, établissons des règles à exploiter pour l’intégration de ces connaissances à des fins de V&V. L’évaluation des propositions faites portent sur des études de cas très complexes telles que les systèmes de vote dont des patrons de conception sont aussi développés dans cette thèse. Nous soulevons des problématiques fondamentales sur la complémentarité que peut avoir l’intégration par raffinement des connaissances du domaine à des modèles en exploitant les raisonnements ontologiques, proposons de définir de nouvelles structures pour une extraction partiellement automatisée
This thesis aims at defining techniques to better exploit domain knowledge in order to account for the reality of systems described as complex and critical. Modeling is an essential step in performing verifications and expressing the properties that a system must satisfy according to the needs and requirements established in the specifications. Modeling is a representation that simplifies the reality of a system; however, a complex system cannot be reduced to a model. A model that represents a system must always fit into its observational theory to account for any anomalies it may contain. Our study clearly shows that context is the first issue to deal with, as it is the main source of conflict in the design process of a system. The approach adopted in this thesis integrates domain knowledge by associating the system to be designed with declarative, descriptive formalisms called ontologies. We pay particular attention to the Event-B formalism, whose correct-by-construction refinement approach, the main mechanism at the heart of the formalism, makes it possible to carry out proofs on abstract representations of systems in order to express and verify safety and invariance properties. The first problem addressed is the representation and modeling of contextual knowledge in model V&V. Following a study of the different sources of conflict, we established new definitions and rules for refinement-based extraction of contextual knowledge for Event-B V&V. A study of logical formalisms that represent and interpret context allowed us to define a new mechanism for better structuring Event-B models. A second study concerns the contribution that domain knowledge can make to model V&V. We define a logic for the Event-B formalism with domain constraints based on description logics, and we define rules to integrate domain knowledge for model V&V.
The proposals are evaluated on very complex case studies, such as voting systems, for which design patterns are also developed in this thesis. We raise fundamental issues about the complementarity that the integration of domain knowledge by refinement, using ontological reasoning, can bring to Event-B models, and we propose new structures for a partially automated extraction at both levels, namely those of V&V.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Marques, José Oscar de Almeida 1949. « Forma e representação no Tractactus de Wittgenstein ». [s.n.], 1998. http://repositorio.unicamp.br/jspui/handle/REPOSIP/280745.

Texte intégral
Résumé :
Advisor: Michael Wrigley
Doctoral thesis, Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
Abstract: Not informed.
Doctorate
Doctor of Philosophy
Styles APA, Harvard, Vancouver, ISO, etc.
34

Sertkaya, Baris. « Formal Concept Analysis Methods for Description Logics ». Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1215598189927-85390.

Texte intégral
Résumé :
This work presents mainly two contributions to Description Logics (DLs) research by means of Formal Concept Analysis (FCA) methods: supporting bottom-up construction of DL knowledge bases, and completing DL knowledge bases. Its contribution to FCA research is on the computational complexity of computing generators of closed sets.
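Both of the thesis's contributions rest on the basic machinery of Formal Concept Analysis: a formal context (objects described by attributes) from which all formal concepts, i.e. closed (extent, intent) pairs, are derived. The sketch below illustrates that core operation in plain Python; the context data is invented for illustration and is not taken from the thesis.

```python
from itertools import combinations

# Illustrative formal context: objects mapped to their attribute sets.
context = {
    "dove":    {"flies", "feathered"},
    "penguin": {"feathered", "swims"},
    "trout":   {"swims"},
}

def extent(attrs):
    """Objects possessing every attribute in attrs (the derivation operator ')."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by every object in objs."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set.union(*context.values())

def concepts():
    """Enumerate all formal concepts by closing every attribute subset B
    into the pair (B', B'')."""
    attrs = sorted(set.union(*context.values()))
    found = set()
    for r in range(len(attrs) + 1):
        for combo in combinations(attrs, r):
            e = extent(set(combo))
            found.add((frozenset(e), frozenset(intent(e))))
    return found

for ext, itt in sorted(concepts(), key=lambda c: len(c[0])):
    print(sorted(ext), "<->", sorted(itt))
```

On this toy context the enumeration yields six concepts, with ({dove, penguin, trout}, {}) at the top of the lattice and ({}, {feathered, flies, swims}) at the bottom; bottom-up knowledge-base construction and knowledge-base completion both reason over exactly this kind of concept set.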
Styles APA, Harvard, Vancouver, ISO, etc.
35

Sadoun, Driss. « Des spécifications en langage naturel aux spécifications formelles via une ontologie comme modèle pivot ». Phd thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-01060540.

Texte intégral
Résumé :
The development of a system aims to meet requirements. The success of its realization therefore rests largely on the requirements specification phase, whose purpose is to describe, precisely and unambiguously, all the characteristics of the system to be developed. Requirements specifications are the result of a needs analysis involving several parties. They are generally written in natural language (NL) for broader comprehension, which can lead to differing interpretations, since NL texts may contain semantic ambiguities or implicit information. It is therefore not easy to specify a complete and consistent set of requirements, hence the need for formal verification of the resulting specifications. NL specifications are not considered formal and do not allow the direct application of formal verification methods; this observation leads to the need to transform NL specifications into formal specifications, which is the context of this thesis. The main difficulty of such a transformation lies in the breadth of the gap between NL specifications and formal specifications. The objective of my thesis work is to propose an approach for automatically verifying user requirements specifications, written in natural language and describing the behavior of a system. To this end, we explored the possibilities offered by a representation model based on a logical formalism. Our contributions consist essentially of three proposals: 1) an OWL-DL ontology grounded in description logics, as a pivot representation model linking natural-language specifications to formal specifications; 2) an approach for instantiating the pivot representation model, based on an analysis driven by the ontology's semantics, which automatically maps natural-language specifications to their conceptual representation; and 3) an approach exploiting the ontology's logical formalism to enable an automatic transition from the pivot representation model to a formal specification language named Maude.
Styles APA, Harvard, Vancouver, ISO, etc.
36

Sertkaya, Baris. « Formal Concept Analysis Methods for Description Logics ». Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23613.

Texte intégral
Résumé :
This work presents mainly two contributions to Description Logics (DLs) research by means of Formal Concept Analysis (FCA) methods: supporting bottom-up construction of DL knowledge bases, and completing DL knowledge bases. Its contribution to FCA research is on the computational complexity of computing generators of closed sets.
Styles APA, Harvard, Vancouver, ISO, etc.
37

El, Ghosh Mirna. « Automatisation du raisonnement et décision juridiques basés sur les ontologies ». Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR16/document.

Texte intégral
Résumé :
Le but essentiel de la thèse est de développer une ontologie juridique bien fondée pour l'utiliser dans le raisonnement à base de règles. Pour cela, une approche middle-out, collaborative et modulaire est proposée, où des ontologies fondationnelles et « core » ont été réutilisées pour simplifier le développement de l'ontologie. L'ontologie résultante est adoptée dans une approche homogène à base d'ontologies pour formaliser la liste des règles juridiques du code pénal en utilisant le langage logique SWRL.
This thesis analyses the problem of building well-founded domain ontologies for reasoning and decision-support purposes, and specifically the building of legal ontologies for rule-based reasoning. Building well-founded legal domain ontologies is a difficult and complex process, owing to the complexity of the legal domain and the lack of methodologies. For this purpose, a novel modular middle-out approach called MIROCL is proposed. MIROCL aims to enhance the building process of well-founded domain ontologies by incorporating several support processes, such as reuse, modularization, integration and learning. By applying the modularization process, a multi-layered modular architecture of the ontology is outlined: the intended ontology is composed of four modules located at different abstraction levels. These modules are, from the most abstract to the most specific, UOM (Upper Ontology Module), COM (Core Ontology Module), DOM (Domain Ontology Module) and DSOM (Domain-Specific Ontology Module). The middle-out strategy is composed of two complementary strategies: top-down and bottom-up. The top-down strategy applies ODCM (Ontology-Driven Conceptual Modeling) and ontology reuse, starting from the most abstract categories, for building UOM and COM. Meanwhile, the bottom-up strategy starts from textual resources, applying an ontology learning process in order to extract the most specific categories for building DOM and DSOM. After the different modules have been built, an integration process is performed to compose the whole ontology. The MIROCL approach is applied in the criminal domain for modeling legal norms, yielding a well-founded legal domain ontology called CriMOnto (Criminal Modular Ontology). CriMOnto has then been used for modeling the procedural aspect of legal norms through integration with a logic rule language (SWRL).
Finally, a hybrid approach is applied for building a rule-based system called CORBS, grounded in CriMOnto and the set of formalized rules.
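Pairing an ontology such as CriMOnto with SWRL rules follows the general pattern of rule-based reasoning over ontology facts. The toy forward-chaining loop below illustrates that pattern only; the rule, predicates and individuals are invented for illustration and are not taken from CriMOnto or the thesis.

```python
# Toy forward chaining in the spirit of a SWRL-style rule such as:
#   Person(?p) ^ committed(?p, ?o) ^ CriminalOffence(?o) -> Offender(?p)
# Facts are tuples; all names here are hypothetical.
facts = {
    ("Person", "alice"),
    ("committed", "alice", "theft"),
    ("CriminalOffence", "theft"),
}

def saturate(facts):
    """Apply the single rule repeatedly until no new fact is derivable."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        persons  = {f[1] for f in derived if f[0] == "Person"}
        offences = {f[1] for f in derived if f[0] == "CriminalOffence"}
        for f in list(derived):
            if f[0] == "committed" and f[1] in persons and f[2] in offences:
                new_fact = ("Offender", f[1])
                if new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(saturate(facts))
```

A real SWRL engine works over OWL class and property assertions rather than tuples, but the fixpoint computation sketched here is the same underlying idea.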
Styles APA, Harvard, Vancouver, ISO, etc.
38

Bénard, Jeremy. « Import, export et traduction sémantiques génériques basés sur une ontologie de langages de représentation de connaissances ». Thesis, La Réunion, 2017. http://www.theses.fr/2017LARE0021/document.

Texte intégral
Résumé :
Les langages de représentation de connaissances (LRCs) sont des langages qui permettent de représenter et partager des informations sous une forme logique. Il y a de nombreux LRCs. Chaque LRC a un modèle structurel abstrait et peut avoir plusieurs notations. Ces modèles et notations ont été conçus pour répondre à des besoins de modélisation ou de calculabilité différents, ainsi qu'à des préférences différentes. Les outils actuels gérant ou traduisant des RCs ne travaillent qu'avec quelques LRCs et ne permettent pas – ou très peu – à leurs utilisateurs finaux d'adapter les modèles et notations de ces LRCs. Cette thèse contribue à résoudre ces problèmes pratiques et le problème de recherche original suivant : “une fonction d'import et une fonction d'export de RCs peuvent-elles être spécifiées de façon générique et, si oui, comment leurs ressources peuvent-elles être spécifiées ?”. Cette thèse s'inscrit dans un projet plus vaste dont l'objectif général est de faciliter le partage et la réutilisation des connaissances liées aux composants logiciels et à leurs présentations. L'approche suivie dans cette thèse est basée sur une ontologie de LRCs nommée KRLO, et donc sur une représentation formelle de ces LRCs. KRLO a trois caractéristiques importantes et originales auxquelles cette thèse a contribué : i) elle représente des modèles de LRCs de différentes familles de façon uniforme, ii) elle inclut une ontologie de notations de LRCs, et iii) elle spécifie des fonctions génériques pour l'import et l'export de RCs dans divers LRCs. Cette thèse a contribué à améliorer la première version de KRLO (KRLO_2014) et à donner naissance à sa seconde version. KRLO_2014 contenait des imprécisions de modélisation qui rendaient son exploitation difficile ou peu pratique. Cette thèse a aussi contribué à la spécification et l'opérationnalisation de “Structure_map”, une fonction permettant d'écrire de façon modulaire et paramétrable toute autre fonction utilisant une boucle.
Son utilisation permet de créer et d'organiser les fonctions en une ontologie de composants logiciels. Pour implémenter une fonction générique d'export basée sur KRLO, j'ai développé SRS (Structure_map based Request Solver), un résolveur d'expressions de chemins sur des RCs. SRS interprète toutes les fonctions. SRS apporte ainsi une validation expérimentale à la fois à l'utilisation de cette primitive (Structure_map) et à l'utilisation de KRLO. Directement ou indirectement, SRS et KRLO pourront être utilisés par GTH (Global Technologies Holding), l'entreprise partenaire de cette thèse
Knowledge Representation Languages (KRLs) are languages enabling information to be represented and shared in a logical form. There are many KRLs. Each KRL has one abstract structural model and can have multiple notations. These models and notations were designed to meet different modeling or computational needs, as well as different preferences. Current tools managing or translating knowledge representations (KRs) allow the use of only one or a few KRLs and hardly enable their end users to adapt the models and notations of these KRLs. This thesis helps to solve these practical problems as well as the following original research problem: “Can a KR import function and a KR export function be specified in a generic way and, if so, how can their resources be specified?”. This thesis is part of a larger project whose overall objective is to facilitate i) the sharing and reuse of knowledge related to software components, and ii) knowledge presentations. The approach followed in this thesis is based on an ontology of KRLs named KRLO, and therefore on a formal representation of these KRLs. KRLO has three important and original features to which this thesis contributed: i) it represents KRL models of different families in a uniform way, ii) it includes an ontology of KRL notations, and iii) it specifies generic functions for KR import and export in various KRLs. This thesis contributed to the improvement of the first version of KRLO (KRLO_2014) and to the creation of its second version; KRLO_2014 contained modeling inaccuracies that made it difficult or inconvenient to use. This thesis also contributed to the specification and operationalization of “Structure_map”, a function that makes it possible to write, in a modular and configurable way, any other function that uses a loop. Its use makes it possible to create and organize these functions into an ontology of software components.
To implement a generic export function based on KRLO, I developed SRS (Structure_map based Request Solver), a KR retrieval tool enabling the use of KR path expressions. SRS interprets all of these functions and thus provides an experimental validation of both the use of this primitive (Structure_map) and the use of KRLO. Directly or indirectly, SRS and KRLO may be used by GTH (Global Technologies Holding), the partner company of this thesis.
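The abstract describes Structure_map only as a primitive from which any looping function can be written in a modular, parameterized way; the thesis itself is the authority on its exact semantics. As a rough analogy, such a primitive behaves like a parameterized fold, which the following sketch illustrates. The name, signature and behavior here are assumptions for illustration, not KRLO's actual definition.

```python
def structure_map(seq, step, init):
    """Hypothetical fold-like primitive: one generic loop, parameterized by
    an accumulator and a step function, from which specific loops are built."""
    acc = init
    for item in seq:
        acc = step(acc, item)
    return acc

# Ordinary looping functions expressed as instances of the single primitive:
total   = structure_map([1, 2, 3, 4], lambda a, x: a + x, 0)        # a sum
upper   = structure_map("krlo", lambda a, c: a + [c.upper()], [])   # a map
longest = structure_map(["kr", "krlo", "srs"],
                        lambda a, x: max(a, x, key=len), "")        # a selection

print(total, upper, longest)
```

The point of such a design, as the abstract suggests, is that the looping functions themselves become data that can be organized into an ontology of software components, since they all share the one parameterized shape.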
Styles APA, Harvard, Vancouver, ISO, etc.
39

Šarić, Jasmin. « Extracting information for biology ». [S.l. : s.n.], 2006. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-27959.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
40

COSTA, Adriana Leite. « MADAE-PRO : UM PROCESSO BASEADO NO CONHECIMENTO PARA ENGENHARIA DE DOMÍNIO E DE APLICAÇÕES MULTIAGENTE ». Universidade Federal do Maranhão, 2009. http://tedebc.ufma.br:8080/jspui/handle/tede/1844.

Texte intégral
Résumé :
Interest in the agent-oriented development paradigm has increased in recent years. This is mainly due to the increasing complexity of current software, which requires new characteristics such as autonomous behavior. In the agent-oriented paradigm, software no longer has strictly predictable behavior: it has control over its own behavior and can make decisions based on observations of the environment and inferences upon its knowledge base. A set of methodologies and processes has already been proposed for agent-oriented software engineering. Domain Engineering is a process for developing a reusable family of applications in a particular problem domain, and Application Engineering is the process of constructing a specific application in a family, based on the reuse of software artifacts previously produced in the Domain Engineering process. MADAE-Pro is an ontology-driven process for multi-agent domain and application engineering which promotes the construction and reuse of families of agent-oriented applications. The process is specified in a formal process representation language, thus avoiding ambiguous interpretations. Another distinctive feature of MADAE-Pro is its support for software reuse at all levels of abstraction, from requirements to deployment.
O interesse pelo paradigma de desenvolvimento orientado a agentes tem aumentado nos últimos anos. Isso se deve principalmente ao crescente aumento da complexidade dos produtos de software atuais que requerem novas características como comportamento autônomo. No paradigma orientado a agentes, o software deixa de ter comportamento estritamente previsível e passa a ter controle sobre seu próprio comportamento, podendo tomar decisões a partir de observações do ambiente e de inferências realizada em sua base de conhecimento. Para guiar o desenvolvimento orientado a agentes tem sido proposto um conjunto de metodologias e processos pela comunidade da Engenharia de Software. Nesse trabalho, apresenta-se MADAE-Pro, um processo para o desenvolvimento de sistemas multiagente com alguns diferenciais em relação aos já propostos pela comunidade. A Engenharia de Domínio é um processo para criação de abstrações de software reusáveis no desenvolvimento de uma família de aplicações em um domínio particular de problema. A Engenharia de Aplicações é um processo para construção de aplicações baseadas no reúso de artefatos de software previamente produzidos no processo da Engenharia de Domínio. O MADAE-Pro é um processo dirigido por ontologias para a Engenharia de Domínio e de Aplicações Multiagente, o qual promove a construção e o reúso de famílias de aplicações. O processo é especificado em uma linguagem de representação de processos formal, evitando assim interpretações ambíguas. Outro diferencial do MADAE-Pro é o suporte ao reúso de software em todos os níveis de abstração, desde os requisitos até a implementação.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Pontes, André Nascimento. « A forma lógica de sentenças de existência : uma avaliação da abordagem quantificacional ». Universidade Federal do Ceará, 2010. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=19928.

Texte intégral
Résumé :
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
O objetivo desse trabalho é apresentar uma avaliação da abordagem quantificacional do problema da existência nas versões defendidas por Frege, Russell e Quine. Tal abordagem é apresentada tendo como pano de fundo sua reação ao modelo clássico de análise de sentenças utilizado pelas ontologias inflacionadas derivadas do argumento do não-ser de Platão e da Teoria dos Objetos de Meinong. A ideia básica é mostrar que a ontologia inflacionada sustentada por Platão e Meinong que, em grande parte, é derivada de um modelo deficiente de análise de sentenças, pode ser eliminada através de um tratamento lógico eficiente de enunciados de existência com base na lógica de predicados. A despeito das divergências internas, a tese central dos proponentes da abordagem quantificacional é que o predicado de existência é, do ponto de vista lógico, representado pelo quantificador existencial (E) da lógica de predicados. Tento mostrar também que, embora a abordagem quantificacional represente um avanço sem precedentes em filosofia no que diz respeito à análise do estatuto lógico do termo “existe”, ela possui algumas limitações relevantes que seus proponentes até então não conseguiram superar.
The objective of this work is to present an evaluation of the quantificational approach to the problem of existence, in the versions defended by Frege, Russell and Quine. This approach is presented against the background of its reaction to the classical model of sentence analysis used by the inflated ontologies derived from Plato's argument of non-being and from Meinong's Theory of Objects. The basic idea is to show that the inflated ontology defended by Plato and Meinong, which is largely derived from a deficient model of sentence analysis, can be eliminated through an efficient logical treatment of existence statements based on predicate logic. In spite of internal divergences, the central thesis of the proponents of the quantificational approach is that the existence predicate is, from a logical point of view, represented by the existential quantifier (∃) of predicate logic. I also try to show that, although the quantificational approach represents an unprecedented advance in philosophy with respect to the analysis of the logical status of the term “exist”, it has some relevant limitations which its proponents have not yet overcome.
Styles APA, Harvard, Vancouver, ISO, etc.
42

Pontes, André Nascimento. « A forma lógica de sentenças de existência : uma avaliação da abordagem quantificacional ». reponame:Repositório Institucional da UFC, 2010. http://www.repositorio.ufc.br/handle/riufc/26051.

Texte intégral
Résumé :
PONTES, André Nascimento. A forma lógica de sentenças de existência: uma avaliação da abordagem quantificacional. 2010. 101f. – Dissertação (Mestrado) – Universidade Federal do Ceará, Programa de Pós-graduação em Filosofia, Fortaleza (CE), 2010.
The objective of this work is to present an evaluation of the quantificational approach to the problem of existence, in the versions defended by Frege, Russell and Quine. This approach is presented against the background of its reaction to the classical model of sentence analysis used by the inflated ontologies derived from Plato's argument of non-being and from Meinong's Theory of Objects. The basic idea is to show that the inflated ontology defended by Plato and Meinong, which is largely derived from a deficient model of sentence analysis, can be eliminated through an efficient logical treatment of existence statements based on predicate logic. In spite of internal divergences, the central thesis of the proponents of the quantificational approach is that the existence predicate is, from a logical point of view, represented by the existential quantifier (∃) of predicate logic. I also try to show that, although the quantificational approach represents an unprecedented advance in philosophy with respect to the analysis of the logical status of the term “exist”, it has some relevant limitations which its proponents have not yet overcome.
O objetivo desse trabalho é apresentar uma avaliação da abordagem quantificacional do problema da existência nas versões defendidas por Frege, Russell e Quine. Tal abordagem é apresentada tendo como pano de fundo sua reação ao modelo clássico de análise de sentenças utilizado pelas ontologias inflacionadas derivadas do argumento do não-ser de Platão e da Teoria dos Objetos de Meinong. A ideia básica é mostrar que a ontologia inflacionada sustentada por Platão e Meinong que, em grande parte, é derivada de um modelo deficiente de análise de sentenças, pode ser eliminada através de um tratamento lógico eficiente de enunciados de existência com base na lógica de predicados. A despeito das divergências internas, a tese central dos proponentes da abordagem quantificacional é que o predicado de existência é, do ponto de vista lógico, representado pelo quantificador existencial (E) da lógica de predicados. Tento mostrar também que, embora a abordagem quantificacional represente um avanço sem precedentes em filosofia no que diz respeito à análise do estatuto lógico do termo “existe”, ela possui algumas limitações relevantes que seus proponentes até então não conseguiram superar.
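The Frege-Russell-Quine reading discussed in this abstract has a standard formal rendering: existence claims are regimented with the existential quantifier rather than with a first-order predicate of individuals. Two textbook examples in predicate logic, given here as background rather than as content of the thesis:

```latex
% Quinean reading of "Pegasus exists", via an ad hoc predicate:
\exists x\, \mathrm{Pegasizes}(x)

% Russell's analysis of "The king of France exists"
% (K = "is a king of France"; uniqueness is built in):
\exists x\,\bigl(K(x) \land \forall y\,(K(y) \rightarrow y = x)\bigr)
```

On this reading, denying existence is denying that anything satisfies the predicate, which is how the approach dissolves the inflated ontologies of non-being the abstract describes.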
Styles APA, Harvard, Vancouver, ISO, etc.
43

Moraes, Sílvia Maria Wanderley. « Construção de estruturas ontológicas a partir de textos : um estudo baseado no método formal concept analysis e em papéis semânticos ». Pontifícia Universidade Católica do Rio Grande do Sul, 2012. http://hdl.handle.net/10923/1609.

Texte intégral
Résumé :
This work aims to study conceptual structures based on the Formal Concept Analysis method. We build these structures from lexico-semantic information extracted from texts, among which we highlight semantic roles. In our research, we propose ways to include semantic roles in the concepts produced by this formal method, and we analyze the contribution of semantic roles and verb classes to the composition of these concepts through structural measures. In these studies, we use the Penn Treebank Sample and SemLink 1.1 corpora, both in English. We also test, for English, the applicability of our proposal in the Finance and Tourism domains, with texts extracted from Wikicorpus 1.0. This applicability was analyzed extrinsically through the text categorization task, which was evaluated with functional measures traditionally used in that area. We also performed some preliminary studies on a corpus in Portuguese, PLN-BR CATEG. In the studies carried out, we obtained satisfactory results, which show that the proposed approach is promising.
Este trabalho tem como propósito estudar estruturas conceituais geradas seguindo o método Formal Concept Analysis. Usamos na construção dessas estruturas informações lexicossemânticas extraídas dos textos, dentre as quais se destacam os papéis semânticos. Em nossa pesquisa, propomos formas de inclusão de tais papéis nos conceitos produzidos por esse método formal. Analisamos a contribuição dos papéis semânticos e das classes de verbos na composição dos conceitos, por meio de medidas de ordem estrutural. Nesses estudos, utilizamos os corpora Penn TreeBank Sample e SemLink 1.1, ambos em Língua Inglesa. Testamos, também para Língua Inglesa, a aplicabilidade de nossa proposta nos domínios de Finanças e Turismo com textos extraídos do corpus Wikicorpus 1.0. Essa aplicabilidade foi analisada extrinsecamente com base na tarefa de categorização de textos, a qual foi avaliada a partir de medidas de ordem funcional tradicionalmente usadas nessa área. Realizamos ainda alguns estudos preliminares relacionados à nossa proposta para um corpus em Língua Portuguesa: PLN-BR CATEG. Obtivemos, nos estudos realizados, resultados satisfatórios os quais mostram que a abordagem proposta é promissora.
Styles APA, Harvard, Vancouver, ISO, etc.
44

Tang, My Thao. « Un système interactif et itératif extraction de connaissances exploitant l'analyse formelle de concepts ». Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0060/document.

Texte intégral
Résumé :
Dans cette thèse, nous présentons une méthodologie d'extraction interactive et itérative de connaissances à partir de textes : le système KESAM, un outil pour l'extraction de connaissances et la gestion d'annotations sémantiques. KESAM est fondé sur l'Analyse Formelle de Concepts pour l'extraction de connaissances à partir de ressources textuelles, avec prise en charge de l'interaction avec les experts. Dans le système KESAM, l'extraction de connaissances et l'annotation sémantique sont unifiées en un seul processus, au bénéfice à la fois de l'extraction de connaissances et de l'annotation sémantique. Les annotations sémantiques sont utilisées pour formaliser la source des connaissances dans les textes et conserver la traçabilité entre le modèle de connaissances et la source des connaissances. Le modèle de connaissances est, en retour, utilisé afin d'améliorer les annotations sémantiques. Le processus KESAM a été conçu pour préserver en permanence le lien entre les ressources (textes et annotations sémantiques) et le modèle de connaissances. Le noyau du processus est l'Analyse Formelle de Concepts (AFC), qui construit le modèle de connaissances, c'est-à-dire le treillis de concepts, et assure le lien entre le modèle et les annotations. Afin d'obtenir un treillis aussi proche que possible des besoins des experts du domaine, nous introduisons un processus itératif qui permet l'interaction des experts sur le treillis. Les experts sont invités à évaluer et à affiner le treillis ; ils peuvent y apporter des modifications jusqu'à parvenir à un accord entre le modèle et leurs propres connaissances ou les besoins de l'application. Grâce au lien entre le modèle de connaissances et les annotations sémantiques, ceux-ci peuvent co-évoluer afin d'améliorer leur qualité au regard des exigences des experts du domaine.
En outre, grâce à la construction par l'AFC de concepts définis par des ensembles d'objets et des ensembles d'attributs, le système KESAM est capable de prendre en compte à la fois les concepts atomiques et les concepts définis, c'est-à-dire les concepts définis par un ensemble d'attributs. Afin de combler l'écart possible entre le modèle de représentation fondé sur un treillis de concepts et le modèle de représentation d'un expert du domaine, nous présentons ensuite une méthode formelle d'intégration des connaissances de l'expert dans le treillis de concepts, de manière à en préserver la structure. Les connaissances de l'expert sont codées comme un ensemble de dépendances d'attributs, qui est aligné avec l'ensemble des implications fournies par le treillis de concepts, ce qui conduit à des modifications du treillis d'origine. La méthode permet également aux experts de garder une trace des changements entre le treillis d'origine et la version finale contrainte, et de voir comment les concepts utilisés en pratique sont liés aux concepts issus automatiquement des données. Nous construisons les treillis contraints sans modifier les données et fournissons la trace des changements en utilisant des projections extensionnelles sur les treillis. À partir d'un treillis d'origine, deux projections différentes produisent deux treillis contraints différents ; ainsi, l'écart entre le modèle de représentation fondé sur un treillis de concepts et le modèle de représentation d'un expert du domaine est comblé par les projections.
In this thesis, we present a methodology for interactive and iterative knowledge extraction from texts: the KESAM system, a tool for Knowledge Extraction and Semantic Annotation Management. KESAM is based on Formal Concept Analysis (FCA) for extracting knowledge from textual resources, and it supports interaction with domain experts. In the KESAM system, knowledge extraction and semantic annotation are unified into one single process, to the benefit of both. Semantic annotations are used for formalizing the source of knowledge in texts and for keeping the traceability between the knowledge model and the source of knowledge. The knowledge model is, in return, used for improving the semantic annotations. The KESAM process has been designed to permanently preserve the link between the resources (texts and semantic annotations) and the knowledge model. The core of the process is Formal Concept Analysis, which builds the knowledge model, i.e. the concept lattice, and ensures the link between the knowledge model and the annotations. In order to get the resulting lattice as close as possible to domain experts' requirements, we introduce an iterative process that enables expert interaction on the lattice. Experts are invited to evaluate and refine the lattice; they can make changes in the lattice until they reach an agreement between the model and their own knowledge or the application's needs. Thanks to the link between the knowledge model and the semantic annotations, the two can co-evolve in order to improve their quality with respect to domain experts' requirements. Moreover, by using FCA to build concepts with definitions as sets of objects and sets of attributes, the KESAM system is able to take into account both atomic and defined concepts, i.e. concepts that are defined by a set of attributes.
In order to bridge the possible gap between the representation model based on a concept lattice and the representation model of a domain expert, we then introduce a formal method for integrating expert knowledge into concept lattices in such a way that the lattice structure is maintained. The expert knowledge is encoded as a set of attribute dependencies which is aligned with the set of implications provided by the concept lattice, leading to modifications in the original lattice. The method also allows the experts to keep a trace of the changes between the original lattice and the final constrained version, and to see how concepts used in practice are related to concepts automatically issued from the data. The method uses extensional projections to build the constrained lattices without changing the original data and to provide the trace of changes. From an original lattice, two different projections produce two different constrained lattices; thus, the gap between the representation model based on a concept lattice and the representation model of a domain expert is bridged by projections.
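The FCA core described above, deriving formal concepts (maximal extent/intent pairs) from a binary object-attribute context, can be sketched in a few lines. The toy context below is illustrative and is not taken from the KESAM system.

```python
from itertools import combinations

# Toy formal context: objects mapped to their attribute sets.
# Names are invented for illustration.
context = {
    "doc1": {"protein", "disease"},
    "doc2": {"protein", "gene"},
    "doc3": {"gene", "disease"},
}

def intent(objs):
    """Attributes shared by every object in objs (all attributes if objs is empty)."""
    attrs = set.union(*context.values())
    for o in objs:
        attrs &= context[o]
    return attrs

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def concepts():
    """Enumerate all formal concepts (extent, intent) by closing each object subset."""
    seen = set()
    objects = list(context)
    for r in range(len(objects) + 1):
        for combo in combinations(objects, r):
            ext = extent(intent(set(combo)))  # closure of the subset
            key = frozenset(ext)
            if key not in seen:
                seen.add(key)
                yield sorted(ext), sorted(intent(ext))

for ext, itt in concepts():
    print(ext, itt)
```

Each concept pairs a set of objects with exactly the attributes they share; ordering concepts by extent inclusion yields the concept lattice that systems like KESAM present to the experts.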
APA, Harvard, Vancouver, ISO, etc. styles
45

Auer, Sören. « Von Open Access zu Open Knowledge - wie wir Informationsflüsse der Wissenschaft in der Digitalen Welt organisieren können ». Technische Informationsbibliothek TIB, 2019. https://monarch.qucosa.de/id/qucosa%3A35998.

Full text
Abstract:
Despite improved digital access to scientific publications in recent years, the basic principles of scholarly communication remain unchanged and are still largely document-based. Document-oriented workflows in science have reached the limits of their adequacy, as shown by recent discussions on the unrestrained growth of scientific literature, the shortcomings of peer review, and the reproducibility crisis. Open Access is an important prerequisite for meeting these challenges, but it is only a first step. We need to organize scholarly communication in a more knowledge-based way, by expressing and interlinking scientific contributions and related artefacts through semantically rich, networked knowledge graphs. In this talk, we present first steps in this direction with the Open Research Knowledge Graph initiative.
46

Xavier, Marivelto Leite. « O conceito de forma como belo em H. C. de Lima Vaz ». Universidade do Vale do Rio do Sinos, 2008. http://www.repositorio.jesuita.org.br/handle/UNISINOS/2047.

Full text
Abstract:
Milton Valente
This research aims to legitimate the concept of shape as the beautiful in Lima Vaz. It is a principle of philosophical legitimacy that escapes the shape of the beautiful, and it is precisely because it lies in the dimensions of eternity that we can speak of the shape of the beautiful in Lima Vaz. Shape is being. Or rather, being shapes, being is as "ens". Thus, being-in-the-world, shape is that which receives such a condition from being (Esse). There is, therefore, an ontological humility of shape in receiving and a generosity of being in giving. We say that shape is not being, but rather "habens esse", i.e. the bearer of existing. Insofar as they are bearers of existing, or Esse, shapes are separate substances, constituted by matter, which individualizes them, and by shape, which ensures that the substance is what it is. Henceforth, the substance ("something that is") is in a sensible infinitude: not the merely sensible of the senses, but prior to it, in a "concrete ontological unity" with Esse. The sensible is to perceive this concrete unity of being. Saying "man exists" is to consider this singu
47

Chen, Wei. « Formal Modeling and Automatic Generation of Test Cases for the Autonomous Vehicle ». Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG002.

Full text
Abstract:
Autonomous vehicles mainly rely on an intelligent driving system to achieve self-driving. They combine a variety of sensors (cameras, radars, lidars, etc.) to perceive their surroundings. The perception algorithms of Automated Driving Systems (ADSs) provide observations on the environmental elements based on the data supplied by the sensors, while decision algorithms generate the actions to be implemented by the vehicle. ADSs are therefore safety-critical systems whose failures can have catastrophic consequences. To ensure the safety of such systems, it is necessary to specify, validate, and secure the dependability of the architecture and of the behavioural logic of the ADSs running on the vehicle, for all the situations the vehicle will encounter. These situations are described and generated as different test cases. The objective of this thesis is to develop a complete approach allowing the conceptualization and characterization of the execution contexts of the autonomous vehicle, and the formal modelling of test cases in the highway context. Finally, this approach has to allow automatic generation of the test cases that have an impact on the performance and dependability of the vehicle. In this thesis, we propose a three-layer test-case generation methodology. The first layer includes all the static and mobile concepts of three ontologies that we define in order to conceptualize and characterize the driving environment for the construction of test cases: a highway ontology and a weather ontology to specify the environment in which the autonomous vehicle evolves, and a vehicle ontology covering the vehicle lights and the control actions. Each concept of these ontologies is defined in terms of an entity, sub-entities, and properties. The second layer includes the interactions between the entities of the defined ontologies.
We use first-order logic equations to represent the relationships between these entities. The third and last layer is dedicated to test-case generation, which is based on the process algebra PEPA (Performance Evaluation Process Algebra), used to model the situations described by the test cases. Our approach allows us to generate the test cases automatically and to identify the critical ones. We can generate test cases from any initial situation and with any number of scenes. Finally, we propose a method to compute the criticality of each test case, so that the importance of a test case can be comprehensively evaluated from its criticality and its probability of occurrence.
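The idea of ranking generated test cases by criticality and probability of occurrence can be illustrated with a simple combinatorial sketch. The parameter domains, probabilities, and criticality weights below are invented for illustration; they do not come from the thesis' ontologies or PEPA models.

```python
from itertools import product

# Illustrative parameter domains loosely inspired by the three ontologies
# (highway, weather, vehicle); values and numbers are assumptions.
WEATHER = {"clear": 0.7, "rain": 0.2, "fog": 0.1}            # occurrence probability
TRAFFIC = {"light": 0.5, "dense": 0.4, "jam": 0.1}
ACTION = {"keep_lane": 0.6, "overtake": 0.3, "emergency_brake": 0.1}

# Illustrative criticality score per value (higher means more critical).
CRIT = {"clear": 0, "rain": 2, "fog": 3,
        "light": 0, "dense": 1, "jam": 2,
        "keep_lane": 0, "overtake": 2, "emergency_brake": 3}

def test_cases():
    """Enumerate scenarios, each scored by criticality and probability."""
    for w, t, a in product(WEATHER, TRAFFIC, ACTION):
        crit = CRIT[w] + CRIT[t] + CRIT[a]
        prob = WEATHER[w] * TRAFFIC[t] * ACTION[a]
        yield {"weather": w, "traffic": t, "action": a,
               "criticality": crit, "probability": prob,
               "importance": crit * prob}

# Rank test cases by overall importance (criticality weighted by likelihood).
ranked = sorted(test_cases(), key=lambda c: c["importance"], reverse=True)
for case in ranked[:3]:
    print(case)
```

In the actual approach, scenario evolution is modelled with PEPA rather than a flat product of parameters, but the final selection step, weighing criticality against probability of occurrence, has this shape.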
48

Abdul, Ghafour Samer. « Interopérabilité sémantique des connaissances des modèles de produits à base de features ». Phd thesis, Université Claude Bernard - Lyon I, 2009. http://tel.archives-ouvertes.fr/tel-00688098.

Full text
Abstract:
In a collaborative product development environment, several actors, with different points of view and involved in several phases of the product life cycle, must communicate and exchange knowledge with each other. This knowledge, which exists in various heterogeneous formats, potentially includes several concepts such as the design history, the product structure, features, parameters, constraints, and other product information. Industrial requirements for reducing production time and cost call for improving semantic interoperability between the different development processes, in order to overcome these heterogeneity problems at the syntactic, structural, and semantic levels. In the CAD domain, most existing methods for exchanging product model data are in fact based on the transfer of geometric data. However, such data are not sufficient to capture the semantics of the model, such as the design intent, nor to allow editing of the models after their exchange. We are therefore interested in the exchange of "intelligent" models, that is, models defined in terms of construction history and of intelligent design functions called features, including parameters and constraints. The objective of our thesis is to design methods for improving the semantic interoperability of CAD systems by means of Semantic Web technologies such as OWL DL ontologies and the SWRL rule language. We have developed an exchange approach based on a common ontology of design features, which we call CDFO ("Common Design Features Ontology"), serving as an intermediary between the different CAD systems. This approach relies mainly on two major steps.
The first step consists in homogenizing the representation formats of CAD models towards a pivot format, namely OWL DL. This homogenization handles the syntactic heterogeneities between model formats. The second step consists in defining rules for the semantic mapping between CAD application ontologies and our common ontology. This mapping method is based, on the one hand, on the explicit definition of axioms and correspondence rules for aligning entities of different ontologies, and, on the other hand, on the automatic recognition of additional semantic correspondences using the reasoning capabilities provided by description-logic inference engines. Finally, our mapping method is enriched with a semantic similarity measure suited to the OWL DL language, based mainly on the components of the entities in question, such as their description and their context.
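A minimal illustration of correspondence discovery between two feature ontologies, here reduced to a set-overlap similarity over property descriptions. The feature names and properties below are hypothetical, and the actual method relies on OWL DL axioms and description-logic reasoning rather than this simple measure.

```python
# Hypothetical feature descriptions from two CAD application ontologies;
# the names (e.g. "ThroughHole") are illustrative, not CDFO entities.
onto_a = {
    "ThroughHole": {"hole", "cylindrical", "through", "subtractive"},
    "BlindHole": {"hole", "cylindrical", "blind", "subtractive"},
}
onto_b = {
    "Trou_Debouchant": {"hole", "cylindrical", "through", "subtractive"},
    "Poche": {"pocket", "rectangular", "subtractive"},
}

def jaccard(s1, s2):
    """Set-overlap similarity between two property descriptions."""
    return len(s1 & s2) / len(s1 | s2)

def align(src, tgt, threshold=0.6):
    """Propose correspondences whose similarity reaches the threshold."""
    pairs = []
    for a, pa in src.items():
        for b, pb in tgt.items():
            sim = jaccard(pa, pb)
            if sim >= threshold:
                pairs.append((a, b, round(sim, 2)))
    return pairs

print(align(onto_a, onto_b))
```

A description-logic reasoner would go further than this: from explicit alignment axioms it can infer additional correspondences that no pairwise similarity score reveals.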
49

Schreyer, Marcus. « MetBiM - ein semantisches Datenmodell für Baustoff-Informationen im World Wide Web : Anwendungen für Beton mit rezyklierter Gesteinskörnung / ». [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB9937536.

Full text
50

Bourreau, Pierre. « Jeux de typage et analyse de lambda-grammaires non-contextuelles ». Phd thesis, Université Sciences et Technologies - Bordeaux I, 2012. http://tel.archives-ouvertes.fr/tel-00733964.

Full text
Abstract:
Abstract categorial grammars (or λ-grammars) are a formalism based on the simply typed λ-calculus. They can be seen as grammars generating such terms, and were introduced to model the interface between the syntax and the semantics of natural language, bringing together two fundamental ideas: the distinction between the tectogrammar (i.e. the deep structure of an utterance) and the phenogrammar (i.e. the surface representation of an utterance) of language, expressed by Curry; and an algebraic modelling of the principle of compositionality to account for the semantics of sentences, due to Montague. One of the main advantages of this formalism is that parsing an abstract categorial grammar solves both the text-parsing problem and the text-generation problem. Efficient parsing algorithms have been discovered for abstract categorial grammars of linear and quasi-linear terms, whereas the parsing problem is non-elementary in its most general form. We propose to study classes of terms for which parsing remains solvable in polynomial time. These results rely mainly on two typing theorems: the coherence theorem, stating that a given λ-term is the unique inhabitant of a certain typing; and the subject expansion theorem, stating that two β-equivalent terms inhabit the same typings. To carry out this study, we use an abstract representation of the notions of λ-term and typing, in the form of games. In particular, we rely heavily on this notion to prove the coherence theorem for new families of λ-terms and typings. Thanks to these results, we show that it is possible to build, in a direct way, a recognizer in the Datalog language for abstract categorial grammars of quasi-affine λ-terms.
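The simply typed λ-calculus underlying λ-grammars can be made concrete with a small type-inference sketch using unification with an occurs check. The term and type encodings below are illustrative and unrelated to the thesis' game-based constructions.

```python
from itertools import count

# Types: ("tv", n) for a type variable, ("fun", dom, cod) for an arrow type.
# Terms: ("var", name), ("lam", name, body), ("app", fn, arg).
_fresh = count()

def new_tv():
    return ("tv", next(_fresh))

def walk(t, s):
    """Follow substitution links until t is not a bound type variable."""
    while t[0] == "tv" and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occurs check: does variable v appear inside type t?"""
    t = walk(t, s)
    if t == v:
        return True
    return t[0] == "fun" and (occurs(v, t[1], s) or occurs(v, t[2], s))

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if a[0] == "tv":
        if occurs(a, b, s):
            raise TypeError("occurs check failed")
        return {**s, a: b}
    if b[0] == "tv":
        return unify(b, a, s)
    if a[0] == b[0] == "fun":
        s = unify(a[1], b[1], s)
        return unify(a[2], b[2], s)
    raise TypeError("cannot unify")

def infer(term, env, s):
    """Infer a simple type for term under environment env and substitution s."""
    kind = term[0]
    if kind == "var":
        return env[term[1]], s
    if kind == "lam":
        tv = new_tv()
        body_t, s = infer(term[2], {**env, term[1]: tv}, s)
        return ("fun", tv, body_t), s
    # application: fn must have type a -> r where arg has type a
    f_t, s = infer(term[1], env, s)
    a_t, s = infer(term[2], env, s)
    r = new_tv()
    s = unify(f_t, ("fun", a_t, r), s)
    return r, s

def resolve(t, s):
    t = walk(t, s)
    if t[0] == "fun":
        return ("fun", resolve(t[1], s), resolve(t[2], s))
    return t

# The identity λx.x receives an arrow type with equal domain and codomain.
ident = ("lam", "x", ("var", "x"))
t, s = infer(ident, {}, {})
print(resolve(t, s))
```

Self-application (λx.x x) is rejected by the occurs check, which is exactly what makes the simply typed calculus strongly normalizing and suitable as a grammatical backbone.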
