Dissertations / Theses on the topic 'Base de connaissance semi-Sémantique'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Base de connaissance semi-Sémantique.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Mrabet, Yassine. "Approches hybrides pour la recherche sémantique de l'information : intégration des bases de connaissances et des ressources semi-structurées." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00737282.
Ventalon, Geoffrey. "La compréhension de la métaphore dans les images." Thesis, Paris 8, 2017. http://www.theses.fr/2017PA080115/document.
A metaphor is a figure of speech in which the meaning of one term is transferred to another term. For example, the sentence "Axel is a fox" is a metaphor expressing that a man is cunning. A metaphor is not confined to text: it can also be depicted in a picture. The image of a man with the body of a fox can therefore refer to the sentence "this man is a fox". According to Forceville (2007, 2009), a pictorial metaphor can be characterized by its type (contextual metaphor, hybrid metaphor, simile and integrated metaphor), its structure (monomodal and multimodal metaphor) and its use (in commercials, social campaigns, political cartoons or art). The aim of this work is to create a knowledge base of pictorial metaphors by examining their characteristics (topics, vehicles). Experimental studies examined the understanding of monomodal pictorial hybrid metaphors by focusing on the property attribution process in several situations, considering the effects of native language (French versus Spanish), context, age and the use of the metaphor. The discussion section outlines research perspectives in light of current studies on pictorial metaphor comprehension and the use of specific tools (e.g. eye trackers). The understanding of pictorial metaphors could also be applied to other fields of psychology (e.g. neuropsychology), other populations (e.g. children) and different cultures (e.g. Korean).
Amardeilh, Florence. "Web Sémantique et Informatique Linguistique : propositions méthodologiques et réalisation d'une plateforme logicielle." Phd thesis, Université de Nanterre - Paris X, 2007. http://tel.archives-ouvertes.fr/tel-00146213.
Candlot, Alexandre. "PRINCIPES D'ASSISTANCE A LA MAITRISE D'OUVRAGE POUR LA MODELISATION ET L'INTEGRATION D'EXPERTISE." Phd thesis, Ecole centrale de nantes - ECN, 2006. http://tel.archives-ouvertes.fr/tel-00429650.
Full textBen, marzouka Wissal. "Traitement possibiliste d'images, application au recalage d'images." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2022. http://www.theses.fr/2022IMTA0271.
In this work, we propose a possibilistic geometric registration system that merges the semantic knowledge and the grey-level knowledge of the images to be registered. Existing geometric registration methods rely on an analysis of sensor-level knowledge during the detection of primitives as well as during matching. The evaluation of these methods shows limited precision, caused by the large number of outliers. The main idea of our approach is to transform the two images to be registered (source and target) into a set of projections of the original images. This set is composed of images called "possibility maps", each of which has a single content and presents the possibility distribution of one semantic class of the two original images. The proposed geometric registration system based on possibility theory covers two contexts: a supervised one and an unsupervised one. For the first case, we propose a supervised classification method based on possibility theory using learning models. For the unsupervised context, we propose a possibilistic clustering method using the FCM-multicentroid method. The two proposed methods yield the sets of semantic classes of the two images to be registered. We then create the knowledge bases for the proposed possibilistic registration system. We have improved the quality of existing geometric registration in terms of precision, reduction in the number of false landmarks and optimization of time complexity.
Malarme, Pierre. "Conception d'un système d'aide à la chirurgie sur base de la modélisation d'opérations, d'un recalage temporel des données et d'un recalage sémantique de métadonnées." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209844.
The main goal of this PhD thesis is to design a computer-assisted surgery system based on surgical workflow (SWf) modeling and on intra-operative data and metadata acquired during the operation. For the SWf modeling, workflow-mining techniques will be developed based on dynamic learning and incremental inference. An ontology will be used to describe the various steps of the surgery and their attributes.
Bouzeghoub, Mokrane. "Secsi : un système expert en conception de systèmes d'informations, modélisation conceptuelle de schémas de bases de données." Paris 6, 1986. http://www.theses.fr/1986PA066046.
Sabatier, Paul. "Contribution au développement d'interfaces en langage naturel." Paris 7, 1987. http://www.theses.fr/1987PA077081.
Full textDehainsala, Hondjack. "Explicitation de la sémantique dans les bases de données : base de données à base ontologique et le modèle OntoDB." Poitiers, 2007. http://www.theses.fr/2007POIT2270.
An Ontology-Based DataBase (OBDB) is a database which stores both data and the ontologies that define the data's meaning. In this thesis, we propose a new architecture model for OBDBs, called OntoDB. This model has two main original features. First, as in usual databases, each stored entity is associated with a logical schema which defines the structure of all its instances. Our approach thus makes it possible to add ontologies to an existing database for semantic indexing of its content. Second, the meta-model of the ontology model is also represented in the same database, which supports the change and evolution of ontology models. The OntoDB model has been validated by a prototype. A performance evaluation of this prototype has shown that our approach can manage very large data and supports scalability much better than previously proposed approaches.
Abdul Ghafour, Samer. "Interopérabilité sémantique des connaissances des modèles de produits à base de features." Phd thesis, Université Claude Bernard - Lyon I, 2009. http://tel.archives-ouvertes.fr/tel-00688098.
Szulman, Sylvie. "Enrichissement d'une base de connaissances à partir de textes en langage naturel." Paris 13, 1990. http://www.theses.fr/1990PA132020.
Full textDjeraba, Chaabane. "Quelques liens sémantiques dans un système à base de connaissances." Lyon 1, 1993. http://www.theses.fr/1993LYO10289.
Djioua, Brahim. "Modélisation informatique d'une base de connaissances lexicales (DISSC) : réseaux polysémiques et schémas sémantico-cognitifs." Paris 4, 2000. http://www.theses.fr/2000PA040180.
Full textEsculier, Christian. "Introduction à la tolérance sémantique : la prise en compte des exceptions dans le cadre du couplage des bases de données et des bases de connaissances." Phd thesis, Grenoble 1, 1989. http://tel.archives-ouvertes.fr/tel-00333100.
Pugeault, Florence. "Extraction dans les textes de connaissances structurées : une méthode fondée sur la sémantique lexicale linguistique." Toulouse 3, 1995. http://www.theses.fr/1995TOU30164.
Full textKoutraki, Maria. "Approches vers des modèles unifiés pour l'intégration de bases de connaissances." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLV082/document.
My thesis aims at the automatic integration of new Web services into a knowledge base. For each method of a Web service, a view is automatically computed. The view is represented as a query on the knowledge base. Our algorithm also computes an XSLT transformation function associated with the method, able to transform call results into a fragment conforming to the schema of the knowledge base. The novelty of our approach is that the alignment is based only on the instances. It does not depend on the names of the concepts or on constraints defined by the schema. This makes it particularly relevant for Web services currently available on the Web, because these services use the REST protocol, which does not provide for the publication of schemas. In addition, JSON seems to be establishing itself as the standard for representing call results.
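As a purely illustrative aside, the idea of wrapping a service method as a view over the knowledge base, together with a transformation of its call results, can be sketched in a few lines (all names, the view syntax and the toy schema are invented; the thesis computes views and XSLT functions automatically, which this toy does not do):

```python
# A Web-service method modelled as (1) a view, i.e. a query pattern over the
# KB, and (2) a function turning a JSON call result into KB-schema triples.
# All identifiers (getFilmsByDirector, schema:Movie, ...) are hypothetical.

view = "getFilmsByDirector(d) := { ?f rdf:type schema:Movie . ?f schema:director d }"

def transform_call_result(result: dict) -> list:
    """Map one JSON call result into (subject, predicate, object) triples."""
    film = result["title"]
    return [
        (film, "rdf:type", "schema:Movie"),
        (film, "schema:director", result["director"]),
    ]

print(transform_call_result({"title": "Alien", "director": "Ridley Scott"}))
```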
Galarraga Del Prado, Luis. "Extraction des règles d'association dans des bases de connaissances." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0050/document.
The continuous progress of information extraction (IE) techniques has led to the construction of large general-purpose knowledge bases (KBs). These KBs contain millions of computer-readable facts about real-world entities such as people, organizations and places. KBs are important nowadays because they allow computers to "understand" the real world. They are used in multiple applications in Information Retrieval, Query Answering and Automatic Reasoning, among other fields. Furthermore, the plethora of information available in today's KBs allows for the discovery of frequent patterns in the data, a task known as rule mining. Such patterns or rules convey useful insights about the data. These rules can be used in several applications ranging from data analytics and prediction to data maintenance tasks. The contribution of this thesis is twofold: first, it proposes a method to mine rules on KBs. The method relies on a mining model tailored for potentially incomplete web-extracted KBs. Second, the thesis shows the applicability of rule mining in several data-oriented tasks in KBs, namely fact prediction, schema alignment, canonicalization of (open) KBs and prediction of completeness.
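To make the rule-mining task concrete, here is a minimal, didactic sketch of the support and confidence measures for one Horn rule over a toy fact set (this is not the mining system described in the thesis, and the facts are invented):

```python
# Toy KB: a set of (subject, predicate, object) facts.
facts = {
    ("alice", "marriedTo", "bob"), ("bob", "marriedTo", "alice"),
    ("eve", "marriedTo", "frank"),
    ("bob", "livesIn", "paris"), ("alice", "livesIn", "paris"),
    ("frank", "livesIn", "berlin"),
}

# Rule: marriedTo(x, y) AND livesIn(y, z) => livesIn(x, z)
predictions = {
    (x, z)
    for (x, p1, y) in facts if p1 == "marriedTo"
    for (y2, p2, z) in facts if p2 == "livesIn" and y2 == y
}
support = sum(1 for (x, z) in predictions if (x, "livesIn", z) in facts)
confidence = support / len(predictions)
print(support, round(confidence, 2))  # 2 predictions confirmed out of 3
```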
Chebil, Wiem. "Méthodes d'indexation multi-terminologique à base de connaissances : application aux documents en santé." Rouen, 2016. http://www.theses.fr/2016ROUES031.
The large quantity of data managed by information retrieval systems is a real challenge, especially in the biomedical field. Indeed, the task of indexing documents and queries is burdensome for experts, and replacing manual indexing by automatic approaches is essential. With the aim of improving the performance of automatic data management, we propose in this thesis a set of approaches that aim to minimize the errors made when indexing documents and queries. First of all, we evaluated the indexing function of CISMeF (Catalogue et Index des Sites Médicaux en langue Française) through an empirical study. We then built on the identified categories of indexing errors to propose an indexing approach based on a Vector Space Model (VSM), which aims to reduce stemming errors and the irrelevant information generated by partial matching. The latter task is performed using the semantic and statistical information supplied by the UMLS (Unified Medical Language System). The VSM-based approach also proposes a new weight for evaluating indexing terms. This weight is semantic and statistical, and takes into account the structure of the document. We also exploited a Bayesian network, which contributes through its capacity to handle uncertainty and to exploit the architecture of terminologies to better classify concepts. Furthermore, we proposed, for the first time, an approach for indexing documents with a possibilistic network (PN). Our contribution through this approach is to improve the estimation of the relevance of concepts through a double evaluation, consisting of two measures: possibility and necessity. We then combined the VSM and PN models, based on the fact that their advantages differ and the two models are complementary. We also exploited the PN, for the first time, for enriching queries with new concepts that are semantically close to those of the initial index; this approach improves the ranking of the concepts that are candidates for enrichment. The integration of these contributions into an information retrieval system, and its evaluation against existing systems, highlights the value of the proposed solutions for reducing indexing errors.
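As a hedged illustration of the two possibilistic scores mentioned above (possibility and necessity), the following toy computation shows how they behave given hypothetical degrees to which a document's terms match a concept; it is a didactic fragment, not the thesis's network model:

```python
# Hypothetical matching degrees between a document's terms and one concept.
term_match = {"cardiac": 0.9, "failure": 0.7, "acute": 0.2}

# Possibility: the best-case (most optimistic) degree of match.
possibility = max(term_match.values())                   # 0.9
# Necessity: the guaranteed degree, 1 - possibility of the complement.
necessity = 1 - max(1 - v for v in term_match.values())  # 0.2
print(possibility, necessity)
```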
Razmerita, Liana. "Modèle utilisateur et modélisation utilisateur dans les systèmes de gestion des connaissances : une approche fondée sur les ontologies." Toulouse 3, 2003. http://www.theses.fr/2003TOU30179.
Full textSimonet, Geneviève. "Héritage non monotone à base de chemins et de graphes partiels." Montpellier 2, 1994. http://www.theses.fr/1994MON20151.
Full textVignard, Philippe. "Un mécanisme d'exploitation à base de filtrage flou pour une représentation des connaissances centrée objets." Phd thesis, Grenoble INPG, 1985. http://tel.archives-ouvertes.fr/tel-00316169.
Full textKarray, Mohamed Hedi. "Contribution à la spécification et à l’élaboration d’une plateforme de maintenance orientée connaissances." Thesis, Besançon, 2012. http://www.theses.fr/2012BESA2013/document.
Keeping industrial equipment in operational condition is a principal challenge for firms' production. It shifts maintenance from a cost center to a profit center, which has led to the massive development of maintenance support systems, from CMMS (GMAO) software to e-maintenance platforms. These systems provide the maintenance agent with decision support and a set of services allowing computerized management of the core activities of the maintenance process (e.g. intervention, planning, diagnosis). However, user requirements keep evolving over time with users' expertise, their renewed knowledge and new constraints, while the existing services do not follow these requirements and need to be updated. In this thesis, an overview of the advantages and drawbacks of existing computerized support systems, in particular e-maintenance platforms (the most advanced maintenance systems), is presented in order to meet users' needs and propose scalable, on-demand services. To overcome the shortcomings of existing systems, we propose the s-maintenance concept, characterized by collaborative exchanges between users and applications and by common knowledge of the maintenance field. To implement this concept, a knowledge-oriented platform is proposed, providing auto-x functionalities (auto-traceability, auto-learning and auto-management) and meeting the s-maintenance characteristics. The component-based architecture of this platform also relies on knowledge shared between the integrated components, for the benefit of semantic interoperability as well as knowledge capitalization. A maintenance domain ontology is also developed, on which the knowledge base rests. Finally, to develop the auto-x functionalities provided by the platform, a trace-based system is proposed, exploiting the knowledge base and the associated ontology.
Ayari, Naouel. "Modélisation des connaissances et raisonnement à base d'ontologies spatio-temporelles : application à la robotique ambiante d'assistance." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1023.
In this thesis, we propose a generic framework for modeling and managing context in ambient and robotic intelligent systems. The contextual knowledge considered is of several types and derived from multimodal perception: spatial and/or temporal knowledge, changes in the states and properties of entities, and statements in natural language. To do this, we proposed an extension of the Narrative Knowledge Representation and Reasoning (NKRL) language to reach a unified representation of contextual knowledge, whether spatial, temporal or spatio-temporal, and to perform the associated reasoning. We exploited the expressiveness of the n-ary ontologies on which the NKRL language is based to address the problems encountered in spatial and dynamic knowledge representation approaches based on binary ontologies, which are commonly used in ambient intelligence and robotics. The result is a richer, finer and more coherent modeling of context, allowing better adaptation of user assistance services in ambient and robotic intelligent systems. The first contribution concerns the modeling of spatial and/or temporal knowledge and contextual changes, and spatial, temporal or spatio-temporal inference. The second contribution concerns the development of a methodology for carrying out syntactic processing and semantic annotation in order to extract, from a statement in natural language, spatial or temporal contextual knowledge in NKRL. These contributions have been validated and evaluated in terms of performance (processing time, error rate and user satisfaction rate) in scenarios involving different forms of services: well-being assistance, social assistance and assistance with the preparation of a meal.
Guizol, Léa. "Partitioning semantics for entity resolution and link repairs in bibliographic knowledge bases." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20188/document.
We propose a qualitative entity resolution approach to repair links in a bibliographic knowledge base. Our research question is: "How to detect and repair erroneous links in a bibliographic knowledge base using qualitative methods?" The proposed approach is decomposed into two major parts. The first contribution consists in a partitioning semantics using symbolic criteria in order to detect erroneous links. The second one consists in a repair algorithm restoring link quality. We implemented our approach and proposed a qualitative and quantitative evaluation for the partitioning semantics, as well as proofs of certain properties of the repair algorithms.
Gandon, Fabien. "Graphes RDF et leur Manipulation pour la Gestion de Connaissances." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00351772.
In the second chapter, we recall how graph-based formalisms can be used to represent knowledge with a variable degree of formalization, depending on the needs identified in the application scenarios and on the processing to be carried out, notably for setting up semantic webs. We briefly identify the characteristics of some of these formalisms that are used in our work and the extension opportunities they offer. We also summarize an ongoing initiative to factor out the definition of the mathematical structures shared by these formalisms and to reuse the algorithms common to these structures.
In the third chapter, we explain that an ontology supports other types of reasoning than logical derivation. For example, the hierarchy of notions contained in an ontology can be seen as a metric space, allowing distances to be defined for comparing the semantic proximity of two notions. We have implemented this idea in several scenarios such as the distributed allocation of annotations, approximate search and clustering. In this third chapter we summarize various uses we have made of semantic distances and discuss our position in this domain. We give the usage scenarios and the distances used in a representative sample of the projects we have led. For us, this first series of experiments demonstrated the interest and potential of distances, and also underlined the importance of the work that remains to identify and characterize the existing families of distances and their respective suitability for the tasks with which our users wish to be assisted.
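To illustrate the idea of treating a hierarchy as a metric space, here is a minimal sketch of one crude distance (edge counting through the lowest common ancestor) over an invented toy taxonomy; the distances actually surveyed in this chapter are more elaborate:

```python
# Toy taxonomy as a child -> parent map (invented for the example).
parents = {"cat": "mammal", "dog": "mammal", "mammal": "animal", "bird": "animal"}

def ancestors(c):
    """Return the path from a concept up to the root, inclusive."""
    path = [c]
    while c in parents:
        c = parents[c]
        path.append(c)
    return path

def distance(a, b):
    """Edge-counting semantic distance via the lowest common ancestor."""
    pa, pb = ancestors(a), ancestors(b)
    lca = next(x for x in pa if x in pb)
    return pa.index(lca) + pb.index(lca)

print(distance("cat", "dog"), distance("cat", "bird"))  # 2 and 3
```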
In the fourth chapter, we recall that a semantic web such as the ones used in our scenarios, whether public or on a company intranet, generally relies on several web servers, each offering different ontologies and different annotation bases that use these ontologies to describe resources. Usage scenarios often lead a user to formulate queries whose answers combine annotation elements distributed across several of these servers.
This then requires being able to:
(1) identify the servers likely to hold answer elements;
(2) query remote servers about the elements they know without overloading the network;
(3) decompose the query and route the sub-queries to the appropriate servers;
(4) recompose the results from the partial answers.
With the semantic web, we have the building blocks of a distributed architecture. The fourth chapter summarizes a number of approaches we have proposed to take distribution into account and to manage distributed resources in the semantic webs we design.
Ontologies and knowledge representations are often at the technical core of our architectures, notably when they use formal representations. To interact with the semantic web and its applications, the fifth chapter recalls that we need interfaces that make them intelligible to end users. In our inference systems, knowledge elements are manipulated and combined, and even if the initial elements were intelligible, the intelligibility of the results is not preserved by these transformations.
Currently, and in the best cases, interface designers implement ad hoc transformations of internal data structures into interface representations, often forgetting the reasoning capabilities these representations could provide for building such interfaces. In the worst cases, and still too often, normally internal representation structures are exposed directly in widgets without justification and, instead of supporting interaction, these representations weigh the interfaces down.
Since they receive contributions from an open world, semantic web interfaces will have to be, at least in part, generated dynamically and rendered for each structure that must come into contact with users. The fifth and final chapter underlines this growing opportunity to use ontology-based systems to assist interactions with our users.
Nikiema, Jean. "Intégration de connaissances biomédicales hétérogènes grâce à un modèle basé sur les ontologies de support." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0179/document.
In the biomedical domain, there are almost as many knowledge resources in health as there are application fields. These knowledge resources, described according to different representation models and for different contexts of use, raise the problem of the complexity of their interoperability, especially for current public health issues such as personalized medicine, translational medicine and the secondary use of medical data. Indeed, these knowledge resources may represent the same notion in different ways, or represent different but complementary notions. To be able to use knowledge resources jointly, we studied three processes that can overcome semantic conflicts (difficulties encountered when relating distinct knowledge resources): alignment, integration and the semantic enrichment of the integration. Alignment consists in creating a set of equivalence or subsumption mappings between entities from knowledge resources. Integration aims not only to find mappings but also to organize all the knowledge resources' entities into a unique and coherent structure. Finally, the semantic enrichment of the integration consists in finding all the required mapping relations between entities of distinct knowledge resources (equivalence, subsumption, transversal and, failing that, disjunction relations). In this frame, we first aligned laboratory test terminologies: LOINC and the local terminology of Bordeaux hospital. We pre-processed the noisy labels of the local terminology to reduce the risk of naming conflicts, then removed erroneous mappings (confounding conflicts) using the structure of LOINC. Secondly, we integrated RxNorm into SNOMED CT. We constructed formal definitions for each entity in RxNorm by using their definitional features (active ingredient, strength, dose form, etc.) according to the design patterns proposed by SNOMED CT, and then integrated the constructed definitions into SNOMED CT. The obtained structure was classified, and the inferred equivalences generated between RxNorm and SNOMED CT were compared to morphosyntactic mappings. Our process resolved some cases of naming conflicts but was confronted with confounding and scaling conflicts, which highlights the need to improve RxNorm and SNOMED CT. Finally, we performed a semantically enriched integration of ICD-10 and ICD-O3 using SNOMED CT as support. As ICD-10 describes diagnoses and ICD-O3 describes this notion according to two different axes (i.e., histological lesions and anatomical structures), we used the SNOMED CT structure to identify transversal relations between their entities (resolution of open conflicts). During the process, the structure of SNOMED CT was also used to remove erroneous mappings (naming and confounding conflicts) and to disambiguate multiple mappings (scaling conflicts).
Jones, Hazaël. "Raisonnement à base de règles implicatives floues." Toulouse 3, 2007. http://thesesups.ups-tlse.fr/113/.
This thesis considers expert knowledge modelling with implicative fuzzy rules. It explores the benefits of these rules compared to the most frequently used fuzzy rules, conjunctive rules. However, inference from implicative rules and fuzzy inputs is not easy and has long been an impediment to their use. The main difficulties are the complexity of inference with several implicative rules and fuzzy inputs, the partition design, and the semantic interpretation for users familiar with reasoning with conjunctive fuzzy rules. Our work focuses on these points. We present an inference method using implicative fuzzy rules and fuzzy inputs, which can easily implement implicative reasoning in the one- and two-dimensional cases. We also compare conjunctive and implicative rules, and we study the semantics of these rules in terms of logic and practical use. A real-world illustration in the food industry is presented: the goal of this work is the prediction of post-maturing cheese defects from information available before the maturing process. The available information comes from CTFC (Technical Center on Comtois Cheese) expert knowledge and process data. Since the developed methods are generic, they can be used for a wide class of applications, namely those in which the expert knowledge is expressed in the form of a model. They provide modeling perspectives that respect both imprecise data and the characteristics of expert reasoning.
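The contrast between the two rule semantics can be sketched numerically; the following toy example (invented membership functions, with the Mamdani min for the conjunctive reading and the Gödel implication for the implicative one) shows that an implicative rule constrains the output rather than accumulating evidence:

```python
# Fuzzy sets A (on X) and B (on Y), as simple triangular membership functions.
A = lambda x: max(0.0, 1 - abs(x - 5) / 2)
B = lambda y: max(0.0, 1 - abs(y - 10) / 4)

def conjunctive(x0, y):
    """Mamdani reading of 'if X is A then Y is B': a conjunction min(A, B)."""
    return min(A(x0), B(y))

def implicative(x0, y):
    """Goedel implication: fully possible where B(y) >= A(x0), capped otherwise."""
    return 1.0 if A(x0) <= B(y) else B(y)

x0 = 4.5  # crisp input
print([round(conjunctive(x0, y), 2) for y in (8, 10, 12)])  # [0.5, 0.75, 0.5]
print([round(implicative(x0, y), 2) for y in (8, 10, 12)])  # [0.5, 1.0, 0.5]
```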
Dang-Ngoc, Tuyet-Tram. "Fédération de données semi-structurées avec XML." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2003. http://tel.archives-ouvertes.fr/tel-00733510.
Nguyen, Thi Hoa Hue. "La vérification de patrons de workflow métier basés sur les flux de contrôle : une approche utilisant les systèmes à base de connaissances." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4033/document.
This thesis tackles the problem of modelling semantically rich business workflow templates and proposes a process for developing such templates. The objective is to transform a business process into a control-flow-based business workflow template that guarantees syntactic and semantic validity. The main challenges are: (i) to define a formalism for representing business processes; (ii) to establish automatic control mechanisms that ensure the correctness of a business workflow template, based on a formal model and a set of semantic constraints; and (iii) to organize the knowledge base of workflow templates for a workflow development process. We propose a formalism which combines control flow (based on Coloured Petri Nets (CPNs)) with semantic constraints to represent business processes. The advantage of this formalism is that it allows not only syntactic checks based on the CPN model, but also semantic checks based on Semantic Web technologies. We start by designing an OWL ontology, called the CPN ontology, to represent the concepts of CPN-based business workflow templates. The design phase is followed by a thorough study of the properties of these templates in order to transform them into a set of axioms for the CPN ontology. In this formalism, a business process is syntactically transformed into an instance of the CPN ontology. Syntactic checking of a business process therefore reduces to verification, by inference over the concepts and axioms of the CPN ontology, of the corresponding instance.
Karray, Mohamed Hedi. "Contribution à la spécification et à l'élaboration d'une plateforme de maintenance orientée connaissances." Phd thesis, Université de Franche-Comté, 2012. http://tel.archives-ouvertes.fr/tel-00914600.
Full textDang, Ngoc Tuyet Tram. "Federation de données semi-structurées avec XML." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2003. http://tel.archives-ouvertes.fr/tel-00005162.
Semi-structured data are irregular: data may be missing, similar concepts may be represented by different data types, and the structures themselves may be poorly known. This absence of a predefined schema, which makes it possible to account for all data from the outside world, has the drawback of complicating the algorithms for integrating data from different sources.

We propose a mediation architecture based entirely on XML. The objective of this mediation architecture is to federate distributed data sources of different types. It relies on the XQuery language, a functional language designed to formulate queries over XML documents. The mediator analyses queries expressed in XQuery and distributes the execution of each query over the different sources before recomposing the results.

Query evaluation must exploit the specificities of the data as much as possible and allow effective optimization. We describe the algebra XAlgebre, built on operators designed for XML. This algebra aims to construct execution plans for the evaluation of XQuery queries and to process tuples of XML trees. These execution plans must be representable in a cost model, and the plan with minimum cost is selected for execution. In this thesis, we define a cost model for semi-structured data adapted to our algebra.

The data sources (DBMSs, web servers, search engines) can be very heterogeneous: they may have very different data-processing capabilities, and cost models that are more or less well defined. To integrate this information into the mediation architecture, we must determine how to communicate it between the mediator and the sources, and how to integrate it. For this, we use XML-based languages such as XML Schema and MathML to export metadata, cost formulas and source capabilities. This exported information is communicated through an application interface named XML/DBC.

Finally, various optimizations specific to the mediation architecture must be considered. To this end, we introduce a semantic cache based on a DBMS prototype that efficiently stores XML data natively.
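Schematically, the mediator pattern described above can be reduced to a few lines (hypothetical sources and keys; the thesis itself decomposes XQuery queries through the XAlgebre operators, which this toy does not attempt):

```python
# Two heterogeneous "sources" exposed to a mediator as plain collections.
sources = {
    "catalog": [{"id": 1, "title": "XML in a Nutshell"}],
    "reviews": [{"id": 1, "stars": 4}],
}

def mediate(book_id):
    """Route one sub-query to each source, then recompose the partial results."""
    title = next(b["title"] for b in sources["catalog"] if b["id"] == book_id)
    stars = next(r["stars"] for r in sources["reviews"] if r["id"] == book_id)
    return {"title": title, "stars": stars}

print(mediate(1))  # {'title': 'XML in a Nutshell', 'stars': 4}
```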
Berrut, Catherine. "Une méthode d'indexation fondée sur l'analyse sémantique de documents spécialisés : le prototype RIME et son application à un corpus médical." Phd thesis, Grenoble 1, 1988. http://tel.archives-ouvertes.fr/tel-00330027.
Full textHedi, Karray Mohamed. "Contribution à la spécification et à l'élaboration d'une plateforme de maintenance orientée connaissances." Phd thesis, Université de Franche-Comté, 2012. http://tel.archives-ouvertes.fr/tel-00716178.
Full textSettouti, Lotfi. "Systèmes à base de traces modélisées : modèles et langages pour l'exploitation des traces d'interactions." Thesis, Lyon 1, 2011. http://www.theses.fr/2011LYO10019.
This thesis was funded by the Rhône-Alpes Region as part of the project "Personalisation of Technology-Enhanced Learning (TEL) Systems". Personalising TEL systems is, above all, dependent on the capacity to produce relevant and exploitable traces of individual or collaborative learning activities. In this field, exploiting interaction traces raises several problems, ranging from their representation in a normalised and intelligible manner to their processing and interpretation in a continuous way during the ongoing TEL activities. The proliferation of trace-based exploitations raises the need for generic tools to support their representation and exploitation. The main objective of this thesis is to define the theoretical foundations of such generic tools. To do so, we define the notion of a Trace-Based System (TBS) as a kind of knowledge-based system whose main source of knowledge is a set of traces of user-system interactions. This thesis investigates practical and theoretical issues related to TBSs, covering the spectrum from the concepts, services and architecture involved in such a TBS (conceptual framework) to language design with declarative semantics (formal framework). The central topic of our framework is the development of a high-level trace transformation language supporting deductive rules as an abstraction and reasoning mechanism for traces. The declarative semantics of this language is defined by a (Tarski-style) model theory with an accompanying fixpoint theory.
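Purely as an invented illustration of what a trace transformation rule does, the sketch below raises two low-level interaction events into one higher-level observed element; the thesis defines such rules declaratively, not as Python code:

```python
# A trace is a time-ordered list of low-level interaction events.
trace = [
    {"type": "keypress", "target": "answer_box", "t": 11},
    {"type": "click", "target": "submit", "t": 15},
    {"type": "click", "target": "help", "t": 20},
]

def transform(trace):
    """Rule: keypress(answer_box) followed by click(submit) => 'answer_submitted'."""
    out = []
    for a, b in zip(trace, trace[1:]):
        if a["type"] == "keypress" and a["target"] == "answer_box" \
                and b["type"] == "click" and b["target"] == "submit":
            out.append({"type": "answer_submitted", "t": b["t"]})
    return out

print(transform(trace))  # [{'type': 'answer_submitted', 't': 15}]
```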
Tran, Duc Minh. "Découverte de règles d'association multi-relationnelles à partir de bases de connaissances ontologiques pour l'enrichissement d'ontologies." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4041/document.
In the Semantic Web context, OWL ontologies represent explicit domain knowledge based on the conceptualization of domains of interest, while the corresponding assertional knowledge is given by RDF data referring to them. In this thesis, based on ideas derived from Inductive Logic Programming (ILP), we aim at discovering hidden knowledge patterns in the form of multi-relational association rules by exploiting the evidence coming from the assertional data of ontological knowledge bases. Specifically, discovered rules are coded in SWRL to be easily integrated within the ontology, thus enriching its expressive power and augmenting the assertional knowledge that can be derived. Two algorithms applied to populated ontological knowledge bases are proposed for finding rules with high inductive power: (i) a level-wise generate-and-test algorithm and (ii) an evolutionary algorithm. We performed experiments on publicly available ontologies, validating the performance of our approach and comparing it with the main state-of-the-art systems. In addition, we carried out a comparison of popular asymmetric metrics, originally proposed for scoring association rules, as building blocks of a fitness function for the evolutionary algorithm, in order to select metrics suited to the data semantics. To improve system performance, we proposed an algorithm that computes the metrics directly instead of querying via SPARQL-DL.
Harispe, Sébastien. "Knowledge-based Semantic Measures : From Theory to Applications." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20038/document.
The notions of semantic proximity, distance, and similarity have long been considered essential for the elaboration of numerous cognitive processes, and are therefore of major importance for the communities involved in the development of artificial intelligence. This thesis studies the diversity of semantic measures which can be used to compare lexical entities, concepts and instances by analysing corpora of texts and knowledge representations (e.g., ontologies). Strengthened by the development of Knowledge Engineering and Semantic Web technologies, these measures are arousing increasing interest in both academic and industrial fields. This manuscript begins with an extensive state-of-the-art survey which presents numerous contributions proposed by several communities, and underlines the diversity and interdisciplinary nature of this domain. Thanks to this work, despite the apparent heterogeneity of semantic measures, we were able to distinguish common properties and therefore propose a general classification of existing approaches. Our work goes on to look more specifically at measures which take advantage of knowledge representations expressed by means of semantic graphs, e.g. RDF(S) graphs. We show that these measures rely on a reduced set of abstract primitives and that, even if they have generally been defined independently in the literature, most of them are only specific expressions of generic parametrised measures. This result leads us to the definition of a unifying theoretical framework for semantic measures, which can be used to: (i) design new measures, (ii) study theoretical properties of measures, (iii) guide end-users in the selection of measures adapted to their usage context. The relevance of this framework is demonstrated in its first practical applications which show, for instance, how it can be used to perform theoretical and empirical analyses of measures with a previously unattained level of detail. Interestingly, this framework provides a new insight into semantic measures and opens interesting perspectives for their analysis. Having uncovered a flagrant lack of generic and efficient software solutions dedicated to (knowledge-based) semantic measures, a lack which clearly hampers both the use and analysis of semantic measures, we consequently developed the Semantic Measures Library (SML): a generic software library dedicated to the computation and analysis of semantic measures. The SML can be used to take advantage of hundreds of measures defined in the literature or those derived from the parametrised functions introduced by the proposed unifying framework. These measures can be analysed and compared using the functionalities provided by the library. The SML is accompanied by extensive documentation, community support and software solutions which enable non-developers to take full advantage of the library.
In broader terms, this project proposes to federate the several communities involved in this domain in order to create an interdisciplinary synergy around the notion of semantic measures: http://www.semantic-measures-library.org. This thesis also presents several algorithmic and theoretical contributions related to semantic measures: (i) an innovative method for the comparison of instances defined in a semantic graph, whose benefits we underline in the definition of content-based recommendation systems, (ii) a new approach to compare concepts defined in overlapping taxonomies, (iii) algorithmic optimisations for the computation of a specific type of semantic measure, and (iv) a semi-supervised learning technique which can be used to identify semantic measures adapted to a specific usage context, while simultaneously taking into account the uncertainty associated with the benchmark in use. These contributions have been validated by several international and national publications.
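For readers unfamiliar with this family of measures, here is a tiny, self-contained example of one classic knowledge-based measure (a Lin-style, information-content-based similarity) computed over an invented taxonomy; the SML itself is a Java library offering hundreds of such measures:

```python
import math

# Invented occurrence counts for concepts, where each count also covers
# the concept's descendants (so "animal" subsumes everything here).
counts = {"animal": 100, "mammal": 60, "cat": 25, "dog": 35}
total = counts["animal"]

def ic(c):
    """Information content: -log of the concept's probability."""
    return -math.log(counts[c] / total)

def lin(c1, c2, lcs):
    """Lin similarity, given the least common subsumer of c1 and c2."""
    return 2 * ic(lcs) / (ic(c1) + ic(c2))

print(round(lin("cat", "dog", "mammal"), 3))  # ~0.42
```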
Rivault, Yann. "Analyse de trajectoires de soins à partir de bases de données médico-administratives : apport d'un enrichissement par des connaissances biomédicales issues du Web des données." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1B003/document.
Reusing healthcare administrative databases for public health research is relevant and opens new perspectives. In pharmacoepidemiology, it allows the study of diseases at large scale, as well as of care consumption for a population. Nevertheless, reusing these information systems, which were initially designed for accounting purposes and whose interoperability is limited, raises new challenges in terms of representation, integration, exploration and analysis. This thesis deals with the joint use of healthcare administrative databases and biomedical knowledge for the study of patient care trajectories. This includes both (1) the exploration and identification, through queries, of relevant care pathways in voluminous flows, and (2) the analysis of the retained trajectories. Semantic Web technologies and biomedical ontologies from the Linked Data cloud made it possible to identify care trajectories containing a drug interaction or a potential contraindication between a prescribed drug and the patient's state of health. In addition, we developed the R package queryMed to enable public health researchers to carry out such studies by overcoming the difficulties of using Semantic Web technologies and ontologies. After identifying potentially interesting trajectories, knowledge from biomedical nomenclatures and ontologies also enriched existing methods for analysing care trajectories, so as to better take into account the complexity of the data. This notably resulted in the integration of semantic similarities between medical concepts. Semantic Web technologies were also used to explore the results obtained.
David, Jérôme. "AROMA : une méthode pour la découverte d'alignements orientés entre ontologies à partir de règles d'association." Phd thesis, Université de Nantes, 2007. http://tel.archives-ouvertes.fr/tel-00200040.
In the literature, most work on ontology or schema alignment relies on an intensional definition of the schemas and uses relations based on similarity measures, which have the particularity of being symmetric (equivalences). In order to improve alignment methods, drawing on work on association rule discovery, on the associated quality measures, and on implicative statistical analysis, we propose to discover asymmetric matches (implications) between ontologies. The main contribution of this thesis is thus the design of an extensional, oriented alignment method based on the discovery of significant implications between two hierarchies grounded in a textual corpus.
Our alignment method consists of three successive phases. The pre-processing phase prepares the ontologies for alignment by redefining them over a common set of terms extracted from the texts and selected statistically. The mining phase extracts an implicative alignment between the hierarchies. The final post-processing phase produces consistent and minimal alignments (according to a redundancy criterion).
The main contributions of this thesis are: (1) an extended alignment model taking implication into account, with the notions of closure and cover of an alignment, which formalize the redundancy and consistency of an alignment; we also study the symmetry and cardinalities of an alignment; (2) the realization of the AROMA method and of an interface supporting the validation of alignments; (3) an extension of a semantic evaluation model to account for the presence of implications in an alignment; (4) a study of the behaviour and performance of AROMA on different types of test sets (Web directories, catalogues and OWL ontologies) with a selection of six quality measures.
The results obtained are promising, as they show the complementarity of our method with existing approaches.
Destandau, Marie. "Path-Based Interactive Visual Exploration of Knowledge Graphs." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG063.
Knowledge Graphs facilitate the pooling and sharing of information from different domains. They rely on small units of information named triples that can be combined to form higher-level statements. Producing interactive visual interfaces to explore collections in Knowledge Graphs is a complex and mostly unresolved problem. In this thesis, I introduce the concept of path outlines to encode aggregate information relative to a chain of triples. I demonstrate three applications of the concept with the design and implementation of three open source tools. S-Paths lets users browse meaningful overviews of collections; Path Outlines supports data producers in browsing the statements that can be produced from their data; and The Missing Path supports data producers in analysing incompleteness in their data. I show that the concept not only supports interactive visual interfaces for Knowledge Graphs but also helps improve their quality.
Buron, Maxime. "Raisonnement efficace sur des grands graphes hétérogènes." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX061.
The Semantic Web offers knowledge representations which allow heterogeneous data from several sources to be integrated into a unified knowledge base. In this thesis, we investigate techniques for querying such knowledge bases. The first part is devoted to query answering techniques over a knowledge base represented by an RDF graph subject to ontological constraints. Implicit information entailed by reasoning, enabled by the set of RDFS entailment rules, has to be taken into account to correctly answer such queries. First, we present a sound and complete query reformulation algorithm for Basic Graph Pattern queries, which exploits a partition of the RDFS entailment rules into assertion and constraint rules. Second, we introduce a novel RDF storage layout which combines two well-known layouts. For both contributions, our experiments assess our theoretical and algorithmic results. The second part considers the issue of querying heterogeneous data sources integrated into an RDF graph, using BGP queries. Following the Ontology-Based Data Access paradigm, we introduce a framework for data integration under an RDFS ontology using Global-Local-As-View mappings, rarely considered in the literature. We present several query answering strategies, which may materialize the integrated RDF graph or leave it virtual, and which differ in how and when RDFS reasoning is handled. We implement these strategies in a platform in order to conduct experiments, which demonstrate the particular interest of one of the strategies based on mapping saturation. Finally, we show that mapping saturation can be extended to reasoning defined by a subset of existential rules.
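To give a feel for the RDFS reasoning at stake, the following didactic fragment saturates a tiny triple set with two of the RDFS entailment rules (subclass transitivity and type propagation); it is not the reformulation algorithm of the thesis, and the triples are invented:

```python
# Tiny RDF graph as a set of (subject, predicate, object) triples.
triples = {
    ("ex:PhDThesis", "rdfs:subClassOf", "ex:Thesis"),
    ("ex:Thesis", "rdfs:subClassOf", "ex:Document"),
    ("ex:t1", "rdf:type", "ex:PhDThesis"),
}

changed = True
while changed:  # apply both rules until a fixpoint is reached
    new = set()
    for (s, p, o) in triples:
        for (s2, p2, o2) in triples:
            if p == p2 == "rdfs:subClassOf" and o == s2:
                new.add((s, "rdfs:subClassOf", o2))   # subclass transitivity
            if p == "rdf:type" and p2 == "rdfs:subClassOf" and o == s2:
                new.add((s, "rdf:type", o2))          # type propagation
    changed = not new <= triples
    triples |= new

print(sorted(triples))  # now includes (ex:t1, rdf:type, ex:Document)
```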
Werner, David. "Indexation et recommandation d'informations : vers une qualification précise des items par une approche ontologique, fondée sur une modélisation métier du domaine : application à la recommandation d'articles économiques." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS078/document.
Effective management of large amounts of information has become an increasingly important challenge for information systems. Every day, new information sources emerge on the web. Someone can easily find what he or she wants when seeking an article, a video or a specific artist. However, it becomes quite difficult, even impossible, to take an exploratory approach to discover new content. Recommender systems are software tools that aim to assist humans in dealing with information overload. The work presented in this PhD thesis proposes an architecture for the efficient recommendation of news articles. Our ontological approach relies on a model for the precise characterization of items based on a controlled vocabulary. The ontology contains a formal vocabulary modeling a view on the domain knowledge. Carried out in collaboration with the company Actualis SARL, this work has led to the marketing of a new, highly competitive product, FristECO Pro'fil.
Del Razo Lopez, Federico. "Recherche de sous-structures arborescentes ordonnées fréquentes au sein de bases de données semi-structurées." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2007. http://tel.archives-ouvertes.fr/tel-00203608.
The objective of this thesis is to propose a method for extracting frequent tree substructures. The approach is based on a compact representation of trees that seeks to reduce memory consumption during the mining process. In particular, we present a new technique for generating candidate trees that aims to reduce their number. Furthermore, we propose several algorithms to validate the support of candidate trees in a database according to various types of tree inclusion constraints: induced, embedded and fuzzy. Finally, we apply our algorithms to synthetic and real datasets and present the results obtained.
Membrado, Miguel. "Génération d'un système conceptuel écrit en langage de type semi-naturel en vue d'un traitment des données textuelles : application au langage médical." Paris 11, 1989. http://www.theses.fr/1989PA112004.
We present our research and our own realization of a KBMS (Knowledge-Based Management System) aiming at processing any kind of data, especially textual data, and the related knowledge. In this field of applied Artificial Intelligence, we propose a way of representing knowledge: describing it in a semi-natural language able to describe structures and relations as well as rules. Knowledge is managed as conceptual definitions appearing in a dictionary which represents the knowledge base. The power of this language allows many ambiguities to be processed, especially those coming from contextual polysemy, allows metonymy and incomplete knowledge to be dealt with, and allows several kinds of paraphrases to be resolved. Simultaneous polyhierarchies as well as chunks are taken into account. The system has been specially designed for the automatic processing of medical reports, with an application to neuroradiology taken as an example. It could equally be applied to any other professional field, including fields outside medicine. Text analysis is realized in two steps: first a conceptual extraction, then a structural analysis. Only the first step is covered in this thesis. It aims at retrieving pertinent documents, matching them to a given question by comparing concepts, not character strings. An overview of the second step is also presented. The final goal is to be able to retrieve the knowledge contained in the texts, i.e. the data themselves, and to manage it with respect to the knowledge represented in the dictionaries.
Dellal, Ibrahim. "Gestion et exploitation de larges bases de connaissances en présence de données incomplètes et incertaines." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2019. http://www.theses.fr/2019ESMA0016/document.
In the era of digitization, and with the emergence of several Semantic Web applications, many new knowledge bases (KBs) are available on the Web. These KBs contain (named) entities and facts about these entities. They also contain the semantic classes of these entities and their mutual links. In addition, multiple KBs can be interconnected by their entities, forming the core of the Web of linked data. A distinctive feature of these KBs is that they contain millions to trillions of unreliable RDF triples. This uncertainty has multiple causes. It can result from the integration of data sources with various levels of intrinsic reliability, or it can be caused by considerations of confidentiality. Furthermore, it may be due to factors related to a lack of information, the limits of measuring equipment, or the evolution of information. The goal of this thesis is to improve the usability of modern systems aiming at exploiting uncertain KBs. In particular, this work proposes cooperative and intelligent techniques that could help the user in decision-making when a query returns unsatisfactory results in terms of quantity or reliability. First, we address the problem of failing RDF queries (i.e., queries that result in an empty set of responses). This type of response is frustrating and does not meet the user's expectations. The approach proposed to handle this problem is query-driven and offers a twofold advantage: (i) it provides the user with a rich explanation of the failure of the query by identifying the MFSs (Minimal Failing Sub-queries), and (ii) it allows the computation of alternative queries, called XSSs (maXimal Succeeding Sub-queries), that are semantically close to the initial query and have non-empty answers. Moreover, from a user's point of view, this solution offers a high level of flexibility, given that several degrees of uncertainty can be considered simultaneously. In the second contribution, we study the dual problem (i.e., queries whose execution results in a very large set of responses). Our solution aims at reducing this set of responses to enable its analysis by the user. Counterparts of the MFSs and XSSs have been defined; they allow the identification, on the one hand, of the causes of the problem and, on the other hand, of alternative queries whose results are of reasonable size and can therefore be directly and easily used in the decision-making process. All our propositions have been validated with a set of experiments on different uncertain and large-scale knowledge bases (WatDiv and LUBM). We also used several triplestores to conduct our tests.
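The MFS/XSS idea lends itself to a compact illustration: the brute-force sketch below enumerates the sub-queries of a failing toy query and extracts the minimal failing and maximal succeeding ones (didactic only, far from the thesis's algorithms and its handling of uncertainty):

```python
from itertools import combinations

data = [{"name": "a", "age": 3}, {"name": "b"}]

def answers(q, data):
    """A toy 'query' is a set of required keys; an answer must have them all."""
    return [d for d in data if q <= d.keys()]

query = {"name", "age", "email"}  # fails: no record has an email
subs = [set(c) for r in range(1, len(query) + 1)
        for c in combinations(sorted(query), r)]

failing = [q for q in subs if not answers(q, data)]
mfs = [q for q in failing if not any(f < q for f in failing)]
succeeding = [q for q in subs if answers(q, data)]
xss = [q for q in succeeding if not any(q < s for s in succeeding)]
print("MFS:", mfs)  # [{'email'}]
print("XSS:", xss)  # [{'name', 'age'}]
```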
Christophe, Benoit. "Semantic based middleware to support nomadic users in IoT-enabled smart environments." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066669/document.
With the growth of the Internet of Things, environments composed of diverse connected resources (devices, sensors, services, data, etc.) are becoming a tangible reality. Together with the preponderant place that smartphones take in the daily life of users, these nascent smart spaces pave the way for the development of novel types of applications, carried by the phones of nomadic users and dynamically reconfiguring themselves to make use of appropriate connected resources. Creating these applications goes hand in hand with the design of tools supporting the nomadic users roaming in these spaces, in particular by enabling the efficient selection of resources. While such selection calls for the design of theoretically grounded descriptions, it should also consider the profile and preferences of the users. Finally, the rise of (possibly mobile) connected resources calls for designing a scalable selection process. Progress in the field has however been sluggish, especially because the stakeholders composing this ecosystem of "IoT-enabled smart environments", and the interactions between them, are poorly understood. The multiplicity of diverse connected resources entails interoperability and scalability problems; while the Semantic Web has helped solve the interoperability issue, it exacerbates the scalability one, and misreadings of the ecosystem have led to models that only partially cover the characteristics of connected resources. Building on our research work performed over the last six years, this dissertation identifies the interactions between the stakeholders of the nascent ecosystem and proposes formal representations of them. The dissertation further designs a framework providing search capabilities to support the selection of connected resources through semantic analysis. In particular, the framework relies on a distributed architecture that we designed to manage scalability issues. The framework is embodied in a VR Gateway deployed in a set of interconnected smart places and has been assessed by several experiments.
Gyawali, Bikash. "Surface Realisation from Knowledge Bases." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0004/document.
Natural Language Generation is the task of automatically producing natural language text to describe information present in non-linguistic data. It involves three main subtasks: (i) selecting the relevant portion of the input data; (ii) determining the words that will be used to verbalise the selected data; and (iii) mapping these words into natural language text. The latter task is known as Surface Realisation (SR). In my thesis, I study the SR task in the context of input data coming from Knowledge Bases (KBs). I present two novel approaches to surface realisation from knowledge bases: a supervised approach and a weakly supervised approach. In the first, supervised, approach, I present a corpus-based method for inducing a Feature-Based Lexicalized Tree Adjoining Grammar from a parallel corpus of text and data. I show that the induced grammar is compact and generalises well over the test data, yielding results that are close to those produced by a handcrafted symbolic approach and that outperform an alternative statistical approach. In the weakly supervised approach, I explore a method for surface realisation from KB data which does not require a parallel corpus. Instead, I build a corpus from heterogeneous sources of domain-related text and use it to identify possible lexicalisations of KB symbols and their verbalisation patterns. I evaluate the output sentences and analyse the issues relevant to learning from non-parallel corpora. In both approaches, the proposed methods are generic and can easily be adapted for input from other ontologies for which a parallel or non-parallel corpus exists.
Gaignard, Alban. "Partage et production de connaissances distribuées dans des plateformes scientifiques collaboratives." Phd thesis, Université de Nice Sophia-Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00827926.
Full textThomas, Corinne. "Accès par le contenu à des documents numérisés contenant du texte et de l'image." Paris 7, 2001. http://www.theses.fr/2001PA077150.
Full textLécué, Freddy. "Composition de Services Web: Une Approche basée Liens Sémantiques." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2008. http://tel.archives-ouvertes.fr/tel-00782557.
Full textAmad, Ashraf. "L’acquisition et l’extraction de connaissances dans un contexte patrimoniale peu documenté." Thesis, Paris 8, 2017. http://www.theses.fr/2017PA080101.
The importance of cultural heritage documentation increases in parallel with the risks to which this heritage is exposed, such as wars, uncontrolled urban development, natural disasters, neglect and inappropriate conservation techniques or strategies. Such documentation is a fundamental tool for the assessment, conservation and management of cultural heritage, allowing us to estimate its historical, scientific, social and economic value. According to several international institutions dedicated to the preservation of cultural heritage, there is an urgent need to develop computer solutions to facilitate and support the documentation of poorly documented cultural heritage, especially in developing countries where resources are lacking. Among these countries, Palestine represents a relevant case study for this lack of heritage documentation. To address this issue, we propose an approach to knowledge acquisition and extraction in the context of poorly documented heritage. We take the Church of the Nativity in Palestine as a case study and implement our theoretical approach through the development of a platform for the acquisition and extraction of heritage knowledge. Our solution is based on semantic technologies, which make it possible, from the outset, to provide a rich ontological description, better structuring of the information, a high level of interoperability and better automatic processing without additional effort. Additionally, our approach is evolutionary and reciprocal, because the acquisition of knowledge (in structured form) improves the extraction of heritage knowledge from unstructured text, and vice versa. The interaction between the two components of our system, as well as the heritage knowledge itself, therefore develops and improves over time, especially as our system uses experts' manual contributions and validations of the automatic results (in both components) to optimize its performance.