Dissertations / Theses on the topic 'Résumés de bases de connaissances'
Consult the top 50 dissertations / theses for your research on the topic 'Résumés de bases de connaissances.'
Rihany, Mohamad. "Keyword Search and Summarization Approaches for RDF Dataset Exploration." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG030.
An increasing number of datasets are published on the Web, expressed in the standard languages proposed by the W3C such as RDF, RDF(S), and OWL. These datasets represent an unprecedented amount of data available for users and applications. In order to identify and use the relevant datasets, users and applications need to explore them using queries written in SPARQL, a query language proposed by the W3C. But in order to write a SPARQL query, a user must not only be familiar with the query language but also have knowledge of the content of the RDF dataset in terms of the resources, classes or properties it contains. The goal of this thesis is to provide approaches to support the exploration of these RDF datasets. We have studied two alternative and complementary exploration techniques: keyword search and summarization of an RDF dataset. Keyword search returns RDF graphs in response to a query expressed as a set of keywords, where each resulting graph is the aggregation of elements extracted from the source dataset. These graphs represent possible answers to the keyword query, and they can be ranked according to their relevance. Keyword search in RDF datasets raises the following issues: (i) identifying, for each keyword in the query, the matching elements in the considered dataset, taking into account the differences in terminology between the keywords and the terms used in the RDF dataset; (ii) combining the matching elements to build the result, by defining aggregation algorithms that find the best way of linking matching elements; and (iii) finding appropriate metrics to rank the results, as several matching elements may exist for each keyword and consequently several graphs may be returned. In our work, we propose a keyword search approach that addresses these issues. Providing a summarized view of an RDF dataset can help a user in identifying whether this dataset is relevant to their needs, and in highlighting its most relevant elements.
This could be useful for the exploration of a given dataset. In our work, we propose a novel summarization approach based on the underlying themes of a dataset. Our theme-based summarization approach consists in extracting the existing themes in a data source, and building the summarized view so as to ensure that all these discovered themes are represented. This raises the following questions: (i) how to identify the underlying themes in an RDF dataset? (ii) what are the suitable criteria to identify the relevant elements in the themes extracted from the RDF graph? (iii) how to aggregate and connect the relevant elements to create a theme summary? and finally, (iv) how to create the summary of the whole RDF graph from the generated theme summaries? In our work, we propose a theme-based summarization approach for RDF datasets which answers these questions and provides a summarized representation ensuring that each theme is represented proportionally to its importance in the initial dataset.
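The matching, aggregation and ranking pipeline described in this abstract can be illustrated with a minimal sketch. This is not the thesis's actual system: the toy triples, the substring matcher and the BFS-based aggregation are all assumptions for illustration.

```python
from collections import deque

# Toy RDF-like dataset: (subject, predicate, object) triples (an assumption).
TRIPLES = [
    ("db:Paris", "rdf:type", "db:City"),
    ("db:Paris", "db:country", "db:France"),
    ("db:France", "rdf:type", "db:Country"),
    ("db:Seine", "db:flowsThrough", "db:Paris"),
]

def matches(keyword):
    """Return every graph element containing the keyword (naive terminology matching)."""
    elems = {e for t in TRIPLES for e in t}
    return {e for e in elems if keyword.lower() in e.lower()}

def connect(a, b):
    """BFS over the undirected triple graph; returns a path of triples linking a and b."""
    adj = {}
    for s, p, o in TRIPLES:
        adj.setdefault(s, []).append((o, (s, p, o)))
        adj.setdefault(o, []).append((s, (s, p, o)))
    frontier, seen = deque([(a, [])]), {a}
    while frontier:
        node, path = frontier.popleft()
        if node == b:
            return path
        for nxt, triple in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [triple]))
    return None

def answer(keywords):
    """Aggregate one matching element per keyword into a connected result graph.
    In a full system, several such graphs would be built and ranked (e.g. smaller first)."""
    picked = [sorted(matches(k))[0] for k in keywords if matches(k)]
    graph = set()
    for a, b in zip(picked, picked[1:]):
        graph |= set(connect(a, b) or [])
    return picked, graph
```

A real approach would enumerate all combinations of matches and score the resulting graphs; this sketch only shows the shape of the problem.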
Hébrail, Georges. "Définition de résumés et incertitude dans les grandes bases de données." Paris 11, 1987. http://www.theses.fr/1987PA112223.
Two apparently different problems are addressed in this study: building summaries of a database and modelling errors contained in a database. A model of summaries of a database is proposed. The summaries are physically stored in the database as redundant data and automatically updated when changes occur in the database. The cost of the summary updates is shown to be low. It is then possible to extract synthetic information from the database with a response time that is independent of the size of the database. The multiple applications of summaries in a database are also presented: extraction of synthetic information, query optimisation, data security, checking of integrity constraints, and distributed databases. A model of representation of errors contained in a database is then proposed. The model, based on a probabilistic approach, leads to a computation of the effect of errors on the results of database queries. The links between these two problems are pointed out: a single concept is used both for the definition of the summaries and for the representation of errors, and particular summaries are required to compute the error associated with a query. The study is independent of the data model (relational, network, hierarchical); the results are nevertheless applied to the relational model. The best area for application of the developed concepts is that of very large databases.
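The idea of summaries stored as redundant data and maintained at low cost on each change can be sketched as follows. This is a minimal illustration under assumed count/sum aggregates, not Hébrail's actual summary model.

```python
class Summary:
    """Redundant count/sum summary, maintained incrementally on each change,
    so synthetic queries never need to scan the base data (a sketch only)."""

    def __init__(self):
        self.count = {}
        self.total = {}

    def insert(self, group, value):
        # O(1) maintenance per database insertion.
        self.count[group] = self.count.get(group, 0) + 1
        self.total[group] = self.total.get(group, 0.0) + value

    def delete(self, group, value):
        # O(1) maintenance per database deletion.
        self.count[group] -= 1
        self.total[group] -= value

    def average(self, group):
        # Answered from the summary alone, independently of database size.
        return self.total[group] / self.count[group]
```

The same pattern extends to any aggregate that is self-maintainable under insertions and deletions (count, sum, mean), which is what keeps the update cost low.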
Cicchetti, Rosine. "Contribution à la modélisation des résumés dans les bases de données statistiques." Nice, 1990. http://www.theses.fr/1990NICE4394.
Castillo, Reitz Maria. "Etude d'un système d'extraction et présentation de résumés pour les bases de données." Montpellier 2, 1994. http://www.theses.fr/1994MON20277.
Lopez, Guillen Karla Ivon. "Contributions aux résumés visuels des bases de données géographiques basés sur les chorèmes." Lyon, INSA, 2010. http://www.theses.fr/2010ISAL0055.
When dealing with complex situations, as in political, economic and demographic trends, the use of visual metaphors is a very effective approach to help users discover relationships and new knowledge. Traditional cartography is an essential tool to describe facts and relations in a territory. Geographic concepts are associated with graphic symbols that help readers get an immediate understanding of the data represented. From a geographic database, it is common to extract multiple maps (cartographic restitution of all data). My thesis is part of an international research project whose objective is to study an innovative mapping solution that can represent both the existing situation and its dynamics, movement and change, in order to extract synthetic visual summaries of geographic databases. The proposed solution is based on the concept of chorem, defined by Brunet as a schematized representation of a territory. A chorem map provides an instant snapshot of relevant information and gives expert users an overview of objects and phenomena. Based on preliminary work, we first provide a formal definition and classification of chorems in terms of structure and meaning, to standardize both the construction and the use of chorems. Then a data mining phase is launched to extract the most significant patterns, which will be the basis of the chorems. Next, a system to generate chorem maps from available data sets is described, and an XML-based language, called ChorML, is specified, allowing communication between the modules of the system (from data mining for chorem extraction to chorem visualization). Level 0 of the language corresponds to the content of the database in the GML standard, level 1 describes the extracted patterns and chorems, and level 2 supports visualization in the SVG standard. In addition, the language integrates further information such as external information (e.g., names of seas and surrounding countries) and topological constraints to be met in the display.
Lafon, Philippe. "Méthodes de vérification de bases de connaissances." Phd thesis, Ecole Nationale des Ponts et Chaussées, 1991. http://tel.archives-ouvertes.fr/tel-00520738.
Lafon, Philippe. "Methodes de verification de bases de connaissances." Marne-la-vallée, ENPC, 1991. http://www.theses.fr/1991ENPC9119.
Naoum, Lamiaa. "Un modèle multidimensionnel pour un processus d'analyse en ligne de résumés flous." Nantes, 2006. http://www.theses.fr/2006NANT2101.
Ferrané, Isabelle. "Bases de donnees et de connaissances lexicales morphosyntaxiques." Toulouse 3, 1991. http://www.theses.fr/1991TOU30157.
Nesta, Lionel. "Cohérence des bases de connaissances et changement technique." Grenoble 2, 2001. http://www.theses.fr/2001GRE21003.
Ayel, Marc. "Détection d'incohérences dans les bases de connaissances : sacco." Chambéry, 1987. http://www.theses.fr/1987CHAMS006.
Ayel, Marc. "Détection d'incohérences dans les bases de connaissances SACCO /." Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37602486h.
Desreumaux, Marc. "Détection d'incohérences potentielles dans des bases de connaissances." Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37604506s.
Favier, Valerie. "Manipulation d'objets dynamiques dans les bases de connaissances." S.l. : Université Grenoble 1, 2008. http://dumas.ccsd.cnrs.fr/dumas-00335931.
Raschia, Guillaume. "SaintEtiq : une approche floue pour la génération de résumés à partir de bases de données relationnelles." Nantes, 2001. http://www.theses.fr/2001NANT2099.
Dussaux, Gaétan. "Ironweb - construction collective de bases de connaissances sur internet." INSA de Rouen, 2001. http://www.theses.fr/2001ISAM0004.
Bendou, Amar. "Exécution symbolique et bases de connaissances : Le systeme SVRG." Chambéry, 1998. http://www.theses.fr/1998CHAM5010.
Kinkielele, Dieudonné. "Vérification de la cohérence des bases de connaissances floues." Chambéry, 1994. http://www.theses.fr/1994CHAMS029.
Galarraga, Del Prado Luis. "Extraction des règles d'association dans des bases de connaissances." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0050/document.
The continuous progress of information extraction (IE) techniques has led to the construction of large general-purpose knowledge bases (KBs). These KBs contain millions of computer-readable facts about real-world entities such as people, organizations and places. KBs are important nowadays because they allow computers to “understand” the real world. They are used in multiple applications in Information Retrieval, Query Answering and Automatic Reasoning, among other fields. Furthermore, the plethora of information available in today's KBs allows for the discovery of frequent patterns in the data, a task known as rule mining. Such patterns or rules convey useful insights about the data, and can be used in several applications ranging from data analytics and prediction to data maintenance tasks. The contribution of this thesis is twofold: first, it proposes a method to mine rules on KBs, relying on a mining model tailored for potentially incomplete web-extracted KBs. Second, it shows the applicability of rule mining in several data-oriented tasks in KBs, namely fact prediction, schema alignment, canonicalization of (open) KBs, and prediction of completeness.
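The support/confidence machinery behind rule mining, including the PCA confidence that AMIE introduced for incomplete KBs, can be sketched for single-atom rules such as livesIn(x, y) ⇐ bornIn(x, y). The toy KB and function names are assumptions for illustration.

```python
# Toy KB of binary facts (an assumption for illustration).
KB = {
    ("bornIn", "alice", "paris"), ("bornIn", "bob", "lyon"),
    ("bornIn", "dave", "berlin"),
    ("livesIn", "alice", "paris"), ("livesIn", "bob", "lyon"),
    ("livesIn", "carol", "nice"),
}

def pairs(rel):
    """All (x, y) with rel(x, y) in the KB."""
    return {(x, y) for r, x, y in KB if r == rel}

def support(body_rel, head_rel):
    """Pairs satisfying both body and head of body(x, y) => head(x, y)."""
    return len(pairs(body_rel) & pairs(head_rel))

def confidence(body_rel, head_rel):
    """Standard confidence: support divided by the size of the body."""
    body = pairs(body_rel)
    return support(body_rel, head_rel) / len(body) if body else 0.0

def pca_confidence(body_rel, head_rel):
    """PCA confidence (as in AMIE): restrict the denominator to body pairs
    whose subject has at least one known head fact, so that missing facts in
    an incomplete KB are not counted as counter-examples."""
    known = {x for x, _ in pairs(head_rel)}
    denom = [(x, y) for x, y in pairs(body_rel) if x in known]
    return support(body_rel, head_rel) / len(denom) if denom else 0.0
```

Here dave has no livesIn fact at all, so standard confidence penalizes the rule while PCA confidence does not; this is exactly the incompleteness-awareness the abstract refers to.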
Arab, Mabrouka Sabiha. "Représentation des connaissances dans la logique trivalente de Lukasiewicz et structuration des bases de connaissances." Grenoble 2, 1993. http://www.theses.fr/1993GRE21006.
This work concerns uncertain and incomplete knowledge representation in the three-valued logic of Lukasiewicz, and a knowledge base management system based on coupling a DBMS and an expert system. We compare the three-valued logic of Lukasiewicz with classical logic and with the three-valued logics of Kleene and Bochvar. Propositional and predicate calculus are presented. By studying the skolemisation of three-valued formulas and an adaptation of Robinson's theorem proving, we obtain a procedure for passing to logic programming. The hypothesis of incomplete knowledge invalidates the closed-world assumption; in consequence, we define a non-monotonic rule for the resolution of goals such as "there exists an x such that not p(x)". The chosen approach needs three steps: construction of a negative program, resolution of the negative goals in this program, and resolution of the positive goals using the classical rule of negation as failure. Following McDermott's approach, we extend the three-valued logic of Lukasiewicz to a non-monotonic one by giving a second non-monotonic inference rule. The resulting system guarantees the existence of a non-monotonic extension, which is not the case in McDermott's approach. Unicity of this extension is not verified, so we propose a knowledge base structuration to describe the different worlds.
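The Łukasiewicz connectives this abstract builds on are standard and easy to state; contrasting them with Kleene's implication (one of the comparisons made in the thesis) shows where the two logics diverge on the "unknown" value. Truth values are encoded as 0 (false), 0.5 (unknown) and 1 (true).

```python
# Standard Łukasiewicz three-valued connectives.
def neg(x):        return 1 - x
def conj(x, y):    return min(x, y)
def disj(x, y):    return max(x, y)
def implies(x, y): return min(1, 1 - x + y)   # Łukasiewicz implication

def kleene_implies(x, y):
    """Kleene's strong implication, shown for the comparison."""
    return max(1 - x, y)
```

The key divergence: "unknown implies unknown" is true in Łukasiewicz logic but remains unknown in Kleene's.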
Barriere, Jean-Luc. "Detection d'anomalies dans les bases de connaissances : le systeme score." Paris 6, 1993. http://www.theses.fr/1993PA066507.
Corby, Olivier. "Un tableau réflexif pour la coopération de bases de connaissances." Nice, 1988. http://www.theses.fr/1988NICE4224.
Fauconnet, Cécile. "La structuration des bases de connaissances des entreprises de défense." Thesis, Paris 1, 2019. http://www.theses.fr/2019PA01E040.
In this research, we defend the thesis that defense companies, by the nature of their activity, have a unique innovation process. We mobilize the concept of knowledge base developed by Henderson & Clark [1990]. This base is composed of components, i.e. the smallest units of knowledge manipulated, and architectural knowledge, i.e. the way these bricks work together. Our empirical work is based on the analysis of the technological innovation process of firms with a defense activity. The first two chapters examine the contribution of scientific knowledge to defense technological innovations. The first chapter addresses this issue from the perspective of so-called “defense” technologies, while the second focuses on distinguishing defense companies from civil companies in terms of how they integrate scientific knowledge into their innovation processes. These chapters highlight the importance of scientific knowledge for military innovation. The third chapter focuses on the structure of the defense industry's knowledge base, using indicators from the literature on technological coherence [Nesta & Saviotti, 2005, 2006]. Our results in this chapter show the importance of the exploratory nature of the defense firm's knowledge base, i.e. the importance of making innovative technological connections. Finally, the last chapter questions the influence of the characteristics of firms' knowledge bases, and more particularly those of defense companies, on the performance of the technological innovation process. This chapter shows the specificity of defense firms' knowledge bases and its influence on the performance of their technological innovation process.
Corby, Olivier. "Un Tableau réflexif pour la coopération de bases de connaissances." Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb37612991p.
Corman, Julien. "Knowledge base ontological debugging guided by linguistic evidence." Thesis, Toulouse 1, 2015. http://www.theses.fr/2015TOU10070/document.
When they grow in size, knowledge bases (KBs) tend to include sets of axioms which are intuitively absurd but nonetheless logically consistent. This is particularly true of data expressed in OWL, as part of the Semantic Web framework, which favors the aggregation of sets of statements from multiple sources of knowledge, with overlapping signatures. Identifying nonsense is essential if one wants to avoid undesired inferences, but the sparse usage of negation within these datasets generally prevents the detection of such cases on a strict logical basis. And even if the KB is inconsistent, identifying the axioms responsible for the nonsense remains a non-trivial task. This thesis investigates the usage of automatically gathered linguistic evidence in order to detect and repair violations of common sense within such datasets. The main intuition consists in exploiting distributional similarity between named individuals of an input KB, in order to identify consequences which are unlikely to hold if the rest of the KB does. The repair phase then consists in selecting axioms to be preferably discarded (or at least amended) in order to get rid of the nonsense. A second strategy is also presented, which consists in strengthening the input KB with a foundational ontology in order to obtain an inconsistency, before performing a form of knowledge base debugging/revision which incorporates this linguistic input. This last step may also be applied directly to an inconsistent input KB. These propositions are evaluated with different sets of statements issued from the Linked Open Data cloud, as well as datasets of a higher quality which were automatically degraded for the evaluation. The results indicate that distributional evidence may actually constitute a relevant common ground for deciding between conflicting axioms.
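The distributional-similarity intuition can be sketched as follows: an individual whose context vector is far from those of its peers is a candidate for a common-sense violation. The toy vectors and the outlier heuristic are assumptions for illustration, not the thesis's actual method.

```python
import math

# Hypothetical distributional vectors for named individuals (toy values).
VEC = {
    "Paris":  [0.9, 0.1, 0.0],
    "Lyon":   [0.8, 0.2, 0.1],
    "Mozart": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two context vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def least_typical(entities):
    """The entity with the lowest average similarity to the others,
    i.e. the member whose presence in the group is the least plausible."""
    def avg_sim(e):
        rest = [x for x in entities if x != e]
        return sum(cosine(VEC[e], VEC[x]) for x in rest) / len(rest)
    return min(entities, key=avg_sim)
```

If the three individuals above were all asserted to be instances of City, the heuristic would single out Mozart as the consequence that is unlikely to hold.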
Naoum, Lamiaa. "Un modèle multidimensionnel pour un processus d'analyse en ligne de résumés flous." Phd thesis, Université de Nantes, 2006. http://tel.archives-ouvertes.fr/tel-00481046.
Zeitouni, Karine. "Analyse et extraction de connaissances des bases de données spatio-temporelles." Habilitation à diriger des recherches, Université de Versailles-Saint Quentin en Yvelines, 2006. http://tel.archives-ouvertes.fr/tel-00325468.
Mazure, Bertrand. "De la satisfaisabilité à la compilation de bases de connaissances propositionnelles." Artois, 1999. http://www.theses.fr/1999ARTO0401.
Bouali, Fatma. "Validation, diagnostic et réparation de bases de connaissances : le système KBDR." Paris 11, 1996. http://www.theses.fr/1996PA112279.
Koutraki, Maria. "Approches vers des modèles unifiés pour l'intégration de bases de connaissances." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLV082/document.
My thesis aims at the automatic integration of new Web services into a knowledge base. For each method of a Web service, a view is automatically computed. The view is represented as a query on the knowledge base. Our algorithm also computes an XSLT transformation function associated with the method, able to transform the call results into a fragment that conforms to the schema of the knowledge base. The novelty of our approach is that the alignment is based only on the instances; it does not depend on the names of the concepts or on the constraints defined by the schema. This makes it particularly relevant for the Web services currently available on the Web, because these services use the REST protocol, which does not provide for the publication of schemas. In addition, JSON seems to be establishing itself as the standard for representing call results.
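The role of the generated transformation function, turning a call result into a fragment shaped like the knowledge base, can be illustrated with a JSON-flattening sketch. The example service response, field names and naming scheme are assumptions for illustration, not the thesis's actual alignment.

```python
# Hypothetical REST call result for some entity (an assumption).
result = {"id": "m123", "title": "Amelie", "director": {"name": "Jeunet"}}

def to_triples(entity, obj, prefix="kb:"):
    """Flatten a JSON call result into KB-style triples, recursing on nested
    objects; this plays the role the generated transformation function has
    in the abstract (the real system emits XSLT, not Python)."""
    triples = []
    for key, value in obj.items():
        if isinstance(value, dict):
            nested = f"{entity}/{key}"          # mint a node for the sub-object
            triples.append((entity, prefix + key, nested))
            triples.extend(to_triples(nested, value, prefix))
        else:
            triples.append((entity, prefix + key, value))
    return triples
```

Because the mapping only looks at instance data (keys and values), it mirrors the instance-based alignment idea: no schema publication by the REST service is required.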
Wendler, Bruno. "Vérification de la cohérence dynamique dans les bases de connaissances modulaires." Chambéry, 1996. http://www.theses.fr/1996CHAMS010.
Gueye, Ibrahima. "Création de bases de connaissances interconnectées - institut de formation/entreprise - par la capitalisation des connaissances en maintenance industrielle." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0040/document.
In the training of our BTS (Industrial Technician's Certificate) students, it is important to have a competency framework that refers to professional qualification profiles detailing the skills students must acquire. This competency framework must be constantly updated to ensure continuous improvement of the training/employment match and of the partnership between training institutes and businesses. Our proposal is to implement a knowledge capitalization model whose focus is the "Etudiant/Stagiaire (ES)" (student/trainee) in a learning situation in an industrial maintenance department. During his or her stay in the company, each maintenance activity carried out by the trainee is assessed by the experts of the maintenance department and recorded in a database. This database is then cross-checked against the required proficiencies outlined in the training institute's database. In this way, the two databases mutually enrich and update each other, so that there is continuous improvement of the level of training given to the ES.
Arioua, Abdallah. "Formalisation et étude des explications dialectiques dans les bases de connaissances incohérentes." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT261/document.
Knowledge bases are deductive databases where the machinery of logic is used to represent domain-specific and general-purpose knowledge over existing data. In the existential rules framework, a knowledge base is composed of two layers: the data layer, which represents the factual knowledge, and the ontological layer, which incorporates rules of deduction and negative constraints. The main reasoning service in such a framework is answering queries over the data layer by means of the ontological layer. As in classical logic, contradictions trivialize query answering, since everything follows from a contradiction (ex falso quodlibet). Recently, inconsistency-tolerant approaches have been proposed to cope with this problem in the existential rules framework. They deploy repairing strategies on the knowledge base to restore consistency and overcome the problem of trivialization. However, these approaches are sometimes unintelligible and not straightforward for the end-user, as they implement complex repairing strategies. This can jeopardize the trust relation between the user and the knowledge-based system. In this thesis we answer the research question: "How do we make query answering intelligible to the end-user in the presence of inconsistency?" The answer the thesis is built around is: "We use explanations to facilitate the understanding of query answering." We propose meta-level and object-level dialectical explanations that take the form of a dialogue between the user and the reasoner about the entailment of a given query. We study these explanations in the framework of logic-based argumentation and dialectics, and we study their properties and their impact on users.
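The repair-based, inconsistency-tolerant semantics the abstract refers to can be sketched as follows. This is a standard textbook formulation (repairs as maximal consistent subsets, AR entailment as truth in every repair), not the thesis's own code; the fact names and conflict set are assumptions.

```python
from itertools import combinations

FACTS = {"a", "b", "c"}
CONFLICTS = {frozenset({"a", "b"})}   # a and b jointly violate a negative constraint

def consistent(s):
    """A subset is consistent when it contains no conflict."""
    return not any(c <= s for c in CONFLICTS)

def repairs(facts):
    """Maximal (inclusion-wise) consistent subsets of the fact base."""
    subsets = [set(c) for n in range(len(facts), -1, -1)
               for c in combinations(sorted(facts), n) if consistent(set(c))]
    return [s for s in subsets if not any(s < t for t in subsets)]

def ar_entailed(fact):
    """AR semantics: the fact must survive in every repair."""
    return all(fact in r for r in repairs(FACTS))
```

Here the repairs are {a, c} and {b, c}: only c is AR-entailed, and a dialectical explanation would walk the user through exactly why a does not survive repairing.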
Massotte, Anne-Marie Marty. "L'élaboration des bases de connaissances pour des systèmes experts : éléments d'approche méthodologique." Montpellier 2, 1986. http://www.theses.fr/1986MON20098.
Deschamps, Renaud. "Bases de connaissances généralisées : une approche fondée sur un modèle hypertexte expert." Toulouse 3, 1995. http://www.theses.fr/1995TOU30017.
Lavielle, Nathalie. "Bases de connaissances généralisées : le modèle de gestion d'informations de type hypertexte." Pau, 1994. http://www.theses.fr/1994PAUU3001.
Schildknecht, Stéphane. "Le projet GRAMS : Construction automatique et exploitation de bases de connaissances réactionnelles." Université Louis Pasteur (Strasbourg) (1971-2008), 2001. http://www.theses.fr/2001STR13219.
Massotte, Anne-Marie. "L'Elaboration des bases de connaissances pour les systèmes experts éléments d'approche méthodologique." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb375995118.
Vidrequin, Cédric. "Constitution automatique de bases de connaissances à partir de données textuelles non structurées." Avignon, 2008. http://www.theses.fr/2008AVIG0171.
Leclercq, Jean. "Les représentations informatiques des connaissances juridiques : l'expérience française." Lille 2, 1999. http://www.theses.fr/1999LIL20019.
Dang, Weidong Courtois Bernard. "Parallélisme dans une machine base de connaissances Prolog." S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00323956.
Laurent, Anne. "Bases de données multidimensionnelles floues et leur utilisation pour la fouille de données." Paris 6, 2002. http://www.theses.fr/2002PA066426.
Boukhari, Sanâa. "Les facteurs explicatifs du comportement de contribution aux systèmes de gestion des connaissances intégratifs : le cas des bases électroniques de connaissances." Aix-Marseille 2, 2008. http://www.theses.fr/2008AIX24017.
This thesis aims at studying knowledge sharing behavior through Knowledge Management Systems (KMS). More specifically, it intends to provide an explanation of contribution behavior to Electronic Knowledge Repositories (EKR). The literature review has allowed us to clarify and define the key concepts used in this work, and has enabled us to propose and discuss a conceptual framework for the development of our research model. Following hypothetico-deductive reasoning, an explanatory model of contribution behavior to EKR is proposed. The model includes two dependent variables, "knowledge sharing" and "KMS use", nine independent variables (motivational, cultural, relational, technological and informational), as well as six moderating variables. The model was then tested with 388 individuals belonging to two companies, i.e. a world leader in the food industry and a global company specializing in technology solutions. The data collected were analyzed using exploratory and confirmatory analyses through structural equation modeling techniques. The results showed several factors determining knowledge sharing behavior through EKR. These results enrich previous studies on knowledge sharing and provide managers with an explanation of contribution behavior to EKR.
Tayar, Nina. "Gestion des versions pour la construction incrémentale et partagée de bases de connaissances." Phd thesis, Université Joseph Fourier (Grenoble), 1995. http://tel.archives-ouvertes.fr/tel-00005065.
Bounaas, Fethi. "Gestion de l'évolution dans les bases de connaissances : une approche par les règles." Phd thesis, Grenoble INPG, 1995. http://tel.archives-ouvertes.fr/tel-00005030.
Simon, Arnaud. "Outils classificatoires par objets pour l'extraction de connaissances dans des bases de données." Nancy 1, 2000. http://www.theses.fr/2000NAN10069.
Wurbel, Nathalie. "Dictionnaires et bases de connaissances : traitement automatique de données dictionnairiques de langue française." Aix-Marseille 3, 1995. http://www.theses.fr/1995AIX30035.
Rousset, Marie-Christine. "Sur la cohérence et la validation des bases de connaissances : le système COVADIS." Paris 11, 1988. http://www.theses.fr/1988PA112238.
Pipard, Eric. "Inde : un système de détection d'inconsistances et d'incomplétudes dans les bases de connaissances." Paris 11, 1987. http://www.theses.fr/1987PA112413.
In the field of knowledge-based systems, sophisticated tools are necessary to help the expert with the difficult task of knowledge acquisition. INDE is a piece of software able to detect inconsistency and incompleteness in such systems independently of the initial fact bases. In the framework of this thesis, a knowledge base is called inconsistent if there exists a fact base whose saturation by all the rules of the knowledge base contains both a fact and its negation. A knowledge base is called incomplete if the saturation of all the initial fact bases does not contain a fact considered as deducible. The solution we have implemented in the INDE system is to model knowledge bases as a Petri net. In order to avoid computing saturations, we split knowledge bases into sets of rules, named concepts, whose rule conditions are not mutually exclusive. A concept represents a maximal set of rules that are simultaneously fireable. If there exist in a concept rules concluding on an identical attribute with different values, the knowledge base is recognized as inconsistent. If no rule allowing the deduction of an attribute belongs to any concept, the knowledge base is recognized as incomplete.
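The concept-based inconsistency check described above can be sketched as follows. This is a toy reconstruction from the abstract: the rule format, the exclusion relation and all names are assumptions, and the Petri-net machinery is omitted.

```python
from itertools import combinations

# Toy rule base: each rule is (conditions, (attribute, value)).
RULES = [
    ({"adult"}, ("can_vote", True)),
    ({"minor"}, ("can_vote", False)),
]

def exclusive(conds_a, conds_b, exclusions):
    """True when some pair of conditions from the two rules can never
    hold together in the same fact base."""
    return any(frozenset({a, b}) in exclusions
               for a in conds_a for b in conds_b)

def inconsistent(rules, exclusions=frozenset()):
    """Flag two rules that are simultaneously fireable (conditions not
    mutually exclusive) yet conclude different values for one attribute,
    i.e. some fact base would saturate to a fact and its negation."""
    for (ca, (attr_a, va)), (cb, (attr_b, vb)) in combinations(rules, 2):
        if attr_a == attr_b and va != vb and not exclusive(ca, cb, exclusions):
            return True
    return False
```

Without any declared exclusion, the two rules are flagged (a fact base containing both adult and minor would derive can_vote and its negation); declaring adult and minor mutually exclusive removes the anomaly, which is the point of grouping rules into concepts rather than saturating every fact base.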