Dissertations / Theses on the topic 'Théorie des bases de données'
Consult the top 50 dissertations / theses for your research on the topic 'Théorie des bases de données.'
Ripoche, Hugues. "Une construction interactive d'interprétations de données : application aux bases de données de séquences génétiques." Montpellier 2, 1995. http://www.theses.fr/1995MON20248.
Stamate, Daniel. "Applications des logiques multivaluées aux bases de données avec informations incertaines." Paris 11, 1999. http://www.theses.fr/1999PA112374.
Acosta, Francisco. "Les arbres balances : spécification, performances et contrôle de concurrence." Montpellier 2, 1991. http://www.theses.fr/1991MON20201.
Magnier, Nicolas. "Validation des transactions dans les bases de données : classes décidables et vérification automatique." Bordeaux 1, 1998. http://www.theses.fr/1998BOR10506.
Lerat, Nadine. "Représentation et traitement des valeurs nulles dans les bases de données." Paris 11, 1986. http://www.theses.fr/1986PA112383.
This thesis deals with the representation and treatment of two kinds of incomplete information in databases: non-applicable null values and null values representing unknown objects. In the first part, queries on a single table containing non-applicable nulls are translated into a set of queries on conventional multi-tables. In the second part, unknown null values are represented by Skolem constants, and a method that adapts a "chase" algorithm to this context allows queries to be evaluated when functional or inclusion dependencies are satisfied. Finally, it is shown that these two types of null values can be taken into account simultaneously.
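As an illustration of the idea of chasing unknown nulls represented by Skolem constants, the following minimal sketch (hypothetical table and names, plain Python; not the algorithm developed in the thesis) shows how a single functional dependency forces an unknown value to be equated with a known one.

```python
# Minimal sketch: chase a table whose unknown nulls are Skolem constants,
# under one functional dependency lhs -> rhs.  Illustrative only.

class Skolem:
    """An unknown null value, represented as a named Skolem constant."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f"?{self.name}"

def chase_fd(rows, lhs, rhs):
    """Equate values forced by the functional dependency lhs -> rhs.
    rows: list of dicts; Skolem instances stand for unknown nulls."""
    substitution = {}
    changed = True
    while changed:
        changed = False
        for r1 in rows:
            for r2 in rows:
                if all(r1[a] == r2[a] for a in lhs) and r1[rhs] != r2[rhs]:
                    if isinstance(r1[rhs], Skolem):
                        substitution[r1[rhs]] = r2[rhs]
                    elif isinstance(r2[rhs], Skolem):
                        substitution[r2[rhs]] = r1[rhs]
                    else:
                        raise ValueError("FD violated on constants")
                    for r in rows:                      # apply substitution
                        if r[rhs] in substitution:
                            r[rhs] = substitution[r[rhs]]
                    changed = True
    return rows

# Two tuples agree on 'emp'; the chase equates the unknown office with B12.
table = [{"emp": "Ada", "office": Skolem("x1")},
         {"emp": "Ada", "office": "B12"}]
print(chase_fd(table, lhs=["emp"], rhs="office"))
```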
Fansi, Janvier. "Sécurité des bases de données XML (eXtensible Markup Language)." Pau, 2007. http://www.theses.fr/2007PAUU3007.
XML has emerged as the de facto standard for representing and exchanging information on the Internet. As the Internet is a public network, corporations and organizations which use XML need mechanisms to protect XML data against unauthorised access. Thus, several schemes for XML access control have been proposed. They can be classified into two major categories: view materialization and query rewriting techniques. In this thesis, we point out the drawbacks of view materialization approaches through the development of a prototype of a secured XML database based on one of those approaches. Afterwards, we propose a technique aimed at securing XML by means of query rewriting. We prove its correctness and show that it is more efficient than competing works. Finally, we extend our proposal in order to control the updating of XML databases.
Casali, Alain. "Treillis cubes contraints et fermés dans la fouille de bases de données multidimensionnelles." Aix-Marseille 2, 2004. http://www.theses.fr/2004AIX22078.
Slimane, Mohammed. "Le langage des gractes et son usage fondamental en algèbre en logique et dans la théorie des bases de données relationnelles." Paris 5, 1986. http://www.theses.fr/1986PA05S008.
D'Ambrosio, Roberto. "Classification de bases de données déséquilibrées par des règles de décomposition." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4007/document.
Disproportion among class priors is encountered in a large number of domains, making conventional learning algorithms less effective in predicting samples belonging to the minority classes. We aim at developing a reconstruction rule suited to multiclass skewed data. In performing this task we use the classification reliability, which conveys useful information on the goodness of classification acts. In the framework of the One-per-Class decomposition scheme we design a novel reconstruction rule, Reconstruction Rule by Selection, which uses classifier reliabilities, crisp labels and a-priori distributions to compute the final decision. Tests show that system performance improves using this rule rather than well-established reconstruction rules. We also investigate the rules in the Error Correcting Output Code (ECOC) decomposition framework. Inspired by a statistical reconstruction rule designed for the One-per-Class and Pair-Wise Coupling decomposition approaches, we have developed a rule that applies softmax regression on reliability outputs in order to estimate the final classification. Results show that this choice improves performance with respect to the existing statistical rule and to well-established reconstruction rules. On the topic of reliability estimation, we notice that little attention has been given to efficient posterior estimation in the boosting framework. For this reason we develop an efficient posterior estimator by boosting Nearest Neighbours. Using the Universal Nearest Neighbours classifier, we prove that a sub-class of surrogate losses exists whose minimization yields simple and statistically efficient estimators for Bayes posteriors.
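The general flavour of combining per-class reliability scores through a softmax can be illustrated with a short sketch. Everything below (function names, the prior rebalancing, the toy numbers) is an assumption for illustration, not the reconstruction rule defined in the thesis.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def decide(reliabilities, priors):
    """reliabilities: one score per binary 'class vs rest' classifier.
    priors: class prior probabilities, used to rebalance skewed data."""
    posteriors = softmax(reliabilities) * np.asarray(priors)
    posteriors /= posteriors.sum()
    return int(np.argmax(posteriors)), posteriors

label, post = decide(reliabilities=[0.2, 1.4, 0.9], priors=[0.7, 0.1, 0.2])
print(label, post.round(3))
```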
Djennaoui, Mohand-Said. "Structuration des données dans le cadre d'un système de gestion de bases de connaissances." Lyon, INSA, 1992. http://www.theses.fr/1992ISAL0077.
Both the deduction and the structuring of information are essential features for the new generation of DBMSs, namely knowledge base management systems (KBMS). EPSILON is a KBMS combining logic programming (PROLOG) and relational databases; it allows data stored in the databases to be used as Prolog facts, transparently for the user. This work describes the enrichment of the system with structuring mechanisms in the sense of NF2 (nested) relations. The user can define external views based on an NF2 model; at the internal level, the relations remain compatible with the traditional relational model. Around the EPSILON kernel, we have designed and developed: a meta-interpreter of a logic-based language including set and tuple constructors; a translator which allows the meta-interpreter to be used transparently; and a meta-interpreter which allows SQL requests on relations to be handled.
El, Abed Walid. "Meta modèle sémantique et noyau informatique pour l'interrogation multilingue des bases de données en langue naturelle (théorie et application)." Besançon, 2001. http://www.theses.fr/2001BESA1014.
Laabi, Abderrazzak. "Étude et réalisation de la gestion des articles appartenant à des bases de données gérées par une machine bases de données." Paris 11, 1987. http://www.theses.fr/1987PA112338.
The work presented in this thesis is part of a study and development project concerning the design of three layers of the DBMS on the DORSAL-32 database machine. The first layer handles record management within the storage areas, and the organization of record and page locking according to the access mode and the transaction coherency degree. It also handles micro-logs, which guarantee the atomicity of an action. The second layer handles transaction logging and warm restarts, which guarantee the atomicity and durability of a transaction. The third layer manages simultaneous access and the lock tables. Performance measures of the methods used are also presented. The last chapter of this report contains research work concerning the implementation of the virtual linear hashing method in our DBMS. The problem studied is the transfer of records from one page to another. Under these conditions, the record pointers which are classically used do not permit direct access. We propose a new pointer which enables direct access to a record, regardless of the page on which it resides at a given instant.
Baklouti, Fatma. "Algorithmes de construction du Treillis de Galois pour des contextes généralisés." Paris 9, 2006. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2006PA090003.
Our main concern in this thesis is concept (or Galois) lattices. As shown by previous works, concept lattices are an effective tool for data analysis and knowledge discovery, especially for classification, clustering, information retrieval, and more recently for association rule mining. Several algorithms have been proposed to generate concepts or concept lattices from a data context. They focus on binary data arrays, called contexts. However, in practice we need to deal with contexts which are large and not necessarily binary. We propose a fast Galois lattice-building algorithm, called the ELL algorithm, for generating closed itemsets from objects having general descriptions, and we compare its performance with other existing algorithms. In order to obtain better performance and to handle larger contexts, we also propose a distributed version of the ELL algorithm called SD-ELL.
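For readers unfamiliar with Galois lattices, the following small sketch shows the two derivation operators on a toy binary context and the resulting closed itemset. It only illustrates the classical definitions; it is not the ELL algorithm, and the context is invented.

```python
context = {                      # object -> set of attributes it has
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}

def all_attributes():
    return set().union(*context.values())

def extent(attrs):
    """Objects having all the given attributes."""
    return {o for o, A in context.items() if attrs <= A}

def intent(objs):
    """Attributes shared by all the given objects."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else all_attributes()

def closure(attrs):
    """Closed itemset (intent of the extent) of an attribute set."""
    return intent(extent(attrs))

print(closure({"b"}))   # -> {'a', 'b'}: every object having b also has a
```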
Olteanu, Ana-Maria. "Fusion de connaissances imparfaites pour l'appariement de données géographiques : proposition d'une approche s'appuyant sur la théorie des fonctions de croyance." Phd thesis, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00469407.
Robidou, Sébastien. "Représentation de l'imperfection des connaissances dans les bases de situation des systèmes de commandement." Rouen, 1997. http://www.theses.fr/1997ROUES083.
Taraviras, Stavros. "Évaluation de la diversité moléculaire des bases de données de molécules à intérêt pharmaceutique, en utilisant la théorie des graphes chimiques." Nice, 2000. http://www.theses.fr/2000NICE5472.
Groz, Benoît. "XML security views : queries, updates and schemas." Thesis, Lille 1, 2012. http://www.theses.fr/2012LIL10143/document.
The evolution of web technologies and social trends fostered a shift from traditional enterprise databases to web services and online data. While making data more readily available to users, this evolution also raises additional security concerns regarding the privacy of users and, more generally, the disclosure of sensitive information. The implementation of appropriate access control models is one of the approaches to mitigate the threat. We investigate an access control model based on (non-materialized) XML views, as presented among others by Fan et al. The simplicity of such views, and in particular the absence of arithmetic features and restructuring, facilitates their modelling with tree alignments. Our objective is therefore to investigate how to manipulate such views efficiently, using formal methods, especially query rewriting and tree automata. Our research follows essentially three directions: we first develop new algorithms to assess the expressivity of views, in terms of determinacy, query rewriting and certain answers. We show that those problems, although undecidable in our most general setting, can be decided under reasonable restrictions. Then we address the problem of handling updates in the security view framework. Finally, we investigate the classical issues raised by schemata, focusing on the specific "determinism" requirements of DTDs and XML Schemata. In particular, we survey some techniques to approximate the set of all possible view documents with a DTD, and we provide new algorithms to check whether the content models of a DTD are deterministic.
Chéry, Alexis. "Étude des occurences des films et des cinéastes dans les ouvrages français de théorie sur le cinéma." Paris 1, 2009. http://www.theses.fr/2009PA010536.
Moll, Georges-Henri. "Un langage pivot pour le couplage de Prolog avec des bases de données : formalisation et environnement opérationnel." Lyon 1, 1987. http://www.theses.fr/1987LYO10102.
Castagliola, Carole. "Héritage et valuation dans les réseaux sémantiques pour les bases de données objets." Compiègne, 1991. http://www.theses.fr/1991COMPD363.
Ait, Taleb Saadia. "La terminologie arabe contemporaine : théorie et application dans la base des données Lexar." Bordeaux 3, 1988. http://www.theses.fr/1988BOR30046.
Mokhtari, Amine. "Système personnalisé de planification d'itinéraire unimodal : une approche basée sur la théorie des ensembles flous." Rennes 1, 2011. http://www.theses.fr/2011REN1E004.
Ileana, Ioana. "Réécriture de requêtes avec des vues : une perspective théorique et pratique." Electronic Thesis or Diss., Paris, ENST, 2014. http://www.theses.fr/2014ENST0062.
In this work, we address the problem of query rewriting using views, by adopting both a theoretical and a pragmatic perspective. In the first and main chapter, we approach the topic of finding all minimal (i.e. with no redundant relational atoms) conjunctive query reformulations for a relational conjunctive query, under constraints expressed as embedded dependencies, including the relationship between the source and the target schemas. We present a novel sound and complete algorithm, the Provenance-Aware Chase & Backchase, that solves the minimal reformulations problem with practically relevant performance. We provide a detailed theoretical characterization of our algorithm. We further present the optimized implementation and the experimental evaluation thereof, and exhibit natural scenarios yielding speed-ups of up to two orders of magnitude between the execution of a best view-based rewriting found by a commercial DBMS and that of a best rewriting found by our algorithm. We generalize the Provenance-Aware Chase & Backchase towards directly finding minimum-cost reformulations for monotonic cost functions, and show the performance improvements this adaptation further enables. With our algorithm, we introduce a novel chase flavour, the Provenance-Aware Chase, which is interesting on its own, as a means of reasoning about the interaction between provenance and constraints. In the second chapter, we move to an XML context and revisit the previous work of Cautis, Deutsch and Onose on the problem of finding XPath query rewritings with a single level of intersection of multiple views. We enrich the analysis of the rewriting problem by showing its links to the problems of DAG-tree equivalence and union-freeness. We refine the rule-based rewriting technique proposed by Cautis, Deutsch and Onose to ensure its polynomial complexity and improve its completeness, and present a range of optimizations on the rewriting procedures, necessary to achieve practical performance. We provide a complete implementation comprising these optimizations and a thorough experimental evaluation thereof, showing the performance and utility of the polynomial rewriting technique.
Machado, Javam de Castro. "Parallélisme et transactions dans les bases de données à objets." Université Joseph Fourier (Grenoble), 1995. https://tel.archives-ouvertes.fr/tel-00005039.
We implemented a first prototype that realizes the transaction parallelization model. For this purpose, we used the O2 object database system. Our prototype introduces parallelism by creating and synchronizing parallel activities within the O2 client process that executes an application. Since the system was developed on a single-processor machine, the functions related to parallelism rely on lightweight processes (threads). We then applied our parallelization model to the NAOS rule system. Our approach considers the set of rules of an execution cycle, called candidate rules, for parallelization. We build an execution plan for the candidate rules of a cycle which determines sequential or parallel execution for the rules.
Bouarar, Selma. "Vers une conception logique et physique des bases de données avancées dirigée par la variabilité." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2016. http://www.theses.fr/2016ESMA0024/document.
The evolution of computer technology has strongly impacted the database design process, which henceforth requires more time and resources to encompass the diversity of DB applications. Designers rely on their talent and knowledge, which have proven insufficient to face the increasing diversity of design choices, raising the problem of the reliability and completeness of this knowledge. This problem is well known as variability management in software engineering. While there exist some works on managing the variability of the physical and conceptual phases, very few have focused on logical design. Moreover, these works address the design phases separately and thus ignore their interdependencies. In this thesis, we first present a methodology to manage the variability of the whole DB design process using the technique of software product lines, so that (i) interdependencies between design phases can be considered, (ii) a holistic vision is provided to the designer, and (iii) process automation is increased. Given the scope of the study, we proceed step by step in implementing this vision, by studying a case that shows: (i) the importance of logical design variability, (ii) its impact on physical design (multi-phase management), (iii) the evaluation of logical design, and (iv) the impact of logical variability on the physical design (materialized view selection) in terms of non-functional requirements: execution time, energy consumption and storage space.
Alilaouar, Abdeslame. "Contribution à l'interrogation flexible de données semi-structurées." Toulouse 3, 2007. http://thesesups.ups-tlse.fr/90/.
Many querying languages have been proposed to manipulate Semi-Structured Data (SSD) and to extract information relevant (in terms of structure and/or content) to the user. Such querying languages should take into account not only the content but also the underlying structure, since it can completely change their relevance and adequacy with respect to the needs expressed by the user. However, the lack of prior knowledge and the heterogeneity of SSD structure make classical database languages inadequate. Work on flexible database querying has shown that fuzzy logic is particularly well suited to modelling the notion of flexibility and preferences according to human reasoning. In this sense, we propose a model of flexible querying for SSD in general and XML documents in particular, taking into account both the content and the underlying structure of SSD. Fuzzy logic is used to represent the user's preferences on the content and structure of SSD. At the end of the evaluation process, every answer is associated with a degree in the interval ]0, 1]: the lower this degree, the less relevant the answer. This degree is computed using membership degrees and similarity measures known from information retrieval systems for the content, and a minimum spanning tree for the structure. The proposed model has been reviewed and validated using the PRETI platform and the INEX benchmark, thanks to the prototype that we have developed.
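A minimal sketch of how such a degree in ]0, 1] might combine a content preference with a structural similarity is given below. The names, the triangular membership function and the min-combination are illustrative assumptions, not the exact model of the thesis.

```python
def content_degree(value, preferred, tolerance):
    """Triangular membership: 1 at the preferred value, 0 beyond tolerance."""
    gap = abs(value - preferred)
    return max(0.0, 1.0 - gap / tolerance)

def structure_degree(common_edges, query_edges):
    """Fraction of the query structure found in the answer fragment."""
    return common_edges / query_edges if query_edges else 1.0

def answer_degree(value, preferred, tolerance, common_edges, query_edges):
    # A conjunctive (min) combination keeps the weakest of the two aspects.
    return min(content_degree(value, preferred, tolerance),
               structure_degree(common_edges, query_edges))

# An XML fragment whose price is 95 (user prefers ~100, tolerance 20)
# and which matches 3 of the 4 structural edges of the query.
print(round(answer_degree(95, 100, 20, common_edges=3, query_edges=4), 2))
```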
Coupaye, Thierry. "Un modèle d'exécution paramétrique pour systèmes de bases de données actifs." Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00004983.
Mouaddib, Noureddine. "Gestion des informations nuancées : une proposition de modèle et de méthode pour l'identification nuancée d'un phénomène." Nancy 1, 1989. http://www.theses.fr/1989NAN10475.
Simon, Arnaud. "Outils classificatoires par objets pour l'extraction de connaissances dans des bases de données." Nancy 1, 2000. http://www.theses.fr/2000NAN10069.
Crosetti, Nicolas. "Enrichir et résoudre des programmes linéaires avec des requêtes conjonctives." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILB003.
Mathematical optimization and data management are two major fields of computer science that are widely studied by mostly separate communities. However, complex optimization problems often depend on large datasets that may be cumbersome to manage, while managing large amounts of data is only useful insofar as one analyzes this data to extract some knowledge in order to solve some practical problem, so these fields are often intertwined in practice. This thesis places itself at the crossroads between these two fields by studying linear programs that reason about the answers of database queries. The first contribution of this thesis is the definition of the so-called language of linear programs with conjunctive queries, or LP(CQ) for short. It is a language to model linear programs with constructs that allow one to express linear constraints and linear sums that reason over the answer sets of database queries given as conjunctive queries. We then describe the natural semantics of the language by showing how such models can be interpreted, together with a database, into actual linear programs that can then be solved by any standard linear program solver, and we discuss the hardness of solving LP(CQ) models. Motivated by this hardness, we then introduce a process based on the so-called T-factorized interpretation to solve such models more efficiently. This approach is based on classical techniques from database theory to exploit the structure of the queries using hypertree decompositions of small width. The T-factorized interpretation yields a linear program that has the same optimal value as the natural semantics of the model but fewer variables, and can thus be used to solve the model more efficiently. The third contribution is a generalization of the previous result to the framework of factorized databases. We introduce a specific circuit data structure to succinctly encode relations. We then define the so-called C-factorized interpretation, which leverages the succinctness of these circuits to yield a linear program that has the same optimal value as the natural semantics of the model but fewer variables, similarly to the T-factorized interpretation. Finally, we show that we can explicitly compile the answer sets of conjunctive queries with small fractional hypertree width into succinct circuits, thus allowing us to recapture the T-factorized interpretation.
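To give a concrete flavour of a linear program whose variables are indexed by the answers of a database query, here is a hypothetical toy sketch (invented relation, demands and costs; not the LP(CQ) language or its semantics): the answers of a simple query become LP variables, and the program is handed to a standard solver.

```python
from scipy.optimize import linprog

# Toy database: which supplier can deliver which part, and at what cost.
supplies = {("s1", "p1"): 4.0, ("s1", "p2"): 3.0, ("s2", "p1"): 2.5}
demand = {"p1": 10, "p2": 5}          # units required per part

# "Conjunctive query" answers: all (supplier, part) pairs in the relation.
answers = list(supplies)

# One LP variable x[a] per query answer a; minimize total cost.
c = [supplies[a] for a in answers]

# Coverage constraints: for each part, the delivered quantity meets demand.
# linprog uses A_ub @ x <= b_ub, so "sum >= d" becomes "-sum <= -d".
A_ub, b_ub = [], []
for part, d in demand.items():
    A_ub.append([-1.0 if a[1] == part else 0.0 for a in answers])
    b_ub.append(-d)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(answers))
print(dict(zip(answers, res.x.round(2))), res.fun)
```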
Grazziottin, Ribeiro Helena. "Un service de règles actives pour fédérations de bases de données." Université Joseph Fourier (Grenoble), 2000. http://www.theses.fr/2000GRE10084.
Dellal, Ibrahim. "Gestion et exploitation de larges bases de connaissances en présence de données incomplètes et incertaines." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2019. http://www.theses.fr/2019ESMA0016/document.
In the era of digitalization, and with the emergence of several semantic Web applications, many new knowledge bases (KBs) are available on the Web. These KBs contain (named) entities and facts about these entities. They also contain the semantic classes of these entities and their mutual links. In addition, multiple KBs can be interconnected by their entities, forming the core of the linked data Web. A distinctive feature of these KBs is that they contain millions to trillions of unreliable RDF triples. This uncertainty has multiple causes. It can result from the integration of data sources with various levels of intrinsic reliability, or it can be caused by considerations of confidentiality. Furthermore, it may be due to factors related to the lack of information, the limits of measuring equipment or the evolution of information. The goal of this thesis is to improve the usability of modern systems aiming at exploiting uncertain KBs. In particular, this work proposes cooperative and intelligent techniques that can help the user in his decision-making when his query returns unsatisfactory results in terms of quantity or reliability. First, we address the problem of failing RDF queries (i.e., queries that result in an empty set of answers). This type of answer is frustrating and does not meet the user's expectations. The approach proposed to handle this problem is query-driven and offers a twofold advantage: (i) it provides the user with a rich explanation of the failure of his query by identifying the MFS (Minimal Failing Sub-queries), and (ii) it allows the computation of alternative queries called XSS (maXimal Succeeding Sub-queries), semantically close to the initial query, with non-empty answers. Moreover, from a user's point of view, this solution offers a high level of flexibility given that several degrees of uncertainty can be considered simultaneously. In the second contribution, we study the dual problem (i.e., queries whose execution results in a very large set of answers). Our solution aims at reducing this set of answers to enable their analysis by the user. Counterparts of MFS and XSS have been defined; they allow the identification, on the one hand, of the causes of the problem and, on the other hand, of alternative queries whose results are of reasonable size and can therefore be directly and easily used in the decision-making process. All our propositions have been validated with a set of experiments on different uncertain and large-scale knowledge bases (WatDiv and LUBM). We have also used several triplestores to conduct our tests.
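The notions of MFS and XSS can be made concrete with a brute-force sketch over a tiny abstract query. The query is just a set of triple-pattern names and `succeeds` is a stand-in for evaluating a sub-query on the KB; this only spells out the definitions, the thesis computes these sets far more efficiently.

```python
from itertools import combinations

def sub_queries(query):
    for k in range(1, len(query) + 1):
        for combo in combinations(sorted(query), k):
            yield frozenset(combo)

def mfs_and_xss(query, succeeds):
    failing = {q for q in sub_queries(query) if not succeeds(q)}
    succeeding = {q for q in sub_queries(query) if succeeds(q)}
    mfs = {q for q in failing if not any(p < q for p in failing)}        # minimal failing
    xss = {q for q in succeeding if not any(q < p for p in succeeding)}  # maximal succeeding
    return mfs, xss

# Toy KB behaviour: any sub-query containing the pattern "t3" fails.
query = {"t1", "t2", "t3"}
succeeds = lambda q: "t3" not in q
print(mfs_and_xss(query, succeeds))
```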
Boneva, Iovka. "Expressivité, satisfiabilité et model checking d'une logique spatiale pour arbres non ordonnés." Lille 1, 2006. https://ori-nuxeo.univ-lille1.fr/nuxeo/site/esupversions/dffac6b2-50d6-4e6d-9e4c-f8f5731c75e2.
Pech, Palacio Manuel Alfredo. "Spatial data modeling and mining using a graph-based representation." Lyon, INSA, 2005. http://theses.insa-lyon.fr/publication/2005ISAL0118/these.pdf.
We propose a unique graph-based model to represent spatial data, non-spatial data and the spatial relations among spatial objects. We generate datasets composed of graphs with a set of these three elements. We consider that, by mining a dataset with these characteristics, a graph-based mining tool can search for patterns involving all these elements at the same time, improving the results of the spatial analysis task. A significant characteristic of spatial data is that the attributes of the neighbors of an object may have an influence on the object itself. We therefore propose to include in the model three relationship types (topological, orientation, and distance relations). In the model, the spatial data (i.e. spatial objects), non-spatial data (i.e. non-spatial attributes), and spatial relations are represented as a collection of one or more directed graphs. A directed graph contains a collection of vertices and edges representing all these elements. Vertices represent either spatial objects, spatial relations between two spatial objects (binary relations), or non-spatial attributes describing the spatial objects. Edges represent a link between two vertices of any type. According to the type of vertices that an edge joins, it can represent either an attribute name or a spatial relation name. The attribute name can refer to a spatial object or a non-spatial entity. We use directed edges to represent directional information of relations among elements (i.e. object x touches object y) and to describe attributes of objects (i.e. object x has attribute z). We propose to adopt the Subdue system, a general graph-based data mining system developed at the University of Texas at Arlington, as our mining tool. A special feature named overlap has a primary role in the substructure discovery process and consequently a direct impact on the generated results. However, it is currently implemented in an all-or-nothing way. We therefore propose a third approach, limited overlap, which gives the user the capability to specify the vertices over which overlap is allowed. We identify three main motivations for implementing the new algorithm: search space reduction, processing time reduction, and specialized overlapping-pattern-oriented search.
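The shape of such a graph model can be sketched with plain dictionaries. The vertices, labels and relation names below ("touches", "land_use=forest") are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch: spatial objects, non-spatial attributes and spatial
# relations all become vertices, joined by labelled directed edges.

vertices = {
    "v1": {"kind": "spatial_object", "label": "parcel_42"},
    "v2": {"kind": "spatial_object", "label": "road_7"},
    "v3": {"kind": "spatial_relation", "label": "touches"},   # topological
    "v4": {"kind": "attribute", "label": "land_use=forest"},  # non-spatial
}

edges = [
    ("v1", "v3", "arg1"),   # parcel_42 --touches--> road_7, via the
    ("v3", "v2", "arg2"),   # relation vertex v3 (binary relation)
    ("v1", "v4", "has"),    # parcel_42 has attribute land_use=forest
]

def relations_of(obj):
    """Spatial relations in which a given object is the first argument."""
    return [vertices[t]["label"]
            for s, t, lab in edges
            if s == obj and lab == "arg1"
            and vertices[t]["kind"] == "spatial_relation"]

print(relations_of("v1"))   # -> ['touches']
```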
Chardain, Antoine. "Innovation et régulation : cas de l'accès aux données bancaires." Electronic Thesis or Diss., Aix-Marseille, 2022. http://www.theses.fr/2022AIXM0398.
The accelerating pace of innovation and digitalisation is a challenge for those responsible for regulating and supervising the financial sector. The way they approach innovation has impacts far beyond the financial sector, on the daily lives of people, organisations and states. Regulation, by definition, aims to build and maintain balances that innovation, by its nature, upsets. So how can innovation and regulation be reconciled in the context of digital transformation? This thesis proposes to shed light on this issue through a case study, the case of access to banking data by non-banking actors, within the European Union. This unique longitudinal case study, conducted according to a comprehensive methodology, sheds light on the issue from three different angles. The first analysis highlights the roles of European and national regulators and supervisors, which have an impact on the temporalities of an innovation. The second analysis focuses on the way in which information and communication technologies are taken into account in the regulatory process, at EU level. Finally, the third analysis focuses on digital infrastructures which, in the digital age, coordinate the actions and interactions of many actors in innovative ecosystems and digital platforms. An analysis grid of the emergence of an infrastructure, based on the analysis of the state of legitimacy and illegitimacy of the infrastructure in the eyes of the different stakeholders, is proposed
Vigny, Alexandre. "Query enumeration and nowhere dense graphs." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC211.
The topic of my thesis lies between complexity, algorithmics and logic. In particular, we are interested in the complexity of query evaluation. More precisely, let G be a finite graph. A query q defines a subset of k-tuples of vertices of G that we denote q(G). We call k the arity of q, and we then try to perform the following tasks efficiently: 1) decide whether the set q(G) is empty; 2) decide whether a given k-tuple belongs to the set of solutions q(G); 3) count the number of solutions; 4) enumerate the elements of q(G). Regarding the fourth task, an algorithm that enumerates the solutions can be decomposed into two steps. The first is called preprocessing and is used to prepare the enumeration; ideally this step only requires time linear in the size of the graph. The second step is the enumeration proper. The time needed to produce a new solution is called the delay; ideally we want the delay to depend not on the size of the graph but only on the size of the query. We then talk about constant-delay enumeration after linear preprocessing. At the beginning of this thesis, a large part of the open questions about the classes of graphs for which constant-delay enumeration is possible centred on the classes of nowhere dense graphs.
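The two-phase shape "linear preprocessing, then enumeration with small delay" can be illustrated on the simplest possible query: enumerating adjacent pairs of vertices. The sketch below is only an illustration of that shape under invented names, not one of the thesis's algorithms.

```python
def preprocess(edges):
    """Build adjacency lists in one linear pass over the graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    return adj

def enumerate_answers(adj):
    """Generator: constant work between two consecutive yields."""
    for u, neighbours in adj.items():
        for v in neighbours:
            yield (u, v)

adj = preprocess([(1, 2), (1, 3), (2, 3)])
for answer in enumerate_answers(adj):
    print(answer)
```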
Djouadi, Yassine-Mansour. "Logique possibiliste & amélioration génétique pour la sélection et l'agencement d'objets cartographiques." Lyon 1, 1996. http://www.theses.fr/1996LYO10083.
Martel, Christian. "Développement d'un cadre théorique pour la gestion des représentations multiples dans les bases de données spatiales." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0020/MQ49037.pdf.
Ingalalli, Vijay. "Querying and Mining Multigraphs." Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS080/document.
With the ever-increasing growth of data and information, extracting the right knowledge has become a real challenge. Further, advanced applications demand the analysis of complex, interrelated data which cannot be adequately described using a propositional representation. The graph representation is of great interest for the knowledge extraction community, since graphs are versatile data structures and one of the most general forms of data representation. Among several classes of graphs, multigraphs have been attracting attention in recent times, thanks to their inherent ability to succinctly represent entities while allowing rich and complex relations among them. The focus of this thesis is organised into two themes of knowledge extraction: knowledge retrieval, where we focus on subgraph query matching in multigraphs, and knowledge discovery, where we focus on the problem of frequent pattern mining in multigraphs. This thesis makes three main contributions in the fields of query matching and data mining. The first contribution, which is very generic, addresses querying subgraphs in multigraphs that yields isomorphic matches; this problem finds potential applications in remote sensing, social networks, bioinformatics and chemical informatics. The second contribution, which is focused on knowledge graphs, addresses querying subgraphs in RDF multigraphs that yields homomorphic matches. In both contributions, we introduce efficient indexing structures that capture the multi-edge information. The query matching processes introduced have been carefully optimized with respect to time performance, and the heuristics employed ensure robust performance. The third contribution is in the field of data mining, where we propose an efficient frequent pattern mining algorithm for multigraphs. We observe that multigraphs pose challenges while exploring the search space, and hence we introduce novel optimization techniques and heuristic search methods to swiftly traverse the search space. For each proposed approach, we perform extensive experimental analysis, comparing with existing state-of-the-art approaches in order to validate the performance and correctness of our approaches. In the end, we perform a case study on a remote sensing dataset: it is modelled as a multigraph, and the mining and query matching processes are employed to discover useful knowledge.
Roncancio, Claudia Lucia. "Règles actives et règles déductives dans les bases de données à objets." Université Joseph Fourier (Grenoble), 1994. http://www.theses.fr/1994GRE10240.
Bousnina, Fatma Ezzahra. "Modeling and Querying Evidential Databases." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2019. http://www.theses.fr/2019ESMA0007/document.
The theory of belief functions (a.k.a. the Evidence Theory) offers powerful tools to model and handle imperfect pieces of information. Thus, it provides an adequate framework able to represent jointly uncertainty, imprecision and ignorance. In this context, data are stored in a specific database model called evidential databases. An evidential database includes two levels of uncertainty: (i) the attribute-level uncertainty, expressed via degrees of truthfulness about the hypotheses in attributes; (ii) the tuple-level uncertainty, expressed through an interval of confidence about the existence of the tuple in the table. An evidential database itself can be modeled in two forms: (i) the compact form, represented as a set of attributes and a set of tuples; (ii) the possible worlds' form, represented as a set of candidate databases where each candidate is a possible representation of the imperfect compact database. Querying the possible worlds' form is a fundamental step in order to check the querying methods over the compact one. In fact, a model is said to be a strong representation system when the results of querying its compact form are equivalent to the results of querying its non-compact form. This thesis focuses on the foundations of evidential databases, in both modeling and querying. The main contributions are summarized as follows: (i) Modeling and querying the compact evidential database (EDB): we implement the compact evidential database (EDB) using an object-relational design which allows the database model to be queried with relational operators. We also propose the formalism, the algorithms and the experiments for other types of queries: the evidential top-k and the evidential skyline, which we apply over a real dataset extracted from TripAdvisor. (ii) Modeling the possible worlds' form of the EDB: we model the possible worlds' form of the evidential database (EDB) by treating both levels of uncertainty (the tuple level and the attribute level). (iii) Modeling and querying the evidential conditional database (ECD): after proving that the evidential database (EDB) is not a strong representation system, we develop a new evidential conditional database model named ECD. Thus, we present the formalism for querying the compact and the possible worlds' forms of the ECD to evaluate the querying methods under relational operators. Finally, we discuss the results of these querying methods and the specificities of the ECD model.
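The attribute-level uncertainty mentioned above rests on mass functions from the theory of belief functions. The sketch below illustrates only the standard belief/plausibility definitions on an invented attribute (it is not the thesis's database model or its operators).

```python
def belief(mass, hypothesis):
    """Sum of the masses of all focal sets included in the hypothesis."""
    return sum(m for focal, m in mass.items() if focal <= hypothesis)

def plausibility(mass, hypothesis):
    """Sum of the masses of all focal sets intersecting the hypothesis."""
    return sum(m for focal, m in mass.items() if focal & hypothesis)

# Mass function on the attribute "cuisine" of one tuple: 0.6 on {italian},
# 0.4 on total ignorance (the whole frame {italian, french}).
mass = {frozenset({"italian"}): 0.6,
        frozenset({"italian", "french"}): 0.4}

h = frozenset({"italian"})
print(belief(mass, h), plausibility(mass, h))   # -> 0.6 1.0
```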
Lambert, de Cambray Béatrix. "Etude de la modélisation de la manipulation et de la représentation de l'information spatiale tridimensionnelle dans les bases de données géographiques." Paris 6, 1994. http://www.theses.fr/1994PA066518.
Ba, Mouhamadou Lamine. "Exploitation de la structure des données incertaines." Electronic Thesis or Diss., Paris, ENST, 2015. http://www.theses.fr/2015ENST0013.
This thesis addresses some fundamental problems inherent to the need for uncertainty handling in multi-source Web applications with structured information, namely uncertain version control in Web-scale collaborative editing platforms, integration of uncertain Web sources under constraints, and truth finding over structured Web sources. Its major contributions are: uncertainty management in version control of tree-structured data using a probabilistic XML model; initial steps towards a probabilistic XML data integration system for uncertain and dependent Web sources; precision measures for location data; and exploration algorithms for an optimal partitioning of the input attribute set during a truth finding process over conflicting Web sources.
Pradel, Camille. "D'un langage de haut niveau à des requêtes graphes permettant d'interroger le web sémantique." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2237/.
Graph models are suitable candidates for knowledge representation on the Web, where everything is a graph, from the graph of machines connected to the Internet, the "Giant Global Graph" as described by Tim Berners-Lee, to RDF graphs and ontologies. In that context, the ontological query answering problem is the following: given a knowledge base composed of a terminological component and an assertional component, and a query, does the knowledge base imply the query, i.e., is there an answer to the query in the knowledge base? Recently, new description logic languages have been proposed in which the ontological expressivity is restricted so that query answering becomes tractable. The most prominent members are the DL-Lite and EL families. In the same way, the OWL-DL language has been restricted, and this has led to OWL 2, based on the DL-Lite and EL families. We work in the framework of graph formalisms for knowledge representation (RDF, RDF-S and OWL) and interrogation (SPARQL). Even if graph-based interrogation languages have long been presented as a natural and intuitive way of expressing information needs, end-users do not think of their queries in terms of graphs. They need simple languages that are as close as possible to natural language, or at least mainly limited to keywords. We propose to define a generic way of translating a query expressed in a high-level language into the SPARQL query language, by means of query patterns. The beginning of this work coincided with the activity of the W3C, which launched an initiative to prepare a possible new version of RDF and was in the process of standardizing SPARQL 1.1 with entailments.
François, Hélène. "Synthèse de la parole par concaténation d'unités acoustiques : construction et exploitation d'une base de parole continue." Rennes 1, 2002. http://www.theses.fr/2002REN10127.
Moreau, Aurélien. "How fuzzy set theory can help make database systems more cooperative." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S043/document.
In this thesis, we are interested in how we can leverage fuzzy logic to improve the interactions between relational database systems and humans. Cooperative answering techniques aim to help users harness the potential of DBMSs. These techniques are expected to be robust and to always provide an answer to users. "Empty set (0,00 sec)" is a typical example of an answer that one may wish never to obtain. The informative nature of explanations is higher than that of actual answers in several cases, e.g. empty answer sets and plethoric answer sets, hence the interest of robust cooperative answering techniques capable of both explaining and improving an answer set. Using terms from natural language to describe data, with labels from fuzzy vocabularies, contributes to the interpretability of explanations. Offering to define and refine vocabulary terms increases personalization and improves interpretability by using the user's own words. We propose to investigate the use of explanations in a cooperative answering setting along three research axes: 1) in the presence of a plethoric set of answers; 2) in the context of recommendations; 3) in the context of a query/answering problem. These axes define cooperative techniques where the interest of explanations is to enable users to understand how results are computed, in an effort of transparency. The informativeness of the explanations brings an added value to the direct results, and that in itself represents a cooperative answer.
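A fuzzy vocabulary label of the kind used to describe data in such explanations can be sketched with a trapezoidal membership function. The label "affordable" and its parameters below are invented for the example and are not taken from the thesis.

```python
def trapezoid(x, a, b, c, d):
    """Membership degree of x in a trapezoidal fuzzy set (a, b, c, d):
    0 below a, rises to 1 between a and b, stays 1 until c, falls to 0 at d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Prices up to 80 are fully "affordable"; the degree fades to 0 at 120.
affordable = lambda price: trapezoid(price, -1, 0, 80, 120)

for price in (50, 100, 130):
    print(price, round(affordable(price), 2))
# 50 -> 1.0 (fully affordable), 100 -> 0.5, 130 -> 0.0
```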
Abbaci, Katia. "Contribution à l'interrogation flexible et personnalisée d'objets complexes modélisés par des graphes." Thesis, Rennes 1, 2013. http://www.theses.fr/2013REN1S105/document.
Several application domains deal with complex objects whose structure, and the semantics of whose components, are crucial for their handling. For this reason, the graph structure has been adopted as a representation model in these areas, to capture a maximum of information related to the structure, semantics and behavior of such objects, necessary for effective representation and processing. Thus, when comparing two complex objects, a matching technique is applied between their graph structures. In this thesis, we are interested in approximate matching techniques, which constitute suitable tools to automatically find and select the graphs most similar to a user's graph query. The aim of our work is to develop methods for the personalized and flexible querying of repositories of complex objects modeled as graphs, and then to return the graphs that best fit the users' needs, often expressed partially and imprecisely. First, we propose a flexible approach for Web service retrieval that relies both on preference satisfiability and on structural similarity between process model graphs. This approach allows (i) the matching process to be improved by integrating user preferences and the structural aspect of graphs, and (ii) the most relevant services to be returned. A second method for evaluating graph similarity queries is also presented. It retrieves the graph similarity skyline of a user query by considering a vector of several graph distance measures instead of a single measure. Thus, graphs which are maximally similar to the graph query are returned in an ordered way. Finally, refinement methods have been developed to reduce the size of the skyline when it is too large. They aim to identify and order the skyline points that best match the user query.
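The skyline over a vector of graph distance measures amounts to keeping the Pareto-optimal candidates. The sketch below illustrates that dominance test on invented distance values; it is not the evaluation method or the refinement strategy of the thesis.

```python
def dominates(u, v):
    """u dominates v when u is at least as close on every measure and
    strictly closer on at least one (distances: smaller is better)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def skyline(candidates):
    return {g: d for g, d in candidates.items()
            if not any(dominates(d2, d) for g2, d2 in candidates.items() if g2 != g)}

# (structural edit distance, semantic distance) of each candidate graph
# with respect to the query graph.
candidates = {"g1": (2, 0.40), "g2": (5, 0.10), "g3": (4, 0.50), "g4": (3, 0.35)}
print(sorted(skyline(candidates)))   # g3 is dominated by g1 and g4
```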
Dachelet, Roland. "Sur la notion de sous-langage." Paris 8, 1994. http://www.theses.fr/1994PA080968.
The notion of sublanguage is part of Z. S. Harris's linguistic theory. Among sublanguages, domain sublanguages make it possible to express the sciences, one of them being linguistics where, specifically, the metalanguage is internal to its object. Sublanguages embody a vision of semantics quite different from classical semantics. This point is illustrated within the framework of the relationship between language and databases, as seen through a technology, natural language database front-ends, and an ergonomic problem, characterizing the universe of discourse of such systems' users. In the first chapter we present relational databases, their associated logical approach and semantic models. In the second chapter we present natural language front-ends, in particular those of the semantic grammar type. The third chapter is devoted to a presentation of the notion of sublanguage and of its place within Harris's theory of language. In chapter 4, we present the particular database on which the study is based; to the entity-relationship schema we associate a set of sublanguage-type formulas, and to the formulas a set of language items. In chapter 5, we present a query corpus and its production conditions. In chapter 6, we analyze the corpus using sublanguage analysis methods; we show that the derived formulas are very few, that they fall into two classes, and that they express a universe of discourse different from that of the database. In chapter 7, we compare semantics as embodied in the notion of sublanguage with classical semantics, and we show that Harris's theory breaks up conceptions most commonly taken for granted in the linguistic field.
Conde, Cespedes Patricia. "Modélisations et extensions du formalisme de l'analyse relationnelle mathématique à la modularisation des grands graphes." Paris 6, 2013. http://www.theses.fr/2013PA066654.
Graphs are the mathematical representation of networks. Since a graph is a special type of binary relation, graph clustering (or modularization) can be mathematically modelled using Mathematical Relational Analysis. This modelling allows numerous graph clustering criteria to be compared on the same type of formal representation. We show, through a relational coding, how to compare different modularization criteria such as Newman-Girvan, Zahn-Condorcet, Owsinski-Zadrozny, Demaine-Immorlica, Wei-Cheng, Profile Difference and Michalski-Goldberg. We introduce three modularization criteria: the Balanced Modularity, the deviation to Indetermination and the deviation to Uniformity. We identify the properties verified by those criteria and, for some of them, especially linear criteria, we characterize the partitions obtained by optimizing them. The final goal is to facilitate their understanding and their use in practical contexts, where their purposes become easily interpretable and understandable.
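As a reference point for one of the criteria compared above, the sketch below computes the standard Newman-Girvan modularity Q of a partition of a small undirected graph, using the usual formulation Q = sum over communities of (e_c - a_c^2), where e_c is the fraction of edges inside community c and a_c the fraction of edge endpoints attached to c. The toy graph is invented, and this is not the relational coding used in the thesis.

```python
def newman_girvan(edges, community):
    """edges: list of undirected pairs; community: vertex -> community id."""
    m = len(edges)
    inside = {}   # number of edges within each community
    degree = {}   # number of edge endpoints in each community
    for u, v in edges:
        cu, cv = community[u], community[v]
        degree[cu] = degree.get(cu, 0) + 1
        degree[cv] = degree.get(cv, 0) + 1
        if cu == cv:
            inside[cu] = inside.get(cu, 0) + 1
    return sum(inside.get(c, 0) / m - (degree[c] / (2 * m)) ** 2
               for c in degree)

# Two triangles joined by one bridge edge, split into communities A and B.
edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)]
community = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
print(round(newman_girvan(edges, community), 3))   # -> 0.357
```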