Theses on the topic "Bases de données factorisées"
Below are the top 50 dissertations (master's or doctoral theses) for research on the topic "Bases de données factorisées".
Crosetti, Nicolas. "Enrichir et résoudre des programmes linéaires avec des requêtes conjonctives". Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILB003.
Mathematical optimization and data management are two major fields of computer science that are widely studied by mostly separate communities. However, complex optimization problems often depend on large datasets that may be cumbersome to manage, while managing large amounts of data is only useful insofar as one analyzes it to extract knowledge that helps solve some practical problem, so in practice these fields are often intertwined. This thesis places itself at the crossroads of the two fields by studying linear programs that reason about the answers of database queries. The first contribution of this thesis is the definition of the so-called language of linear programs with conjunctive queries, or LP(CQ) for short. It is a language for modeling linear programs with constructs that express linear constraints and linear sums ranging over the answer sets of database queries given as conjunctive queries. We then describe the natural semantics of the language by showing how such models can be interpreted, together with a database, into actual linear programs that can then be solved by any standard linear program solver, and we discuss the hardness of solving LP(CQ) models. Motivated by this hardness in the general case, we introduce a process based on the so-called T-factorized interpretation to solve such models more efficiently. This approach is based on classical techniques from database theory, exploiting the structure of the queries through hypertree decompositions of small width. The T-factorized interpretation yields a linear program that has the same optimal value as the natural semantics of the model but fewer variables, and can thus be used to solve the model more efficiently. The third contribution is a generalization of the previous result to the framework of factorized databases. We introduce a specific circuit data structure to succinctly encode relations. We then define the so-called C-factorized interpretation, which leverages the succinctness of these circuits to yield a linear program with the same optimal value as the natural semantics of the model but fewer variables, similarly to the T-factorized interpretation. Finally, we show that the answer sets of conjunctive queries of small fractional hypertree width can be explicitly compiled into succinct circuits, thus allowing us to recapture the T-factorized interpretation.
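The variable-count saving behind the factorized interpretations can be illustrated on a toy join. This is an illustrative sketch with made-up relations, not the thesis's construction: under the natural semantics an LP gets one variable per query answer, whereas grouping on the shared variable (as a factorized circuit would) needs only the sum, not the product, of the group sizes.

```python
# Toy conjunctive query Q(x, y, z) :- R(x, y), S(y, z) over invented relations.
R = [(x, y) for x in range(10) for y in range(3)]   # 30 tuples
S = [(y, z) for y in range(3) for z in range(10)]   # 30 tuples

# Natural ("flat") semantics: one LP variable per query answer.
answers = [(x, y, z) for (x, y) in R for (y2, z) in S if y == y2]
flat_variables = len(answers)            # 3 groups of 10 * 10 answers = 300

# Factorized view: the answer set is a union over y of Cartesian products,
# so a circuit needs |R_y| + |S_y| leaves per group instead of |R_y| * |S_y|.
ys = {y for (_, y) in R} & {y for (y, _) in S}
factorized_leaves = sum(
    sum(1 for (_, y2) in R if y2 == y) + sum(1 for (y2, _) in S if y2 == y)
    for y in ys
)                                        # 3 * (10 + 10) = 60

print(flat_variables, factorized_leaves)
```

The gap widens quickly with the number of atoms and the domain sizes, which is why a small-width decomposition pays off.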
Gross-Amblard, David. "Tatouage des bases de données". Habilitation à diriger des recherches, Université de Bourgogne, 2010. http://tel.archives-ouvertes.fr/tel-00590970.
Waller, Emmanuel. "Méthodes et bases de données". Paris 11, 1993. http://www.theses.fr/1993PA112481.
Benchkron, Said Soumia. "Bases de données et logiciels intégrés". Paris 9, 1985. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1985PA090025.
Marie-Julie, Jean Michel. "Bases de données d'images- Calculateurs parallèles". Paris 6, 2000. http://www.theses.fr/2000PA066593.
Castelltort, Arnaud. "Historisation de données dans les bases de données NoSQL orientées graphes". Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20076.
This thesis deals with data historization in the context of graphs. Graph data have been dealt with for many years, but their exploitation in information systems, especially in NoSQL engines, is recent. The emerging Big Data and 3V contexts (Variety, Volume, Velocity) have revealed the limits of classical relational databases. Historization, for its part, has long been considered as linked only to technical and backup issues, and more recently to decisional reasons (Business Intelligence). However, historization is now taking on more and more importance in management applications. In this framework, graph databases, though often used, have received little attention regarding historization. Our first contribution consists in studying the impact of historized data in management information systems. This analysis relies on the hypothesis that historization is becoming increasingly important. Our second contribution aims at proposing an original model for managing historization in NoSQL graph databases. This proposition consists, on the one hand, in elaborating a unique and generic system for representing the history and, on the other hand, in proposing query features. We show that the system can support both simple and complex queries. Our contributions have been implemented and tested over synthetic and real databases.
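One common way to make a graph historized, sketched below with an invented toy class (not the thesis's model), is to never delete edges but to close their validity interval, so any past state of the graph remains queryable.

```python
class HistorizedGraph:
    """Toy historized graph: edges carry [valid_from, valid_to) intervals;
    a "removal" closes the interval instead of deleting the edge."""

    def __init__(self):
        self.edges = []  # [src, dst, valid_from, valid_to]; to=None means open

    def add_edge(self, src, dst, t):
        self.edges.append([src, dst, t, None])

    def remove_edge(self, src, dst, t):
        for e in self.edges:
            if e[0] == src and e[1] == dst and e[3] is None:
                e[3] = t  # close the validity interval

    def neighbors_at(self, src, t):
        # reconstruct the adjacency as it was at time t
        return sorted(dst for s, dst, t0, t1 in self.edges
                      if s == src and t0 <= t and (t1 is None or t < t1))

g = HistorizedGraph()
g.add_edge("alice", "bob", t=1)
g.add_edge("alice", "carol", t=3)
g.remove_edge("alice", "bob", t=5)
print(g.neighbors_at("alice", 4))  # ['bob', 'carol']
print(g.neighbors_at("alice", 6))  # ['carol']
```

In a property-graph engine the same idea is typically carried by interval properties on relationships rather than a separate edge list.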
Voisard, Agnès. "Bases de données géographiques : du modèle de données à l'interface utilisateur". Paris 11, 1992. http://www.theses.fr/1992PA112354.
Nguyen, Gia Toan. "Quelques fonctionnalités de bases de données avancées". Habilitation à diriger des recherches, Grenoble 1, 1986. http://tel.archives-ouvertes.fr/tel-00321615.
Qian, Shunchu. "Restructuration de bases de données entité-association". Dijon, 1995. http://www.theses.fr/1995DIJOS064.
Gross-Amblard, David. "Approximation dans les bases de données contraintes". Paris 11, 2000. http://www.theses.fr/2000PA112304.
Collobert, Ronan. "Algorithmes d'Apprentissage pour grandes bases de données". Paris 6, 2004. http://www.theses.fr/2004PA066063.
Bossy, Robert. "Édition coopérative de bases de données scientifiques". Paris 6, 2002. http://www.theses.fr/2002PA066047.
Valceschini-Deza, Nathalie. "Accès sémantique aux bases de données textuelles". Nancy 2, 1999. http://www.theses.fr/1999NAN21021.
Souihli, Asma. "Interrogation des bases de données XML probabilistes". Thesis, Paris, ENST, 2012. http://www.theses.fr/2012ENST0046/document.
Probabilistic XML is a probabilistic model for uncertain tree-structured data, with applications to data integration, information extraction, or uncertain version control. We explore in this dissertation efficient algorithms for evaluating tree-pattern queries with joins over probabilistic XML or, more specifically, for approximating the probability of each item of a query result. The approach relies on, first, extracting the query lineage over the probabilistic XML document, and, second, looking for an optimal strategy to approximate the probability of the propositional lineage formula. ProApproX is the probabilistic query manager for probabilistic XML presented in this thesis. The system allows users to query uncertain tree-structured data in the form of probabilistic XML documents. It integrates a query engine that searches for an optimal strategy to evaluate the probability of the query lineage. ProApproX relies on a query-optimizer-like approach: exploring different evaluation plans for different parts of the formula and predicting the cost of each plan, using a cost model for the various evaluation algorithms. We demonstrate the efficiency of this approach on datasets used in a number of the most popular previous works on probabilistic XML querying, as well as on synthetic data. An early version of the system was demonstrated at the ACM SIGMOD 2011 conference. First steps towards the new query solution were discussed in an EDBT/ICDT PhD Workshop paper (2011). A fully redesigned version that implements the techniques and studies shared in the present thesis is published as a demonstration at CIKM 2012. Our contributions are also part of an IEEE ICDE
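The baseline against which any lineage-approximation strategy is measured is exact evaluation: the probability that the propositional lineage formula holds under independent Boolean events. A brute-force version, fine for toy formulas (exponential in the number of events), can be sketched as follows; the formula and probabilities are invented for illustration.

```python
from itertools import product

def exact_probability(formula, probs):
    """Probability that a propositional lineage formula holds, by
    enumerating all worlds over the independent Boolean events."""
    names = sorted(probs)
    total = 0.0
    for values in product([False, True], repeat=len(names)):
        world = dict(zip(names, values))
        p = 1.0
        for n in names:
            p *= probs[n] if world[n] else 1.0 - probs[n]
        if formula(world):
            total += p
    return total

# Lineage of one hypothetical query answer: (x AND y) OR z.
lineage = lambda w: (w["x"] and w["y"]) or w["z"]
p = exact_probability(lineage, {"x": 0.5, "y": 0.5, "z": 0.5})
print(p)  # 0.625
```

Since exact evaluation blows up on large lineages, a system like the one described above mixes such exact computation on small sub-formulas with sampling-based approximation elsewhere, guided by a cost model.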
Ripoche, Hugues. "Une construction interactive d'interprétations de données : application aux bases de données de séquences génétiques". Montpellier 2, 1995. http://www.theses.fr/1995MON20248.
Benzine, Mehdi. "Combinaison sécurisée des données publiques et sensibles dans les bases de données". Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0024.
Protection of sensitive data is a major issue in the databases field. Many software and hardware solutions have been designed to protect data when stored and during query processing. Moreover, it is also necessary to provide a secure manner to combine sensitive data with public data. To achieve this goal, we designed a new storage and processing architecture. Our solution combines a main server that stores public data and a secure server dedicated to the storage and processing of sensitive data. The secure server is a hardware token which is basically a combination of (i) a secured microcontroller and (ii) a large external NAND Flash memory. The queries which combine public and sensitive data are split in two sub-queries: the first one deals with the public data, the second one with the sensitive data. Each sub-query is processed on the server storing the corresponding data. Finally, the data obtained by the computation of the sub-query on public data is sent to the secure server to be mixed with the result of the computation on sensitive data. For security reasons, the final result is built on the secure server. This architecture resolves the security problems, because all the computations dealing with sensitive data are done by the secure server, but brings performance problems (little RAM, asymmetric cost of read/write operations, etc.). These problems are addressed by different query optimization strategies.
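The split-execution idea can be sketched in a few lines. All data, names and the query below are invented for illustration; the point is only the information flow: the public server never sees sensitive rows, and the final combination happens on the secure side.

```python
# Hypothetical data sets standing in for the two servers' contents.
PUBLIC_DRUGS = [                      # main (untrusted) server
    {"drug": "aspirin", "price": 2.0},
    {"drug": "insulin", "price": 30.0},
]
SENSITIVE_PRESCRIPTIONS = [           # secure server (hardware token)
    {"patient": "p1", "drug": "insulin"},
    {"patient": "p2", "drug": "aspirin"},
]

def public_subquery(max_price):
    # Runs on the main server: touches only public data.
    return [r for r in PUBLIC_DRUGS if r["price"] <= max_price]

def secure_combine(public_rows):
    # Runs on the secure server: sensitive data never leaves it,
    # and the final result is assembled here.
    allowed = {r["drug"] for r in public_rows}
    return sorted(p["patient"] for p in SENSITIVE_PRESCRIPTIONS
                  if p["drug"] in allowed)

result = secure_combine(public_subquery(max_price=10.0))
print(result)  # ['p2']
```

The performance constraints mentioned above come from the second half: `secure_combine` must run inside a device with little RAM and costly writes, which is what drives the optimization strategies.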
Léonard, Michel. "Conception d'une structure de données dans les environnements de bases de données". Grenoble 1, 1988. http://tel.archives-ouvertes.fr/tel-00327370.
Smine, Hatem. "Outils d'aide à la conception : des bases de données relationnelles aux bases d'objets complexes". Nice, 1988. http://www.theses.fr/1988NICE4213.
Sahri, Soror. "Conception et implantation d'un système de bases de données distribuée & scalable : SD-SQL Server". Paris 9, 2006. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2006PA090013.
Our thesis elaborates on the design of a scalable distributed database system (SD-DBS). A novel feature of an SD-DBS is the concept of a scalable distributed relational table, a scalable table in short. Such a table accommodates dynamic splits of its segments at SD-DBS storage nodes. A split occurs when an insert makes a segment overflow, as in a B-tree file, for example. Current DBMSs provide only static partitioning, requiring a cumbersome global reorganization from time to time. The transparency of the distribution of a scalable table is in this light an important step beyond the current technology. Our thesis explores the design issues of an SD-DBS by constructing a prototype termed SD-SQL Server. As its name indicates, it uses the services of SQL Server. SD-SQL Server repartitions a table when an insert overflows existing segments. With the comfort of a single-node SQL Server user, the SD-SQL Server user has larger tables or a faster response time through the dynamic parallelism. We present the architecture of our system, its implementation and the performance analysis.
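The segment-split behavior can be sketched with a toy range-partitioned table (an invented simplification, with in-memory lists standing in for storage nodes): when an insert overflows a segment, half its keys migrate to a new segment, just as a B-tree leaf splits.

```python
import bisect

class ScalableTable:
    """Toy scalable table: sorted key lists as segments; an overflowing
    insert triggers a split that creates a new segment ("node")."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.segments = [[]]     # sorted key lists, one per "node"
        self.lows = [None]       # lower bound of each segment's key range

    def _find(self, key):
        i = len(self.segments) - 1
        while i > 0 and self.lows[i] > key:
            i -= 1
        return i

    def insert(self, key):
        i = self._find(key)
        bisect.insort(self.segments[i], key)
        if len(self.segments[i]) > self.capacity:   # overflow: split
            seg = self.segments[i]
            mid = len(seg) // 2
            self.segments[i] = seg[:mid]
            self.segments.insert(i + 1, seg[mid:])
            self.lows.insert(i + 1, seg[mid])

table = ScalableTable(capacity=4)
for k in range(10):
    table.insert(k)
print([len(s) for s in table.segments])
```

The transparency claim above is the contrast with this sketch: the user sees one table, while splits and new segments appear behind the scenes instead of via a manual global repartitioning.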
Bost, Raphaël. "Algorithmes de recherche sur bases de données chiffrées". Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S001/document.
Searchable encryption aims at making efficient a seemingly easy task: outsourcing the storage of a database to an untrusted server while keeping search features. With the development of Cloud storage services, for both private individuals and businesses, the efficiency of searchable encryption became crucial: inefficient constructions would not be deployed on a large scale because they would not be usable. The key problem with searchable encryption is that any construction achieving "perfect security" induces a computational or communicational overhead that is unacceptable for the providers or for the users, at least with current techniques and by today's standards. This thesis proposes and studies new security notions and new constructions of searchable encryption, aiming at making it more efficient and more secure. In particular, we start by considering the forward and backward privacy of searchable encryption schemes, what they imply in terms of security and efficiency, and how we can realize them. Then, we show how to protect an encrypted database user against active attacks by the Cloud provider, and that such protections have an inherent efficiency cost. Finally, we take a look at existing attacks against searchable encryption and explain how we might thwart them.
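A minimal single-keyword searchable index conveys the basic trade-off. This toy (invented key, documents and keywords) stores only HMAC tokens on the server, so the server learns which opaque token is searched but not the keyword; note that because the tokens are deterministic, this sketch deliberately lacks the forward privacy studied in the thesis.

```python
import hmac, hashlib

KEY = b"client-secret-key"  # held by the client only (hypothetical)

def token(word):
    # deterministic keyword token: the server sees this, never the word
    return hmac.new(KEY, word.encode(), hashlib.sha256).hexdigest()

def build_index(docs):
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(token(w), []).append(doc_id)
    return index  # outsourced to the untrusted server

def search(index, word):
    # the client sends only token(word); the server returns matching ids
    return sorted(index.get(token(word), []))

index = build_index({1: {"cloud", "crypto"}, 2: {"crypto"}, 3: {"cloud"}})
print(search(index, "crypto"))  # [1, 2]
```

Forward privacy would additionally require that a token used for a search cannot be matched against documents added later, which deterministic tokens like these cannot provide.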
Nunez, Del Prado Cortez Miguel. "Attaques d'inférence sur des bases de données géolocalisées". Phd thesis, INSA de Toulouse, 2013. http://tel.archives-ouvertes.fr/tel-00926957.
Najjar, Ahmed. "Forage de données de bases administratives en santé". Doctoral thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/28162.
Current health systems are increasingly equipped with data collection and storage systems. Therefore, a huge amount of data is stored in medical databases. These databases, designed for administrative or billing purposes, are fed with new data whenever the patient uses the healthcare system. This specificity makes these databases a rich and extremely interesting source of information. They can unveil the constraints of reality, capturing elements from a great variety of real medical care situations, and could thus allow the conception and modeling of the medical treatment process. However, despite their obvious interest, these administrative databases are still underexploited by researchers. In this thesis, we propose a new approach to mining administrative data to detect patterns in patient care trajectories. Firstly, we propose an algorithm able to cluster complex objects that represent medical services. These objects are characterized by a mixture of numerical, categorical and multivalued categorical variables. We thus propose to extract one projection space for each multivalued variable and to modify the computation of the distance between objects to take these projections into account. Secondly, a two-step mixture model is proposed to cluster these objects. This model uses the Gaussian distribution for the numerical variables, the multinomial distribution for the categorical variables and hidden Markov models (HMMs) for the multivalued variables. Finally, we obtain two algorithms able to cluster complex objects characterized by a mixture of variables. Once this stage is reached, an approach for discovering patterns in care trajectories is set up. This approach involves the following steps: (1) preprocessing, which allows the building and generation of medical service sets; three sets of medical services are thus obtained, one for hospital stays, one for consultations and one for visits; (2) modeling of treatment processes as successions of labels of medical services; these complex processes require a sophisticated clustering method, so we propose a clustering algorithm based on HMMs; (3) creation of an approach for visualizing and analyzing the trajectory patterns in order to mine the discovered models. Together, these steps form the knowledge discovery process from medical administrative databases. We apply this approach to databases for patients over 65 years old who live in the province of Quebec and suffer from heart failure. The data are extracted from three databases: the MSSS MED-ÉCHO database, the RAMQ bank and the database containing death certificate data. The obtained results clearly demonstrate the effectiveness of our approach by detecting special patterns that can help healthcare administrators to better manage health treatments.
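One ingredient of the first step, a distance over objects mixing numeric, categorical and multivalued categorical variables, can be sketched Gower-style. This is a deliberate simplification with invented fields (`age`, `sex`, `acts`), not the thesis's projection-based construction: each variable type contributes a normalized term, with Jaccard distance handling the multivalued sets.

```python
def mixed_distance(a, b, age_range=100.0):
    """Normalized distance over a toy 'medical service' record mixing
    one numeric, one categorical and one multivalued variable."""
    d_num = abs(a["age"] - b["age"]) / age_range          # numeric term
    d_cat = 0.0 if a["sex"] == b["sex"] else 1.0          # categorical term
    inter = len(a["acts"] & b["acts"])                    # multivalued term:
    union = len(a["acts"] | b["acts"])                    # Jaccard distance
    d_multi = 1.0 - (inter / union if union else 1.0)
    return (d_num + d_cat + d_multi) / 3.0

p1 = {"age": 70, "sex": "F", "acts": {"echo", "ecg"}}
p2 = {"age": 80, "sex": "F", "acts": {"ecg"}}
print(mixed_distance(p1, p2))  # 0.2
```

Any clustering algorithm that only needs pairwise distances (k-medoids, hierarchical clustering) can then run on such objects directly.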
Thion-Goasdoue, Virginie. "Bases de données, contraintes d'intégrité et logiques modales". Paris 11, 2004. http://www.theses.fr/2004PA112134.
In this thesis, we use tableau systems for modal logics in order to solve database problems related to integrity constraints. In the first part, we use a tableau system for first-order modal logics in the context of a method testing integrity constraint preservation in an object-oriented database. We develop a proof search strategy and prove that it is sound and complete in its unbounded version. This leads to the implementation of a theorem prover for the first-order modal logics K, K4, D, T and S4. The prover can also be used for other applications where testing the validity of first-order modal logics is needed (software verification, multi-agent systems, etc.). In the second part, we study hybrid multi-modal logic (HMML) as a formalism to express schemas and integrity constraints for semi-structured data. On the one hand we prove that HMML captures the notion of semi-structured data and constraints on it. On the other hand we generalize the notion of schema by proposing a definition of schema where references are "well typed" (contrary to what happens with DTDs), and we prove that this new notion can be formalized by sentences of HMML exactly as a constraint is. When a tableau system for HMML is added to this approach, some classical database problems can be treated (constraint implication, schema inclusion, constraint satisfiability, etc.)
Guo, Yanli. "Confidentialité et intégrité de bases de données embarquées". Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0038.
As a decentralized way of managing personal data, the Personal Data Server (PDS) approach resorts to Secure Portable Tokens (SPTs), combining the tamper resistance of a smart card microcontroller with the mass storage capacity of NAND Flash. Data is stored and accessed, and its access rights are controlled, using such devices. To support powerful PDS application requirements, a full-fledged DBMS engine is embedded in the SPT. This thesis addresses two problems with the confidentiality and integrity of personal data: (i) the database stored on the NAND Flash remains outside the security perimeter of the microcontroller, thus potentially suffering from attacks; (ii) the PDS approach relies on supporting servers to provide durability, availability, and global processing functionalities, and appropriate protocols must ensure that these servers cannot breach the confidentiality of the manipulated data. The proposed solutions rely on cryptographic techniques without incurring a large overhead.
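The first problem, storage that sits outside the chip's security perimeter, is classically handled by authenticating each page with a MAC and a version counter kept inside the chip; the sketch below is a generic illustration of that idea (invented key and page layout), not the thesis's actual protocol.

```python
import hmac, hashlib

SECRET = b"key-inside-microcontroller"  # hypothetical; never leaves the chip

def seal(page_no, version, data):
    # MAC over (page number, version, data): binds content to its location
    # and freshness, so moved, altered or replayed pages are detectable.
    tag = hmac.new(SECRET, b"%d|%d|" % (page_no, version) + data,
                   hashlib.sha256).digest()
    return (page_no, version, data, tag)

def verify(page, expected_version):
    page_no, version, data, tag = page
    if version != expected_version:   # stale copy: replay attack
        return None
    good = hmac.new(SECRET, b"%d|%d|" % (page_no, version) + data,
                    hashlib.sha256).digest()
    return data if hmac.compare_digest(tag, good) else None

p = seal(7, version=2, data=b"balance=100")
print(verify(p, expected_version=2))  # b'balance=100'
print(verify(p, expected_version=3))  # None: replayed old version
```

Confidentiality would additionally require encrypting `data` before sealing; the Python standard library has no block cipher, so that step is omitted here.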
Lavergne-Boudier, Valérie. "Système dynamique d'interrogation des bases de données bibliographiques". Paris 7, 1990. http://www.theses.fr/1990PA077243.
Raïssi, Chedy. "Extraction de Séquences Fréquentes : Des Bases de Données Statiques aux Flots de Données". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2008. http://tel.archives-ouvertes.fr/tel-00351626.
Laurent, Anne. "Bases de données multidimensionnelles floues et leur utilisation pour la fouille de données". Paris 6, 2002. http://www.theses.fr/2002PA066426.
Raissi, Chedy. "Extraction de séquences fréquentes : des bases de données statiques aux flots de données". Montpellier 2, 2008. http://www.theses.fr/2008MON20063.
Laabi, Abderrazzak. "Étude et réalisation de la gestion des articles appartenant à des bases de données gérées par une machine bases de données". Paris 11, 1987. http://www.theses.fr/1987PA112338.
The work presented in this thesis is part of a study and development project concerning the design of three layers of the DBMS on the DORSAL-32 Data Base Machine. The first layer ensures record management within the storage areas, record and page locking organization according to the access mode and transaction coherency degree. It also handles the micro-logs which guarantee the atomicity of an action. The second layer handles transaction logging and warm restarts, which guarantee the atomicity and durability of a transaction. The third layer ensures simultaneous access management and the handling of lock tables. Performance measures of the methods used are also presented. The last chapter of this report contains a research work concerning the implementation of the virtual linear hashing method in our DBMS. The problem studied is the transfer of records from one page to another. Under these conditions, the record pointers classically used do not permit direct access. We propose a new pointer which enables direct access to the record, regardless of the page that contains it at a given instant.
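The pointer problem can be made concrete with a toy paged file (an invented simplification, not the DORSAL-32 design): when a page splits, records migrate, so a raw (page, slot) pointer goes stale; keeping a stable record id plus a small id-to-page map preserves one-hop access after any migration.

```python
class PagedFile:
    """Toy paged file: pages split on overflow and records migrate;
    'where' maps a stable record id to its current page, so lookups
    survive migrations with a single page access."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self.pages = [[]]        # each page is a list of record ids
        self.where = {}          # stable rid -> current page index

    def insert(self, rid):
        page = len(self.pages) - 1   # toy policy: append to the tail page
        self.pages[page].append(rid)
        self.where[rid] = page
        if len(self.pages[page]) > self.capacity:
            self._split(page)

    def _split(self, page):
        # records migrate by parity (stand-in for a hash-driven split)
        stay = [r for r in self.pages[page] if r % 2 == 0]
        move = [r for r in self.pages[page] if r % 2 == 1]
        self.pages[page] = stay
        self.pages.append(move)
        new = len(self.pages) - 1
        for r in move:
            self.where[r] = new      # pointers stay valid after the move

    def fetch(self, rid):
        return rid in self.pages[self.where[rid]]  # one page access

f = PagedFile()
for r in [10, 11, 12, 13, 14]:
    f.insert(r)
print(all(f.fetch(r) for r in [10, 11, 12, 13, 14]))  # True
```

The cost of the indirection table is the trade-off the thesis's new pointer addresses: direct access without a lookup structure that itself must be maintained.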
Baptiste, Pierre. "Contribution à la conception d'un atelier flexible : définition de la base de données techniques, ordonnancement de taches à temps de réglage variables". Lyon, INSA, 1985. http://www.theses.fr/1985ISAL0038.
This work is a contribution to the elaboration of an information system for a PMS. It deals with the construction of a technical database and with a scheduling method that minimizes the number of set-up operations. Four main parts can be distinguished. - In the first part, the PMS studied is described. - The second is a review of the different methods used to construct a production manufacturing system (the ones based on the physical system and the ones based on the information system); the choice of the MERISE method is justified. - In the third part, some conceptual models are presented (describing routings, tools, fixtures, DNC programmes, etc.). - Finally, a scheduling method that minimizes set-up times is proposed. This method uses mathematical tools such as Galois lattices and interval graphs. A prototype of this method gives very good results in numerous examples: about 50% of set-up operations could be avoided in all cases studied.
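The kind of saving reported above can be illustrated with a much cruder model than the thesis's Galois-lattice method: count a set-up each time a job needs a tool not currently mounted, with a bounded tool magazine, and compare job orders. Data and policy below are invented.

```python
def setups(sequence, tools, magazine_size):
    """Count tool set-ups along a job sequence (toy model)."""
    mounted, count = set(), 0
    for job in sequence:
        need = tools[job]
        missing = need - mounted
        count += len(missing)        # one set-up per newly mounted tool
        mounted |= missing
        while len(mounted) > magazine_size:
            # evict any tool the current job does not need (toy policy)
            mounted.remove(next(iter(mounted - need)))
    return count

tools = {"a": {1, 2}, "b": {1, 2}, "c": {3, 4}, "d": {3, 4}}
print(setups(["a", "c", "b", "d"], tools, magazine_size=2))  # 8
print(setups(["a", "b", "c", "d"], tools, magazine_size=2))  # 4
```

Grouping jobs with shared tool sets halves the set-ups here, the same order of improvement the prototype reports on real cases.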
Mahfoudi, Abdelwahab. "Contribution a l'algorithmique pour l'analyse des bases de données statistiques hétérogènes". Dijon, 1995. http://www.theses.fr/1995DIJOS009.
Boullé, Marc. "Recherche d'une représentation des données efficace pour la fouille des grandes bases de données". Phd thesis, Télécom ParisTech, 2007. http://pastel.archives-ouvertes.fr/pastel-00003023.
Curé, Olivier. "Relations entre bases de données et ontologies dans le cadre du web des données". Habilitation à diriger des recherches, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00843284.
Charmpi, Konstantina. "Méthodes statistiques pour la fouille de données dans les bases de données de génomique". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENM017/document.
Our focus is on statistical testing methods that compare a given vector of numeric values, indexed by all genes in the human genome, to a given set of genes, known to be associated with a particular type of cancer for instance. Among existing methods, Gene Set Enrichment Analysis (GSEA) is the most widely used. However, it has several drawbacks. Firstly, the calculation of p-values is very time-consuming and insufficiently precise. Secondly, like most other methods, it outputs a large number of significant results, the majority of which are not biologically meaningful. Both issues are addressed here by two new statistical procedures, the Weighted and Doubly Weighted Kolmogorov-Smirnov (WKS and DWKS) tests. The two tests have been applied both to simulated and real data and compared with other existing procedures. Our conclusion is that, beyond their mathematical and algorithmic advantages, the WKS and DWKS tests could be more informative in many cases than the classical GSEA test, and efficiently address the issues that led to their construction.
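The Kolmogorov-Smirnov idea underlying both GSEA and the weighted variants is a running sum over the ranked gene list: step up on gene-set members, down otherwise, and report the maximum deviation. The unweighted sketch below (invented gene names) is the baseline the WKS/DWKS tests refine with weights.

```python
def enrichment_score(ranked_genes, gene_set):
    """Unweighted KS-style enrichment score over a ranked gene list."""
    hits = [g in gene_set for g in ranked_genes]
    n_hit = sum(hits)
    n_miss = len(ranked_genes) - n_hit
    up, down = 1.0 / n_hit, 1.0 / n_miss
    running, best = 0.0, 0.0
    for h in hits:
        running += up if h else -down
        best = max(best, abs(running))
    return best

ranked = ["g1", "g2", "g3", "g4", "g5", "g6"]
print(enrichment_score(ranked, {"g1", "g2"}))  # 1.0: members lead the list
print(enrichment_score(ranked, {"g3", "g6"}))  # 0.5: members spread out
```

The expensive part criticized above is not this statistic but its p-value, classically obtained by permuting the ranking many times, which is what motivates faster, more precise alternatives.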
Kezouit, Omar Abdelaziz. "Bases de données relationnelles et analyse de données : conception et réalisation d'un système intégré". Paris 11, 1987. http://www.theses.fr/1987PA112130.
Zelasco, José Francisco. "Gestion des données : contrôle de qualité des modèles numériques des bases de données géographiques". Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20232.
A Digital Surface Model (DSM) is a numerical surface model formed by a set of points, arranged as a grid, used to study some physical surface, as in Digital Elevation Models (DEMs), or for other possible applications such as a face or some anatomical organ. The study of the precision of these models, which is of particular interest for DEMs, has been the object of several studies in the last decades. Measuring the precision of a DSM, in relation to another model of the same physical surface, consists in estimating the expectation of the squares of differences between pairs of points, called homologous points, one in each model, which correspond to the same feature of the physical surface. But these pairs are not easily discernible, the grids may not be coincident, and the differences between homologous points corresponding to benchmarks in the physical surface might be subject to special conditions, such as more careful measurements than on ordinary points, which imply a different precision. The generally used procedure to avoid these inconveniences has been to use the squares of vertical distances between the models, which addresses only the vertical component of the error, thus giving a biased estimate when the surface is not horizontal. The Perpendicular Distance Evaluation Method (PDEM), which avoids this bias, provides estimates for the vertical and horizontal components of errors, and is thus a useful tool for detecting discrepancies in Digital Surface Models such as DEMs. The solution includes a special reference to the simplification which arises when the error does not vary in all horizontal directions. The PDEM is also assessed with DEMs obtained by means of the SAR interferometry technique.
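The bias of purely vertical distances is easy to quantify on an idealized case: for two parallel planar models separated by a true perpendicular gap d on terrain of slope a, the vertical distance is d / cos(arctan(a)), so vertical-only error estimates inflate with slope. The sketch below is this textbook geometry, not the PDEM itself.

```python
import math

def vertical_gap(slope_a, perp_gap):
    """Vertical distance between two parallel planes z = a*x + c that are
    separated by a true perpendicular gap 'perp_gap'."""
    return perp_gap / math.cos(math.atan(slope_a))

flat = vertical_gap(0.0, 1.0)   # 1.0: no bias on horizontal surfaces
steep = vertical_gap(1.0, 1.0)  # sqrt(2): ~41% overestimate at 45 degrees
print(flat, steep)
```

This is exactly the bias the perpendicular-distance estimate removes, and it explains why the effect matters most for steep terrain in DEM quality control.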
Ykhlef, Mourad. "Interrogation des données semistructurées". Bordeaux 1, 1999. http://www.theses.fr/1999BOR1A640.
Ykhlef, Mourad. "Interrogation des données semistructurées". Bordeaux 1, 1999. http://www.theses.fr/1999BOR10670.
Jomier, Geneviève. "Bases de données relationnelles : le système PEPIN et ses extensions". Paris 5, 1989. http://www.theses.fr/1989PA05S008.
Fallouh, Fouad. "Données complexes et relation universelle avec inclusions : une aide à la conception et à l'interrogation des bases de données". Lyon 1, 1994. http://www.theses.fr/1994LYO10217.
Jacob, Stéphane. "Protection cryptographique des bases de données : conception et cryptanalyse". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00738272.
Coulon, Cedric. "Réplication Préventive dans une grappe de bases de données". Phd thesis, Université de Nantes, 2006. http://tel.archives-ouvertes.fr/tel-00481299.
Collet, Christine. "Les formulaires complexes dans les bases de données multimédia". Phd thesis, Grenoble 1, 1987. http://tel.archives-ouvertes.fr/tel-00325851.
Bouganim, Luc. "Sécurisation du Contrôle d'Accès dans les Bases de Données". Habilitation à diriger des recherches, Université de Versailles-Saint Quentin en Yvelines, 2006. http://tel.archives-ouvertes.fr/tel-00308620.
Verlaine, Lionel. "Optimisation des requêtes dans une machine bases de données". Paris 6, 1986. http://www.theses.fr/1986PA066532.
Jault, Claude. "Méthodologie de la conception des bases de données relationnelles". Paris 9, 1989. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1989PA090011.
This thesis analyses the different relational database design methods and, because of their insufficiencies, proposes a new method. The first chapter presents the concepts: conceptual and logical schemas and models, links between entities, connection cardinalities, relational model concepts (relations, dependencies, primary and foreign keys), normalization (with the demonstration that the 4th normal form is not included in the 3rd), integrity constraints (domain, relation, reference), null values, and a new type of constraint, the constraints between links. The second chapter gives an account of the different methods, which can be dispatched in three groups: those which utilize the entity-relationship model (the American and French model versions with their extensions, the axial method, the Remora method); those which do not utilize a conceptual schema (the universal relation approach, the Codd and Date approach, the view integration approach); and the NIAM method, which uses semantic networks. The third chapter exposes the entity-link-relation method elaborated in this thesis. It is supported by a conceptual model representing the entities and their links, with the integrity constraints between these links. It proceeds in three phases: the global conceptual approach, centered on entities and links (1:n and 1:1, the m:n links converted to two 1:n links); the detailed conceptual approach, which defines the attributes and the semantic domains, normalizes entities, and examines non-permanent dependencies and the link constraints; and the logical approach, which gives the relational schema, controls its normality, defines integrity constraints and solves referential deadlocks. The fourth chapter gives one concrete case of the entity-link-relation method.
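The normalization and key-control steps described above rest on functional-dependency reasoning; the standard mechanical tool for it is the attribute-closure algorithm, sketched here on invented FDs to check what a candidate key determines.

```python
def closure(attrs, fds):
    """Attribute closure under a set of functional dependencies,
    each given as a pair (lhs_set, rhs_set)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Hypothetical FDs: isbn -> title, author ; author -> nationality.
fds = [({"isbn"}, {"title", "author"}), ({"author"}, {"nationality"})]
print(sorted(closure({"isbn"}, fds)))
```

A relation schema is then in BCNF exactly when every non-trivial FD's left-hand side has a closure covering all attributes, which is the kind of normality control the method's logical phase performs.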
Fansi, Janvier. "Sécurité des bases de données XML (eXtensible Markup Language)". Pau, 2007. http://www.theses.fr/2007PAUU3007.
XML has emerged as the de facto standard for representing and exchanging information on the Internet. As the Internet is a public network, corporations and organizations which use XML need mechanisms to protect XML data against unauthorized access. Thus, several schemes for XML access control have been proposed. They can be classified in two major categories: view materialization and query rewriting techniques. In this thesis, we point out the drawbacks of view materialization approaches through the development of a prototype of a secured XML database based on one of those approaches. Afterwards, we propose a technique aimed at securing XML by means of query rewriting. We prove its correctness and show that it is more efficient than competing works. Finally, we extend our proposal in order to control the updating of XML databases.
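The query-rewriting idea, as opposed to materializing a sanitized view, can be illustrated in miniature: the user's path query is narrowed with a predicate before evaluation, so denied elements can never appear in a result. The document, policy and rewriting rule below are invented for illustration.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<patients>"
    "<patient ward='A'><name>Ann</name></patient>"
    "<patient ward='B'><name>Bob</name></patient>"
    "</patients>"
)

def rewrite(query, allowed_ward):
    # naive rewriting: restrict every 'patient' step to the allowed ward
    return query.replace("patient", "patient[@ward='%s']" % allowed_ward)

q = "./patient/name"
safe_q = rewrite(q, "A")            # "./patient[@ward='A']/name"
print([e.text for e in doc.findall(safe_q)])  # ['Ann']
```

The appeal over view materialization is that no per-user copy of the database is built or kept in sync; the policy is enforced at query time on the single shared document.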
Hammiche, Samira. "Approximation de requêtes dans les bases de données multimédia". Lyon 1, 2007. http://www.theses.fr/2007LYO10080.
Grison, Thierry. "Intégration de schémas de bases de données entité-association". Dijon, 1994. http://www.theses.fr/1994DIJOS005.