Theses on the topic "Fragmentation (informatique)"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the 18 best theses for your research on the topic "Fragmentation (informatique)".
Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Ranéa, Pierre-Guy. "La tolérance aux intrusions par fragmentation-dissémination". Toulouse, INPT, 1989. http://www.theses.fr/1989INPT007H.
Kapusta, Katarzyna. "Protecting data confidentiality combining data fragmentation, encryption, and dispersal over a distributed environment". Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0061.
This thesis dissertation revisits state-of-the-art fragmentation techniques, making them faster and more cost-efficient. The main focus is on increasing data confidentiality without deteriorating processing performance. The ultimate goal is to provide the user with a set of fast fragmentation methods that can be applied directly in an industrial context to reinforce the confidentiality of stored data and/or accelerate fragmentation processing. First, a rich survey of fragmentation as a way of preserving data confidentiality is presented. Second, the family of all-or-nothing transforms is extended with three new proposals. They all aim at protecting encrypted and fragmented data against exposure of the encryption key, but are designed to be employed in three different contexts: data fragmentation in a multi-cloud environment, a distributed storage system, and an environment composed of one storage provider and one private device. Third, a way of accelerating fragmentation is presented that achieves better performance than data encryption with the most common symmetric-key encryption algorithm. Fourth, a lightweight fragmentation scheme based on data encoding, permuting, and dispersing is introduced. It dispenses with data encryption entirely, allowing fragmentation to be performed even faster, up to twice as fast as data encryption. Finally, fragmentation inside sensor networks is revisited, particularly in Unattended Wireless Sensor Networks. In this case the focus is not solely on fragmentation performance, but also on reducing storage and transmission costs by using data aggregation.
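As a generic illustration of the encrypt–fragment–disperse idea that this line of work builds on (not the algorithms proposed in the thesis), the following Python sketch hides the encryption key inside the fragments so that every fragment is needed to decrypt; the toy XOR keystream cipher and all function names are assumptions made for readability.

```python
# Illustrative sketch only: a generic "encrypt, hide the key, fragment, disperse"
# flow in the spirit of all-or-nothing dispersal schemes (not the algorithms of
# the cited thesis). Standard library only; the toy XOR keystream stands in for
# a real symmetric cipher such as AES.
import hashlib
import os

def _keystream_xor(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR the data with a SHA-256-based keystream.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def fragment(data: bytes, n_fragments: int):
    """Encrypt, split into n fragments, and mask the key with the fragments."""
    key = os.urandom(32)
    ciphertext = _keystream_xor(data, key)
    size = -(-len(ciphertext) // n_fragments)  # ceiling division
    parts = [ciphertext[i * size:(i + 1) * size] for i in range(n_fragments)]
    # All-or-nothing flavour: the key can only be unmasked with *every* fragment.
    masked_key = key
    for p in parts:
        digest = hashlib.sha256(p).digest()
        masked_key = bytes(a ^ b for a, b in zip(masked_key, digest))
    return parts, masked_key  # each part goes to a different storage location

def defragment(parts, masked_key: bytes) -> bytes:
    key = masked_key
    for p in parts:
        digest = hashlib.sha256(p).digest()
        key = bytes(a ^ b for a, b in zip(key, digest))
    return _keystream_xor(b"".join(parts), key)

if __name__ == "__main__":
    fragments, masked = fragment(b"confidential payload", 4)
    assert defragment(fragments, masked) == b"confidential payload"
```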
Trouessin, Gilles. "Traitements fiables de données confidentielles par fragmentation-redondance-dissémination". Toulouse 3, 1991. http://www.theses.fr/1991TOU30260.
Lin, Ping. "Commande adaptative et régulation automatique d'une unité de broyage du cru en cimenterie". Lyon, INSA, 1992. http://www.theses.fr/1992ISAL0003.
In cement manufacturing, the raw material blending process, between the prehomogenization silo and the clinker kiln, is a multivariable system (several raw materials in use) coupled with a considerable time delay due to the X-ray fluorescence analyser. The main goal of blending control is to keep the chemical composition of the raw meal (in terms of composition moduli) close to the standard value and to decrease its variance by using correcting products. The proposed control strategy consists of four parts: (1) a multivariable predictive control system based on an internal model; (2) an adaptive control based on on-line estimation of the chemical composition of the raw materials; (3) an optimization unit to minimize a quality-cost criterion; (4) a self-adjustment function of reference values for the special batch process. In order to improve the control performance, a heuristic adaptive supervision is developed to adjust the regulator parameters at the lower level. The proposed control policy has been applied to cement plants of Lafarge Coppée.
Minier, Josselin. "Fragmentation cognitive/informatique de la musique populaire amplifiée : construction d'un système numérique dirigé par une notion de simulacre cinétique". Paris 1, 2011. http://www.theses.fr/2011PA010678.
Nicolas, Jean-Christophe. "Machines bases de données parallèles : contribution aux problèmes de la fragmentation et de la distribution". Lille 1, 1991. http://www.theses.fr/1991LIL10025.
Qiu, Han. "Une architecture de protection des données efficace basée sur la fragmentation et le cryptage". Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0049.
In this thesis, a completely revisited data protection scheme based on selective encryption is presented. First, this new scheme is agnostic in terms of data format; second, it has a parallel architecture using GPGPU, allowing performance to be at least comparable to full encryption algorithms. Bitmap, as a special uncompressed multimedia format, is addressed as a first use case. The Discrete Cosine Transform (DCT) is the first transformation used to split fragments, protect the data, and store it separately on a local device and on cloud servers. This work largely improves on previously published work on bitmap protection by providing new designs and practical experimentation. A general-purpose graphics processing unit (GPGPU) is exploited as an accelerator to guarantee the efficiency of the computation compared with traditional full encryption algorithms. Then, an agnostic selective encryption based on the lossless Discrete Wavelet Transform (DWT) is presented. This design, with practical experimentation on different hardware configurations, provides a strong level of protection and good performance at the same time, plus flexible storage dispersion schemes. Therefore, our agnostic data protection and transmission solution combining fragmentation, encryption, and dispersion is made available for a wide range of end-user applications. A complete set of security analyses is also carried out to test the level of protection provided.
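To make the selective-protection idea concrete, here is a minimal Python sketch of a wavelet-style split: a one-level Haar transform separates a signal into an approximation fragment (small, information-rich, to be encrypted or kept on the local device) and a detail fragment (to be dispersed to remote storage). It is a toy stand-in for the lossless DWT design described in the abstract, and all names are assumptions.

```python
# Toy illustration of a transform-based split (a float Haar transform standing in
# for the lossless DWT of the abstract): the approximation fragment concentrates
# most of the information and would be encrypted or kept locally, while the
# detail fragment can be dispersed to remote storage providers.
import numpy as np

def haar_split(signal: np.ndarray):
    """One-level Haar transform; assumes an even-length signal."""
    even, odd = signal[0::2], signal[1::2]
    approx = (even + odd) / np.sqrt(2)  # low-frequency fragment (to protect)
    detail = (even - odd) / np.sqrt(2)  # high-frequency fragment (to disperse)
    return approx, detail

def haar_merge(approx: np.ndarray, detail: np.ndarray) -> np.ndarray:
    """Invert the split by interleaving the reconstructed even/odd samples."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

if __name__ == "__main__":
    row = np.arange(16, dtype=float)            # stand-in for one image row
    a, d = haar_split(row)
    assert np.allclose(haar_merge(a, d), row)   # reconstruction is exact
```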
Cherrueau, Ronan-Alexandre. "Un langage de composition des techniques de sécurité pour préserver la vie privée dans le nuage". Thesis, Nantes, Ecole des Mines, 2016. http://www.theses.fr/2016EMNA0233/document.
A cloud service can use security techniques to ensure information privacy. These techniques protect privacy by converting the client's personal data into unintelligible text. But they can also cause the loss of some functionalities of the service. For instance, a symmetric-key cipher protects privacy by converting readable personal data into unreadable text. However, this causes the loss of computational functionalities on this data. This thesis claims that a cloud service has to compose security techniques to ensure information privacy without the loss of functionalities. This claim is based on the study of the composition of three techniques: symmetric cipher, vertical data fragmentation, and client-side computation. This study shows that the composition makes the service privacy preserving, but makes its formulation overwhelming. In response, the thesis offers a new language for writing cloud services that enforce information privacy using the composition of security techniques. This language comes with a set of algebraic laws to systematically transform a local service without protection into its cloud equivalent protected by composition. An Idris implementation harnesses Idris's expressive type system to ensure the correct composition of security techniques. Furthermore, an encoding translates the language into ProVerif, a model checker for automated reasoning about the security properties found in cryptographic protocols. This translation checks that the service preserves the privacy of its clients.
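The composition the abstract describes can be illustrated with a few lines of Python: vertical fragmentation splits the columns across two providers, a toy deterministic cipher protects the identifying column, and the final computation is done client-side after joining the fragments. This is only a sketch of the general pattern, not the thesis's composition language or its Idris implementation; the table, key, and helper names are invented for the example.

```python
# Minimal sketch composing the three techniques mentioned in the abstract:
# vertical fragmentation (columns split across two providers), a symmetric-style
# toy cipher on the identifying column, and client-side computation over the
# recombined rows. Names and data are assumptions for illustration only.
from hashlib import sha256

SECRET = b"client-only-key"

def toy_encrypt(value: str) -> str:
    # Deterministic keyed hash so equality is preserved; a real deployment
    # would use an actual symmetric cipher.
    return sha256(SECRET + value.encode()).hexdigest()

# Original table, known only to the client.
meetings = [
    {"id": 1, "person": "alice", "duration_min": 30},
    {"id": 2, "person": "bob",   "duration_min": 45},
    {"id": 3, "person": "alice", "duration_min": 15},
]

# Vertical fragmentation: provider A holds protected identities, provider B durations.
provider_a = [{"id": r["id"], "person": toy_encrypt(r["person"])} for r in meetings]
provider_b = [{"id": r["id"], "duration_min": r["duration_min"]} for r in meetings]

# Client-side computation: total meeting time for "alice", joining fragments locally.
target = toy_encrypt("alice")
ids = {r["id"] for r in provider_a if r["person"] == target}
total = sum(r["duration_min"] for r in provider_b if r["id"] in ids)
print(total)  # 45
```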
Lecler, Philippe. "Une approche de la programmation des systèmes distribués fondée sur la fragmentation des données et des calculs et sa mise en oeuvre dans le système GOTHIC". Rennes 1, 1989. http://www.theses.fr/1989REN10103.
Benkrid, Soumia. "Le déploiement, une phase à part entière dans le cycle de vie des entrepôts de données : application aux plateformes parallèles". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2014. http://www.theses.fr/2014ESMA0027/document.
Designing a parallel data warehouse consists of choosing the hardware architecture, fragmenting the data warehouse schema, allocating the generated fragments, replicating fragments to ensure high system performance, and defining the processing and load-balancing strategy. The major drawback of this design cycle is its ignorance of the interdependence between the subproblems related to the design of a PDW and the use of heterogeneous metrics to achieve the same goal. Our first proposal defines an analytical cost model for the parallel processing of OLAP queries in a cluster environment. Our second takes into account the interdependence existing between fragmentation and allocation. In this context, we propose a new approach to design a PDW on a cluster machine. During the fragmentation process, our approach determines whether the generated fragmentation pattern is relevant to the allocation process or not. The results are very encouraging, and validation is done on Teradata. For our third proposal, we present a design method which is an extension of our work. In this phase, an original replication method based on fuzzy logic is integrated.
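The interplay between fragmentation and allocation can be sketched with a toy greedy allocator: given the fragment sizes produced by one candidate fragmentation scheme, it places fragments on cluster nodes and reports a load-imbalance score that a designer could use to discard schemes that allocate poorly. The metric and numbers below are assumptions for illustration, not the analytical cost model of the thesis.

```python
# Toy illustration of the fragmentation/allocation interdependence: allocate the
# fragments of a candidate fragmentation scheme greedily across cluster nodes and
# measure how well balanced the result is. The skew metric is an assumption.

def allocate(fragment_sizes, n_nodes):
    """Greedy allocation: place each fragment on the currently lightest node."""
    loads = [0] * n_nodes
    placement = {}
    for frag_id, size in sorted(enumerate(fragment_sizes), key=lambda x: -x[1]):
        node = loads.index(min(loads))
        loads[node] += size
        placement[frag_id] = node
    return placement, loads

def imbalance(loads):
    """Simple skew metric: max load divided by mean load (1.0 = perfectly balanced)."""
    return max(loads) / (sum(loads) / len(loads))

if __name__ == "__main__":
    candidate = [120, 80, 80, 60, 40, 20]   # fragment sizes from one fragmentation scheme
    placement, loads = allocate(candidate, n_nodes=3)
    print(placement, loads, round(imbalance(loads), 2))
```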
Benallal, Mohammed Wehbi. "Contributions à la gestion des processus métier configurables : une approche orientée base de connaissances, fragmentation, et mesure d'entropies". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1121.
We propose an approach for managing configurable Business Processes (cBP) that offers a consolidated view over these processes' variants. A variant is the result of adjusting a BP in response to functional and/or structural needs. The approach uses a knowledge base to track the specificities of each variant, which is represented as a configurable Process Structure Tree (cPST). An implementation of the approach, which consists of generating a cPST from a cBP, is proposed. We apply this approach to compute configurable business process complexity and to demonstrate its advantage for improving cBP quality.
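As a rough illustration of measuring process complexity over a tree-shaped representation (in the spirit of, but not identical to, the cPST and entropy measures of the thesis), the sketch below encodes a small process tree as nested tuples and sums a Shannon-entropy term over its choice points; the encoding and the metric are assumptions.

```python
# Toy illustration of an entropy-style complexity measure over a process
# structure tree. The tree encoding and the metric are assumptions for
# illustration only, not the cPST construction or the entropy measures
# defined in the thesis.
import math

# A process tree as nested tuples: (operator, child, child, ...); leaves are task names.
process_tree = ("SEQ",
                "receive_order",
                ("XOR", "manual_check", "auto_check", "skip_check"),
                ("AND", "ship", "invoice"))

def branching_entropy(node) -> float:
    """Sum of Shannon entropies of the choice points, assuming uniform branching."""
    if isinstance(node, str):
        return 0.0
    operator, *children = node
    local = math.log2(len(children)) if operator == "XOR" else 0.0
    return local + sum(branching_entropy(c) for c in children)

print(branching_entropy(process_tree))  # log2(3) ≈ 1.585 bits for the single XOR
```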
Roland, Jérémy. "Dynamique et mécanique de la fragmentation de filaments d'actine par l'ADF/cofiline : comparaison entre expériences et modèles". PhD thesis, Université de Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00566088.
Khelil, Abdallah. "Gestion et optimisation des données massives issues du Web Combining graph exploration and fragmentation for scalable rdf query processing Should We Be Afraid of Querying Billions of Triples in a Graph-Based Centralized System? EXGRAF : Exploration et Fragmentation de Graphes au Service du Traitement Scalable de Requêtes RDF". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2020. http://www.theses.fr/2020ESMA0009.
Big Data represents a challenge not only for the socio-economic world but also for scientific research. Indeed, as has been pointed out in several scientific articles and strategic reports, modern computer applications are facing new problems and issues that are mainly related to the storage and exploitation of data generated by modern observation and simulation instruments. The management of such data represents a real bottleneck, which has the effect of slowing down the exploitation of the various data collected not only in the framework of international scientific programs but also by companies, which rely increasingly on large-scale data analysis. Much of this data is published today on the Web. Indeed, we are witnessing an evolution of the traditional web, designed basically to manage documents, towards a web of data that offers mechanisms for querying semantic information. Several data models have been proposed to represent this information on the Web. The most important is the Resource Description Framework (RDF), which provides a simple and abstract representation of knowledge about resources on the Web. Each semantic Web fact can be encoded as an RDF triple. In order to explore and query structured information expressed in RDF, several query languages have been proposed over the years. In 2008, SPARQL became the official W3C Recommendation language for querying RDF data. The need to efficiently manage and query RDF data has led to the development of new systems specifically designed to process this data format. These approaches can be categorized as centralized, relying on a single machine to manage RDF data, and distributed, combining multiple machines connected by a computer network. Some of these approaches are based on an existing data management system, such as Virtuoso and Jena; others rely on an approach specifically designed for the management of RDF triples, such as GRIN, RDF3X, and gStore. With the evolution of RDF datasets (e.g. DBPedia) and SPARQL, most systems have become obsolete and/or inefficient. For example, no existing centralized system is able to manage 1 billion triples provided under the WatDiv benchmark. Distributed systems could, under certain conditions, improve on this point, but at the cost of performance degradation. In this PhD thesis, we propose the centralized system "RDF_QDAG", which finds a good compromise between scalability and performance. We propose to combine physical data fragmentation and data graph exploration. "RDF_QDAG" supports multiple types of queries based not only on basic graph patterns but also incorporating filters based on regular expressions and aggregation and sorting functions. "RDF_QDAG" relies on the Volcano execution model, which allows controlling the main memory, avoiding any overflow even if the hardware configuration is limited. To the best of our knowledge, "RDF_QDAG" is the only centralized system that offers good performance when managing several billion triples. We compared this system with other systems that represent the state of the art in RDF data management: a relational approach (Virtuoso), a graph-based approach (g-Store), an intensive indexing approach (RDF-3X), and two parallel approaches (CliqueSquare and g-Store-D). "RDF_QDAG" surpasses existing systems when it comes to ensuring both scalability and performance.
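The combination of fragmentation and graph exploration can be illustrated with a tiny in-memory example: triples are fragmented by predicate, and a basic graph pattern is evaluated by scanning only the fragments its patterns touch. This toy code shows the principle only; it is not the RDF_QDAG system, its fragmentation strategy, or its Volcano-based engine.

```python
# Toy triple store: naive fragmentation by predicate plus basic graph pattern
# evaluation that explores only the relevant fragments. Data and names are
# assumptions for illustration.
from collections import defaultdict

triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "worksAt", "ENSMA"),
    ("carol", "worksAt", "ENSMA"),
]

# Fragmentation: one fragment per predicate.
fragments = defaultdict(list)
for s, p, o in triples:
    fragments[p].append((s, o))

def match_bgp(patterns):
    """Evaluate a basic graph pattern; '?x'-style terms are variables."""
    bindings = [{}]
    for s, p, o in patterns:                       # only fragment p is scanned
        new_bindings = []
        for b in bindings:
            for subj, obj in fragments.get(p, []):
                row = dict(b)
                ok = True
                for term, value in ((s, subj), (o, obj)):
                    if term.startswith("?"):
                        if row.setdefault(term, value) != value:
                            ok = False
                    elif term != value:
                        ok = False
                if ok:
                    new_bindings.append(row)
        bindings = new_bindings
    return bindings

# Who works at ENSMA and knows someone?
print(match_bgp([("?x", "worksAt", "ENSMA"), ("?x", "knows", "?y")]))
# -> [{'?x': 'alice', '?y': 'bob'}]
```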
Galicia, Auyón Jorge Armando. "Revisiting Data Partitioning for Scalable RDF Graph Processing Combining Graph Exploration and Fragmentation for RDF Processing Query Optimization for Large Scale Clustered RDF Data RDFPart-Suite: Bridging Physical and Logical RDF Partitioning. Reverse Partitioning for SPARQL Queries: Principles and Performance Analysis. Should We Be Afraid of Querying Billions of Triples in a Graph-Based Centralized System? EXGRAF: Exploration et Fragmentation de Graphes au Service du Traitement Scalable de Requêtes RDF". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2021. http://www.theses.fr/2021ESMA0001.
The Resource Description Framework (RDF) and SPARQL are very popular graph-based standards initially designed to represent and query information on the Web. The flexibility offered by RDF motivated its use in other domains, and today RDF datasets are great information sources. They gather billions of triples in Knowledge Graphs that must be stored and efficiently exploited. The first generation of RDF systems was built on top of traditional relational databases. Unfortunately, the performance of these systems degrades rapidly, as the relational model is not suitable for handling RDF data, which is inherently represented as a graph. Native and distributed RDF systems seek to overcome this limitation. The former mainly use indexing as an optimization strategy to speed up queries. Distributed and parallel RDF systems resort to data partitioning. The logical representation of the database is crucial to design data partitions in the relational model. The logical layer defining the explicit schema of the database provides a degree of comfort to database designers. It lets them choose manually or automatically (through advisors) the tables and attributes to be partitioned. Besides, it allows the core partitioning concepts to remain constant regardless of the database management system. This design scheme is no longer valid for RDF databases, essentially because the RDF model does not explicitly enforce a schema, since RDF data is mostly implicitly structured. Thus, the logical layer is inexistent and data partitioning depends strongly on the physical implementation of the triples on disk. This situation leads to different partitioning logics depending on the target system, which is quite different from the relational model's perspective. In this thesis, we promote the novel idea of performing data partitioning at the logical level in RDF databases. Thereby, we first process the RDF data graph to support logical entity-based partitioning. After this preparation, we present a partitioning framework built upon these logical structures. This framework is accompanied by data fragmentation, allocation, and distribution procedures. This framework was incorporated into a centralized (RDF_QDAG) and a distributed (gStoreD) triple store. We conducted several experiments that confirmed the feasibility of integrating our framework into existing systems, improving their performance for certain queries. Finally, we designed a set of RDF data partitioning management tools, including a data definition language (DDL) and an automatic partitioning wizard.
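A simplified Python sketch of logical entity-based partitioning: subjects are grouped by their characteristic set (the set of predicates they use), yielding implicit entities that a partitioning framework could then fragment and allocate. The grouping rule and data are assumptions for illustration, not the framework, DDL, or wizard of the thesis.

```python
# Sketch of grouping RDF subjects into logical entities by characteristic set.
# A simplified illustration of entity-based logical partitioning, not the
# thesis's actual framework.
from collections import defaultdict

triples = [
    ("alice", "name", "Alice"), ("alice", "worksAt", "ENSMA"),
    ("bob",   "name", "Bob"),   ("bob",   "worksAt", "ENSMA"),
    ("paper1", "title", "RDF Partitioning"), ("paper1", "author", "alice"),
]

# 1. Collect the predicate set of every subject.
predicates_of = defaultdict(set)
for s, p, _ in triples:
    predicates_of[s].add(p)

# 2. Subjects sharing a characteristic set form one logical entity (partition).
partitions = defaultdict(list)
for subject, preds in predicates_of.items():
    partitions[frozenset(preds)].append(subject)

for charset, subjects in partitions.items():
    print(sorted(charset), "->", subjects)
# ['name', 'worksAt'] -> ['alice', 'bob']
# ['author', 'title'] -> ['paper1']
```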
Qaddah, Baraa. "Modélisation numérique de la dynamique et de l'évolution thermique d'une goutte métallique en chute libre dans un milieu visqueux". Thesis, Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0139.
During the last stages of planetary accretion, impacts between differentiated protoplanets considerably influenced the thermochemical conditions of future telluric planets. The result of these impacts is a two-phase flow. After each impact and the formation of a magma ocean, the metallic phase of the impactor underwent strong deformation and fragmentation processes before reaching the bottom of the magma ocean. The challenges of this thesis are to determine the role of the viscosity ratio between the two phases and of the initial drop size on the dynamics, fragmentation, and thermochemical evolution of the metallic drop. To do so, we develop numerical models using the Comsol Multiphysics software. We determine the fragmentation modes as a function of the Reynolds and Weber numbers and of the viscosity ratio. We then compare the fragmentation time and distance with previous studies and propose scaling laws for the maximum stable radius of the drop and the critical Weber number as functions of the magma ocean viscosity and the viscosity ratio, respectively. Then we estimate the potential thermochemical exchanges between the drop and the magma ocean by applying a geophysical model based on our numerical results. Finally, we study the thermal evolution of a drop in a magma ocean and the influence of a temperature-dependent viscosity on the dynamics. We propose scaling laws for the thermal equilibration time and length and for the Nusselt number as functions of the Peclet number.
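For reference, the dimensionless numbers named in the abstract are commonly defined as follows (generic textbook forms; the exact reference scales used in the thesis may differ):

```latex
% Generic definitions (assumed standard forms, not necessarily the thesis's exact scalings)
\[
  \mathrm{Re} = \frac{\rho_m\, U\, d}{\mu_m}, \qquad
  \mathrm{We} = \frac{\rho_m\, U^{2}\, d}{\sigma}, \qquad
  \mathrm{Pe} = \frac{U\, d}{\kappa},
\]
% where $\rho_m$ and $\mu_m$ are the density and dynamic viscosity of the ambient
% magma ocean, $U$ the fall speed of the metal drop, $d$ its diameter,
% $\sigma$ the interfacial tension, and $\kappa$ the thermal diffusivity.
```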
Boukhalfa, Kamel. "De la conception physique aux outils d'administration et de tuning des entrepôts de données". PhD thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aéronautique, 2009. http://tel.archives-ouvertes.fr/tel-00410411.
Keywords: physical design, tuning, optimization techniques, horizontal fragmentation, bitmap join indexes.
Bouriquet, Bertrand. "Relaxation en forme et multifragmentation nucléaire". PhD thesis, Université de Caen, 2001. http://tel.archives-ouvertes.fr/tel-00003803.
Charlton, Martin. "Fragmentation de graphes et applications au génie logiciel". Thèse, 2005. http://constellation.uqac.ca/555/1/24584500.pdf.