Dissertations / Theses on the topic 'Stockage des données'
Consult the top 50 dissertations / theses for your research on the topic 'Stockage des données.'
Jemel, Mayssa. "Stockage des données locales : sécurité et disponibilité." Electronic Thesis or Diss., Paris, ENST, 2016. http://www.theses.fr/2016ENST0053.
Due to technological advancements, people constantly manipulate multiple connected and smart devices in their daily lives. Cross-device data management therefore remains the concern of several academic and industrial studies. The proposed frameworks are mainly based on proprietary solutions, also called private or closed solutions. This strategy has shown its deficiencies regarding security, cost, developer support and customization. In recent years, however, the Web has undergone a revolution in the development of standardized solutions, triggered by the significant improvements of HTML5. With this new version, innovative features and APIs are introduced to follow business and user requirements. The main purpose is to provide the web developer with a vendor-neutral language that enables the implementation of competitive applications at lower cost. These applications are tied neither to the devices used nor to the installed software. The main motivation of this PhD thesis is to migrate towards the adoption of standardized solutions to ensure secure and reliable cross-device data management on both the client and server sides. On the server side, a standardized Cloud Digital Safe following the AFNOR specification has already been proposed, whereas no standardized solution yet exists on the client side. This thesis focuses on two main areas: 1) the proposal of a standardized Client Digital Safe where user data are stored locally, and 2) the synchronization of these data between the Client and the Cloud Digital Safe and between the different user devices. We contribute to this research area in three ways. First, we propose a Client Digital Safe based on the HTML5 Local Storage APIs, and we start by strengthening the security of these APIs for use by our Client Digital Safe. Second, we propose an efficient synchronization protocol called SyncDS, with minimum resource consumption, that ensures the synchronization of user data between the Client and the Cloud Digital Safe. Finally, we address security concerns, in particular access control on data sharing, following the Digital Safe requirements.
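For intuition, the kind of client/cloud synchronization described above can be pictured as comparing per-item version numbers and transferring only what changed. The sketch below is a generic, hypothetical illustration (names and structure are ours, not the actual SyncDS protocol):

```python
# Generic version-comparison sync sketch (hypothetical; not SyncDS itself).
# Each side keeps {item: version}; only items whose versions differ move.
def diff_versions(local: dict[str, int], remote: dict[str, int]):
    push = [k for k, v in local.items() if v > remote.get(k, -1)]
    pull = [k for k, v in remote.items() if v > local.get(k, -1)]
    return push, pull  # push: client -> cloud, pull: cloud -> client

client = {"note.txt": 3, "photo.jpg": 1}
cloud = {"note.txt": 2, "photo.jpg": 1, "report.doc": 1}
print(diff_versions(client, cloud))  # (['note.txt'], ['report.doc'])
```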
Bouabache, Fatiha. "Stockage fiable des données dans les grilles, application au stockage des images de checkpoint." Paris 11, 2010. http://www.theses.fr/2010PA112329.
Rollback/recovery solutions rely on checkpoint storage reliability: after a failure, if the checkpoint images are not available, the rollback operation fails. The goal of this thesis is to propose a reliable and efficient checkpoint storage service. By reliable, we mean that whatever the failure scenario, as long as it respects the assumptions made by the algorithms, the checkpoint images remain available. By efficient, we mean minimizing the time required to transfer and store the checkpoint images, which minimizes the overall execution time of the checkpoint waves. To ensure these two points (reliability and efficiency), we propose: 1. A new coordinated checkpoint protocol that tolerates checkpoint server failures and cluster failures, and ensures checkpoint storage reliability in a grid environment; 2. A distributed storage service structured as a three-layer architecture: a) the replication layer: to ensure checkpoint storage reliability, we propose to replicate the images over the network; in this direction, we propose two hierarchical replication strategies adapted to the considered architecture that exploit the locality of checkpoint images in order to minimize inter-cluster communication; b) the scheduling layer: at this level we work on storage efficiency by reducing the data transfer time, proposing an algorithm based on uniform random sampling of possible schedules; c) the scheduling engine: at this layer, we develop a tool that implements the scheduling plan computed in the scheduling layer.
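To make the locality idea concrete, here is a minimal sketch of hierarchical replica placement that keeps most copies inside the writer's cluster and sends one off-cluster to survive a whole-cluster failure. It illustrates the general principle only, not the thesis's actual strategies:

```python
# Locality-aware placement sketch: n_local copies stay in the local
# cluster (cheap transfers); n_remote copies go to other clusters so a
# full cluster failure does not lose the checkpoint image.
def place_replicas(local_cluster, remote_clusters, n_local=2, n_remote=1):
    placement = local_cluster[:n_local]
    for cluster in remote_clusters[:n_remote]:
        placement.append(cluster[0])  # one copy per chosen remote cluster
    return placement

local = ["c0-n1", "c0-n2", "c0-n3"]
remotes = [["c1-n1", "c1-n2"], ["c2-n1"]]
print(place_replicas(local, remotes))  # ['c0-n1', 'c0-n2', 'c1-n1']
```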
Devigne, Julien. "Protocoles de re-chiffrement pour le stockage de données." Caen, 2013. http://www.theses.fr/2013CAEN2032.
Privacy is one of the main issues of our modern-day society, in which the Internet is omnipresent. In this thesis, we study techniques for realising privacy-preserving cloud storage. Specifically, we focus on protecting stored data while allowing their owner to share them with people of his choice. Proxy re-encryption, one of the primitives offered by cryptography, is the solution we consider. First, we give a definition of a proxy re-encryption system that unifies all existing conventional models. We also describe the usual characteristics that this primitive may present and provide its security model. Then, we focus more precisely on certain specific schemes in order to improve their security. In this vein, we present a method that turns a scheme secure against replayable chosen-ciphertext attacks into one secure against chosen-ciphertext attacks. We also study schemes based on Hash ElGamal encryption and propose modifications to achieve better security. Finally, in order to obtain the most functional cloud storage, we propose two new models. The first, which we call combined proxy re-encryption, offers dynamic access rights. The second, which we call selective proxy re-encryption, enables more fine-grained access control than that offered by conditional proxy re-encryption.
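For intuition about what a proxy re-encryption scheme does, here is a toy ElGamal-based example in the style of the classic Blaze-Bleumer-Strauss construction. It is not one of the schemes studied in the thesis, and the parameters are far too small for real use:

```python
# Toy BBS98-style proxy re-encryption over a tiny safe-prime group.
# Illustrative only: tiny parameters, no padding, bidirectional re-key.
import secrets

p = 1019              # safe prime: p = 2q + 1 with q = 509 prime
q = (p - 1) // 2
g = 4                 # generates the order-q subgroup of squares mod p

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def encrypt(pk, m):   # m in 1..p-1
    r = secrets.randbelow(q - 1) + 1
    return (m * pow(g, r, p) % p, pow(pk, r, p))  # (m * g^r, g^(a*r))

def decrypt(sk, c):
    c1, c2 = c
    g_r = pow(c2, pow(sk, -1, q), p)              # recover g^r
    return c1 * pow(g_r, -1, p) % p

def rekey(sk_a, sk_b):        # b/a mod q; needs both secrets (bidirectional)
    return sk_b * pow(sk_a, -1, q) % q

def reencrypt(rk, c):         # proxy maps g^(a*r) to g^(b*r), never sees m
    c1, c2 = c
    return (c1, pow(c2, rk, p))

a, pk_a = keygen()
b, pk_b = keygen()
c = encrypt(pk_a, 42)
assert decrypt(b, reencrypt(rekey(a, b), c)) == 42  # Bob reads Alice's data
```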
Khelil, Amar. "Elaboration d'un système de stockage et exploitation de données pluviométriques." Lyon, INSA, 1985. http://www.theses.fr/1985ISAL0034.
The Lyon District Urban Area (COURLY) may be described, from a hydrological point of view, as a 600 km² area equipped with a sewerage system comprising an estimated 2,000 km of pipes. Due to the complexity of the area's sewerage network, it must be controlled by an accurate and reliable calculation system to avoid any negative consequences of its operation. The present computer system, SERAIL, allows an overall simulation of the functioning of the drainage/sewerage system. This model requires accurate rainfall information which was not previously available, so a network of 30 rain gauges (with in situ cassette recording) was set up within the Urban District Area in 1983. This research involved three steps: 1) installing the network; 2) building a data checking and storage system; 3) analysing the data. The distinctive part of this work is the data analysis system, which makes it easy to extract and analyse any rainfall event of interest to the hydrologist. Two aims were defined: 1) to get a better understanding of the phenomena (point representations); 2) to build models. To achieve the second aim, it was necessary to consider the fitting of the proposed models and their limits, which led to the development of several other programmes for checking and comparison. As an example, a complete analysis of a rainfall event is given, with comments and conclusions.
Jule, Alan. "Etude des codes en graphes pour le stockage de données." Thesis, Cergy-Pontoise, 2014. http://www.theses.fr/2014CERG0739.
For two decades, the digital revolution has been amplifying. The spread of digital solutions, together with the improving quality of these products, drives growth in the amount of data stored. The cost per byte reveals that the evolution of hardware storage solutions cannot keep up with this expansion, so data storage solutions need deep improvement. This is feasible by increasing the storage network size and by reducing data duplication in the data center. In this thesis, we introduce a new algorithm that combines sparse graph code construction and node allocation. This algorithm may achieve the highest performance of MDS codes in terms of the ratio R between the number of parity disks and the number of failures that can be simultaneously reconstructed. In addition, encoding and decoding with sparse graph codes helps lower the complexity. With this algorithm, coding can be generalized across the data center in order to reduce the number of copies of the original data. We also study Spatially-Coupled LDPC (SC-LDPC) codes, which are known to have optimal asymptotic performance over the binary erasure channel, to anticipate the decoding behavior of these codes in distributed storage applications. It is usually necessary to compromise between different parameters in a distributed storage system. To complete the state of the art, we include two theoretical studies. The first deals with the computational complexity of data updates and determines whether the linear codes used for data storage are update-efficient or not. In the second, we examine the impact on the network load when the code parameters are changed. This can happen when the file status changes (from hot to cold, for example) or when the size of the network is modified by adding disks. All these studies, combined with the new algorithm for sparse graph codes, could lead to the construction of new flexible and dynamic networks with low encoding and decoding complexities.
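To see what erasure codes buy in a storage network, here is the simplest possible instance: a single XOR parity block (RAID-4 style) that lets the system rebuild any one lost data block. MDS codes generalize this to k simultaneous failures with k parity blocks; the sparse-graph codes studied in the thesis trade a little of that optimality for much cheaper encoding and decoding. A minimal sketch:

```python
# Single XOR parity: any one lost block is the XOR of all the others.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: [x ^ y for x, y in zip(a, b)], blocks))

data = [b"disk0---", b"disk1---", b"disk2---"]   # equal-sized blocks
parity = xor_blocks(data)

# Disk 1 fails: rebuild its block from the survivors plus the parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```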
Obame, Meye Pierre. "Sûreté de fonctionnement dans le nuage de stockage." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S091/document.
The quantity of data in the world is steadily increasing, challenging storage system providers to find ways to handle data efficiently in terms of dependability and in a cost-effective manner. We have been interested in cloud storage, a growing trend in data storage solutions. For instance, the International Data Corporation (IDC) predicts that by 2020, nearly 40% of the data in the world will be stored or processed in a cloud. This thesis addresses challenges around data access latency and dependability in cloud storage. We propose Mistore, a distributed storage system designed to ensure data availability, durability, and low access latency by leveraging the Digital Subscriber Line (xDSL) infrastructure of an Internet Service Provider (ISP). Mistore uses the available storage resources of a large number of home gateways and Points of Presence for content storage and caching. Mistore also targets data consistency by providing multiple types of consistency criteria on content and a versioning system. We also considered data security and confidentiality in the context of storage systems applying data deduplication, which is becoming one of the most popular techniques for reducing storage cost, and we designed a two-phase data deduplication scheme that is secure against malicious clients while remaining efficient in terms of network bandwidth and storage space savings.
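As background, the storage-saving idea behind deduplication can be sketched in a few lines: chunks are addressed by a cryptographic fingerprint, so identical content is stored only once. This minimal sketch shows the baseline mechanism only; the thesis's two-phase protocol adds a security layer against malicious clients that is omitted here:

```python
# Content-addressed deduplication sketch: identical chunks share storage.
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}    # fingerprint -> chunk bytes
        self.refcount = {}  # fingerprint -> number of references

    def put(self, chunk: bytes) -> str:
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in self.chunks:   # store each distinct chunk only once
            self.chunks[fp] = chunk
        self.refcount[fp] = self.refcount.get(fp, 0) + 1
        return fp                   # the caller keeps this fingerprint

store = DedupStore()
a = store.put(b"same payload")
b = store.put(b"same payload")      # duplicate: no additional storage used
assert a == b and len(store.chunks) == 1
```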
Secret, Ghislain. "La maintenance des données dans les systèmes de stockage pair à pair." Amiens, 2009. http://www.theses.fr/2009AMIE0111.
Peer-to-peer systems are designed to share resources on the Internet. The independence of the architecture from a centralized server gives peer-to-peer networks very high fault tolerance (no single peer is essential to the functioning of the network). This property makes the architecture well suited for permanent, large-scale data storage. However, peer-to-peer systems are characterised by peer volatility: peers connect and disconnect randomly. The challenge is to ensure the persistence of data on a storage medium that is constantly changing. To cope with peer volatility, data redundancy schemes coupled with mechanisms for reconstructing lost data are introduced. But the reconstructions needed to maintain data persistence are not neutral in terms of system load. To investigate the factors that drive up the data maintenance cost, a model of a peer-to-peer storage system was designed, based on an IDA (Information Dispersal Algorithm) redundancy scheme. Built on this model, a simulator was developed, and the system's behaviour with respect to the cost of data regeneration was analysed. Two reconstruction strategies are examined. The first mechanism is based on a threshold on the level of data redundancy; it requires constant monitoring of the data's state. The second strategy limits the number of reconstructions through a quota allocation over a defined period of time. It is psychologically less comfortable because it significantly reduces control over the data's state by abstracting away the threshold mechanism. Based on a stochastic analysis of the strategies, keys are provided for setting the system parameters according to the desired durability target.
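The threshold strategy can be illustrated with a small simulation: monitor how many fragments of a block survive each period and regenerate the block to full redundancy as soon as the count falls below a threshold. The parameters below are illustrative, not those of the thesis:

```python
# Threshold-based maintenance sketch for an IDA-coded block:
# any K of N fragments rebuild the block; rebuild when survivors < THRESHOLD.
import random

K, N = 8, 12           # IDA parameters
THRESHOLD = 10         # regenerate early, before reaching the fatal bound K
P_LOSS = 0.005         # per-fragment loss probability per period

alive, rebuilds = N, 0
for _ in range(1000):
    alive = sum(random.random() > P_LOSS for _ in range(alive))
    if alive < K:
        raise RuntimeError("block lost: fewer than K fragments remain")
    if alive < THRESHOLD:
        alive, rebuilds = N, rebuilds + 1   # reconstruction wave
print("rebuilds triggered:", rebuilds)
```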
Kiefer, Renaud. "Etude et conception d'un système de stockage et d'adressage photonique de données." Université Louis Pasteur (Strasbourg) (1971-2008), 2002. http://www.theses.fr/2002STR13199.
The increase in the speed of microprocessors and the evolution of multimedia and the Internet have created a growing need for data storage solutions. Encouraged by the rapid technological progress of the past decade, this need has grown exponentially. Even if DVD technology satisfies the present data storage demand (about 10 bits/µm²), new applications such as 3D imaging and huge databases need the development of new technology. The objective of this thesis has been to study and design a data storage and addressing system based on holographic memories. This kind of memory shows interesting possibilities for massive-volume data storage (about 100 bits/µm³). The system allows rapid access times (ms), over a large angular bandwidth, to any information stored in the diffractive memory. Analysis of optical memories based on dichromated gelatin has allowed the determination of their domain of use and set the constraints of the addressing system. The originality of the work has been to associate MEMS (integrated micro-mirrors) with an acousto-optic cell. We measured the deformation of the MEMS to evaluate its influence on the reading of the information stored in diffractive memories. Experimental results show the possibility of obtaining an addressing rate of 100 Gbit/s. The reading system's limitations are due to the low oscillation frequency of the MEMS and principally to the low acquisition rate of the CCD camera. The use of high-speed cameras would make it possible to increase the readout rate.
Barrabe, Patrice. "Acquisition et transmission optique de données." Grenoble 1, 1990. http://www.theses.fr/1990GRE10121.
Full textBarkat, Okba. "Utilisation conjointe des ontologies et du contexte pour la conception des systèmes de stockage de données." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0001/document.
We are witnessing an era in which every company is strongly interested in collecting and analyzing data from heterogeneous and varied sources. These sources also have another specificity, namely context awareness. Three complementary problems are identified: (i) resolving the heterogeneity of the sources, (ii) building a decisional integration system, and (iii) taking the context into account in this integration. To solve these problems, this thesis is concerned with the design of contextual applications based on a domain ontology. To do this, we first propose a context model that integrates the main dimensions identified in the literature. Once built, it is linked to the ontology model. This approach increases flexibility in the design of advanced applications. We then present two case studies: (1) the contextualization of semantic data sources, where we extend the OntoBD/OntoQL system to take the context into account, and (2) the design of a contextual data warehouse, where the context model is projected onto the different phases of the design life cycle. To validate our proposal, we present a tool implementing the different phases of the proposed design approach.
Yin, Shaoyi. "Un modèle de stockage et d'indexation pour des données embarquées en mémoire flash." Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0008.
NAND Flash has become the most popular stable storage medium for embedded systems. Efficient storage and indexing techniques are very challenging to design due to the combination of NAND Flash constraints and embedded system constraints. In this thesis, we propose a new model relying on two basic principles: database serialization and database stratification. An indexing technique called PBFilter is presented to illustrate these principles. Analytical and experimental results show that the new approach meets the embedded system requirements very well. The PBFilter technique has been integrated into a complete embedded DBMS engine, PlugDB. PlugDB is used in a real-life application implementing a secure and portable medico-social record. PlugDB can also be seen as a central building block for a global vision named Personal Data Server, whose objective is to manage personal information in a secure, privacy-preserving and user-controlled way.
Gabsi, Nesrine. "Extension et interrogation de résumé de flux de données." Paris, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00613122.
In the last few years, a new environment has emerged in which data have to be collected and processed instantly upon arrival. To handle the large volumes of data associated with this environment, new data processing models and techniques have to be set up; they are referred to as data stream management. Data streams are usually continuous and voluminous, and cannot be stored in their entirety as persistent data. Many research works have addressed this issue, and new systems called DSMSs (Data Stream Management Systems) have appeared. A DSMS evaluates continuous queries on a stream or a window (a finite subset of a stream). These queries have to be specified before the stream's arrival. Nevertheless, in some applications, data may be required after they have expired from the DSMS's memory; in that case, the system cannot answer the queries, as such data are definitively lost. To handle this issue, it is essential to keep a summary of the data stream. Many summarization algorithms have been developed; the choice of a summarization method depends on the kind of data and the problem at hand. In this thesis, we are first interested in the design of a generic summary structure that strikes a compromise between summary construction time and summary quality, and we introduce a new summary approach that is more efficient for querying very old data. Then, we focus on querying methods for these summaries. Our objective is to integrate the generic summary structure into the architecture of existing DSMSs, thereby extending the range of possible queries: queries on old (expired) stream data become possible, as well as queries on new stream data. To this end, we introduce two approaches, which differ in the role played by the summary module when a query is evaluated.
Chiky, Raja. "Résumé de flux de données distribués." Paris, ENST, 2009. https://pastel.hal.science/pastel-00005137.
In this thesis, we consider a distributed computing environment describing a collection of multiple remote sensors that feed a unique central server with numeric, uni-dimensional data streams (also called curves). The central server has limited memory but should be able to compute aggregated values over any subset of the stream sources across a large time horizon including both old and new data. Two approaches are studied to reduce the size of the data: (1) spatial sampling considers only a random sample of the sources observed at every instant; (2) temporal sampling considers all sources but samples the instants to be stored. We propose a new approach for temporally summarizing a set of distributed data streams: from the observation of what happens during a period t-1, we determine a data collection model to apply to the sensors for period t. The computation of aggregates involves statistical inference in the case of spatial sampling and interpolation in the case of temporal sampling. To the best of our knowledge, there is no method for estimating interpolation errors at each timestamp that takes into account curve features such as knowledge of the curve's integral over the period. We propose two approaches: one uses the past of the data curve (naive approach) and the other uses a stochastic process for interpolation (stochastic approach).
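The temporal-sampling side of this trade-off is easy to picture: the server stores a curve only at sampled instants and answers queries at other instants by interpolation. The sketch below shows the baseline linear interpolation that the naive and stochastic approaches refine with error estimates:

```python
# Temporal sampling sketch: store (instant, value) pairs, interpolate between.
def interpolate(samples, t):
    """Linear interpolation between stored (instant, value) pairs."""
    samples = sorted(samples)
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t is outside the sampled horizon")

stored = [(0.0, 10.0), (2.0, 14.0), (4.0, 13.0)]  # sampled instants only
print(interpolate(stored, 3.0))                   # estimated value: 13.5
```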
Fournié, Laurent Henri. "Stockage et manipulation transactionnels dans une base de données déductives à objets : techniques et performances." Versailles-St Quentin en Yvelines, 1998. http://www.theses.fr/1998VERS0017.
Full textKhouri, Selma. "Cycle de vie sémantique de conception de systèmes de stockage et manipulation de données." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2013. http://www.theses.fr/2013ESMA0016/document.
Data warehouses (DWs) have become essential components for companies and organizations. The DW design field has been actively researched in recent years. The main limitation of the proposed approaches is the lack of an overall vision covering the DW design cycle. Our main contribution in this thesis is to propose a method adapted to recent evolutions of the DW design cycle and covering all its phases. These evolutions have given rise to new data storage models and new deployment architectures, which offer different design choices for designers and administrators. The DW literature recognizes the importance of user requirements in the design process, and the importance of accessing and representing data semantics. We propose an ontology-driven design method that valorizes users' requirements by giving them a persistent view in the DW structure. This view allows anticipating diverse design tasks and simulating different design choices. Our second proposal revisits the design cycle by executing the ETL phase (extraction-transformation-loading of data) at the conceptual stage. This proposal allows an à la carte deployment of the DW using the different deployment platforms available.
Romito, Benoit. "Stockage décentralisé adaptatif : autonomie et mobilité des données dans les réseaux pair-à-pair." Caen, 2012. http://www.theses.fr/2012CAEN2072.
Full textLe, Hung-Cuong. "Optimisation d'accès au médium et stockage de données distribuées dans les réseaux de capteurs." Besançon, 2008. http://www.theses.fr/2008BESA2052.
Wireless sensor networks have been a very active research topic for the last few years. This technology can be applied in different domains such as the environment, industry, commerce, medicine, and the military. Depending on the application type, the problems and requirements may differ. In this thesis, we are interested in two major problems: medium access control and distributed data storage. The document is divided into two parts: the first part is a state of the art of existing work, and the second part describes our contribution. In the first contribution, we propose two MAC protocols: the first optimizes the network lifetime of wireless sensor networks for surveillance applications, and the second reduces transmission latency in event-driven wireless sensor networks for critical applications. In the second contribution, we work with several data storage models in wireless sensor networks, focusing on the data-centric storage model. We propose a clustering structure for sensors that improves routing and reduces the number of transmissions in order to prolong the network lifetime.
Crespo-Monteiro, Nicolas. "Photochromisme de films mésoporeux d'oxyde de titane dopés argent appliqué au stockage de données." Thesis, Saint-Etienne, 2012. http://www.theses.fr/2012STET4027.
Silver species adsorbed on colloidal titania have long been known to exhibit photochromism. The color change is due to the reduction of silver salts into metallic nanoparticles under UV illumination and the oxidation of the latter under visible illumination. Recently, a new functionality inducing multicolor photochromism has been reported in nanocomposite materials constituted of silver nanoparticles introduced in a nanoporous titania film. In this dissertation, we study the influence of mesoporous titania matrices with controlled pore sizes on the photochromic behavior of such films. We show that the film's porosity makes it possible to control the particles formed under UV illumination, and that it is possible to bleach the photo-induced patterns with monochromatic visible light, although this type of illumination usually colors the film. Using these materials also appreciably improves the temporal stability of photo-induced inscriptions, which makes it possible to use them as rewritable data carriers. We also demonstrate that, above an intensity threshold, it is possible to inscribe permanent patterns with UV or visible illumination, which allows these films to be used as permanent data carriers. Finally, we show that it is possible to photo-induce, in visible light, highly reflective dichroic colors without prior reduction of the silver salts.
Diallo, Thierno Ahmadou. "GRAPP&S, une solution totalement répartie pour le stockage des données et Services." Thesis, Reims, 2016. http://www.theses.fr/2016REIMS006.
Data storage is a crucial point for application development, particularly for distributed applications. There are many issues related to data storage: ensuring durability, identifying and indexing the data, guaranteeing search and access to the data, and possibly fetching the data for use by applications. There is thus a need to design methods ensuring the effectiveness of all of these properties and operations in heterogeneous environments, in terms of data formats, exchange protocols, and applications. In this thesis, we propose GRAPP&S (Grid APPlication & Services), a multi-scale framework for the unified storage and indexing of both data and services. Indeed, GRAPP&S was designed to support different data formats such as files, streams, or database queries, but also to support access to distant services (web services, cloud services or HPC computing services, for example). GRAPP&S coordinates a hierarchical routing architecture that allows data indexing and access, thanks to a multi-scale network of local-area communities. Transparent access is provided by a network of specialized proxies, which handle most aspects related to data location, request handling, data preprocessing or summarization, as well as data consistency. Finally, we apply GRAPP&S to the particular context of e-learning. Our solution reduces the cost of merging educational resources distributed over several organizations and facilitates their use by learners.
Kumar, Sathiya Prabhu. "Cohérence de données répliquées partagées adaptative pour architectures de stockage à fort degré d’élasticité." Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1035/document.
The main contributions of this thesis are threefold. The first contribution focuses on an efficient way to control stale reads in modern database systems with the help of a new consistency protocol called LibRe, an acronym for Library for Replication. The main goal of the LibRe protocol is to ensure data consistency by contacting a minimum number of replica nodes during read and write operations, with the help of library information. According to the protocol, during write operations each replica node asynchronously updates a registry (library) with the most recent version identifier of the updated data. Forwarding read requests to the right replica node, by consulting this registry, helps control stale reads during read operations. Evaluating data consistency remains challenging both in simulation and in a real-world setup. Hence, we implemented a new simulation toolkit called Simizer that helps evaluate the performance of different consistency policies in a fast and efficient way. We also extended the existing benchmark tool YCSB to evaluate the consistency-latency tradeoff offered by modern database systems. The codebases of the simulator and the extended YCSB are made open source for public access. The performance of the LibRe protocol is validated both via simulation and in a real setup with the help of the extended YCSB. Although modern database systems adapt their consistency guarantees on a per-query basis, anticipating the consistency level of an application query in advance, at development time, remains challenging for application developers. To overcome this limitation, the second contribution of the thesis enables the database system to override the application-defined consistency options at run time with the help of an external input, which could be given by a data administrator or by an external service. The thesis validates the proposed model with a prototype implementation inside the Cassandra distributed storage system. The third contribution focuses on resolving update conflicts. Resolving update conflicts often involves maintaining all possible values and performing the resolution via domain-specific knowledge on the client side, which incurs additional cost in terms of network bandwidth and latency, and considerable complexity. In this thesis, we discuss the motivation and design of a novel data type called the priority register, which implements a domain-specific conflict detection and resolution scheme directly on the database side, while leaving open the option of additional reconciliation at the application level. Our approach uses the notion of an application-defined replacement ordering, and we show that a data type parameterized by such an order can provide an efficient solution for applications that demand domain-specific conflict resolution. We also describe a proof-of-concept implementation of the priority register inside Cassandra. The conclusion and perspectives of the thesis are summarized at the end.
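The registry idea at the heart of LibRe, as described above, can be sketched as follows: writes asynchronously record the latest version of a key and which replicas hold it; reads consult the registry to pick an up-to-date replica. Names and structure here are illustrative, not the thesis's implementation:

```python
# Version-registry sketch: route reads to replicas holding the latest version.
class Registry:
    def __init__(self):
        self.latest = {}  # key -> (version, set of up-to-date replicas)

    def on_write(self, key, version, replica):
        v, nodes = self.latest.get(key, (-1, set()))
        if version > v:
            self.latest[key] = (version, {replica})  # newer version supersedes
        elif version == v:
            nodes.add(replica)                       # another fresh copy

    def fresh_replicas(self, key):
        return self.latest.get(key, (None, set()))[1]

reg = Registry()
reg.on_write("user:42", version=7, replica="node-a")
reg.on_write("user:42", version=8, replica="node-c")
assert reg.fresh_replicas("user:42") == {"node-c"}   # read goes to node-c
```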
Guittenit, Christophe. "Placement d'objets multimédias sur un groupe hétérogène de dispositifs de stockage." Toulouse 3, 2002. http://www.theses.fr/2002TOU30098.
Administering a storage system consists in providing each application with a storage space whose quality of service matches the application's needs: quality expressed in terms of storage capacity, reliability and availability of storage, and performance in access time and throughput (bandwidth). This thesis studies the automatic administration of a heterogeneous storage system dedicated to serving multimedia objects. After studying and classifying the various placement policies designed to exploit this type of storage system, we propose a new data placement scheme, EFLEX (Entrelacement FLEXible, i.e. "flexible interleaving"), that makes it possible to jointly exploit the bandwidth and the storage capacity of the system.
Kerhervé, Brigitte. "Vues relationnelles : implantation dans les systèmes de gestion de bases de données centralisés et répartis." Paris 6, 1986. http://www.theses.fr/1986PA066090.
Full textLaga, Arezki. "Optimisation des performance des logiciels de traitement de données sur les périphériques de stockage SSD." Thesis, Brest, 2018. http://www.theses.fr/2018BRES0087/document.
The growing volume of data poses a real challenge to data processing software such as DBMSs (DataBase Management Systems) and to data storage infrastructure. New technologies have emerged to face the data volume challenge. In this thesis we consider the emerging external memories, namely flash-memory-based storage devices known as SSDs (Solid State Drives). SSD storage devices offer a performance gain compared to traditional magnetic devices. However, SSDs present a new performance model, which calls for I/O cost optimization in data processing and management algorithms. We propose in this thesis an I/O cost model to evaluate data processing algorithms; this model mainly considers SSD I/O performance and the data distribution. We also propose a new external sorting algorithm, MONTRES, which includes optimizations to reduce the I/O cost when the volume of data exceeds the allocated memory space by an order of magnitude. Finally, we propose a data prefetching mechanism, Lynx, which uses a machine learning technique to predict and anticipate future accesses to the external memory.
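For context, the baseline that external sorting algorithms such as MONTRES improve upon is the classic external merge sort: build sorted runs that fit in memory, then merge them. The sketch below shows that baseline only (MONTRES's SSD-specific optimizations are not represented); input lines are assumed newline-terminated:

```python
# Baseline external merge sort: sorted runs on disk, then a k-way merge.
import heapq, os, tempfile

def _dump_run(sorted_lines):
    f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
    f.writelines(sorted_lines)
    f.close()
    return f.name

def external_sort(lines, max_in_memory=100_000):
    runs, buf = [], []
    for line in lines:                    # phase 1: bounded-memory sorted runs
        buf.append(line)
        if len(buf) >= max_in_memory:
            runs.append(_dump_run(sorted(buf)))
            buf = []
    if buf:
        runs.append(_dump_run(sorted(buf)))
    files = [open(r) for r in runs]       # phase 2: k-way merge of all runs
    try:
        yield from heapq.merge(*files)
    finally:
        for f in files:
            f.close()
            os.unlink(f.name)

# Usage: for line in external_sort(open("big.txt")): ...
```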
Borba, Ribeiro Heverson. "L'Exploitation de Codes Fontaines pour un Stockage Persistant des Données dans les Réseaux d'Overlay Structurés." Phd thesis, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00763284.
Carpen-Amarie, Alexandra. "Utilisation de BlobSeer pour le stockage de données dans les Clouds: auto-adaptation, intégration, évaluation." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00696012.
Dandoush, Abdulhalim. "L'Analyse et l'Optimisation des Systèmes de Stockage de Données dans les Réseaux Pair-à-Pair." Phd thesis, Université de Nice Sophia-Antipolis, 2010. http://tel.archives-ouvertes.fr/tel-00470493.
Loisel, Loïc. "Claquage Electrique et Optique d'Allotropes du Carbone : Mécanismes et Applications pour le Stockage de Données." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX021/document.
Today, data storage applications rely mainly on two types of materials: chalcogenides for optical storage (e.g. Blu-ray) and silicon for electronic storage (e.g. Flash memory). While these materials have proven to be the most efficient for widespread applications, both have limitations. Recently, with the rise of graphene, carbon allotropes have been studied both for their intrinsic properties and for applications; graphene and other carbon allotropes have very interesting electronic, thermal and mechanical properties that can make them more efficient than either chalcogenides or silicon for certain applications. In this thesis, we study the feasibility and potential of using carbon as a data storage material. First, we focus on developing optical data storage. We find that both continuous-wave and pulsed lasers can be used to induce reversible phase changes in carbon thin films, thus opening the way toward carbon-based data storage. Along the way, several phenomena are discovered, demonstrated and explained using advanced characterization techniques and thermal modelling. Second, we focus on electronic data storage by developing graphene-based memories that switch reliably between two well-separated resistance states for a large number of cycles. To assess the potential of this new technology, we characterize the switching mechanism and develop an electro-mechanical model enabling prediction of the best attainable performance: these memories would potentially be much faster than Flash memories while playing the same role (non-volatile storage).
Jaiman, Vikas. "Amélioration de la prédictibilité des performances pour les environnements de stockage de données dans les nuages." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM016/document.
Today, users of interactive services such as e-commerce and web search have increasingly high expectations of the performance and responsiveness of these services. Indeed, studies have shown that a slow service (even for short periods of time) directly impacts revenue. Enforcing predictable performance has thus been a priority of major service providers in the last decade. But avoiding latency variability in distributed storage systems is challenging, since end-user requests go through hundreds of servers and performance hiccups at any of these servers may inflate the observed latency. Even in well-provisioned systems, factors such as contention on shared resources or unbalanced load between servers affect request latencies, and in particular the tail (95th and 99th percentiles) of their distribution. The goal of this thesis is to develop mechanisms for reducing latencies and achieving performance predictability in cloud data stores. One effective countermeasure for reducing tail latency is to provide efficient replica selection algorithms, whereby a request attempting to access a given piece of data (also called a value), identified by a unique key, is directed to the presumably best replica. However, under heterogeneous workloads, existing algorithms lead to increased latencies for requests with short execution times that get scheduled behind requests with long execution times. We propose Héron, a replica selection algorithm that supports workloads with heterogeneous request execution times. We evaluate Héron in a cluster of machines using a synthetic dataset inspired by the Facebook dataset as well as two real datasets from Flickr and WikiMedia. Our results show that Héron outperforms state-of-the-art algorithms by reducing both median and tail latency by up to 41%. In the second contribution of the thesis, we focus on multiget workloads to reduce latency in cloud data stores. The challenge is to estimate the bottleneck operations and schedule them on uncoordinated backend servers with minimal overhead. To reach this objective, we present TailX, a task-aware multiget scheduling algorithm that reduces tail latencies under heterogeneous workloads. We implement TailX in Cassandra, a widely used key-value store. The result is improved overall performance of the cloud data store for a wide variety of heterogeneous workloads.
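The replica selection problem above has a simple core: estimate, per replica, when this request would complete, and send it to the minimum. The scoring rule below is a deliberately simple illustration of execution-time-aware selection, not Héron's actual algorithm:

```python
# Cost-aware replica selection sketch: pick the replica with the lowest
# estimated completion time for this request.
def pick_replica(replicas, request_cost):
    # replicas: name -> (pending_work, service_rate)
    # request_cost: estimated work of this request (e.g., from value size)
    def eta(name):
        pending, rate = replicas[name]
        return (pending + request_cost) / rate
    return min(replicas, key=eta)

replicas = {"r1": (120.0, 10.0), "r2": (30.0, 8.0), "r3": (200.0, 20.0)}
print(pick_replica(replicas, request_cost=5.0))  # -> 'r2'
```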
Soyez, Olivier. "Stockage dans les systèmes pair à pair." Phd thesis, Université de Picardie Jules Verne, 2005. http://tel.archives-ouvertes.fr/tel-00011443.
First, we created the Us prototype and designed a file-system-like user interface named UsFS; a data journaling mechanism is included in UsFS. We then turned to data distribution within the Us network, with the goal of minimizing the disturbance caused to each peer by the reconstruction process. Finally, we extended our distribution scheme to handle the dynamic behavior of peers and to take failure correlations into account.
Schaaf, Thomas. "Couplage inversion et changement d'échelle pour l'intégration des données dynamiques dans les modèles de réservoirs pétroliers." Paris 9, 2003. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2003PA090046.
Full textPamba, Capo-Chichi Medetonhan Shambhalla Eugène William. "Conception d’une architecture hiérarchique de réseau de capteurs pour le stockage et la compression de données." Besançon, 2010. http://www.theses.fr/2010BESA2031.
Recent advances in various areas related to micro-electronics, computer science and wireless networks have resulted in the development of new research topics, sensor networks being one of them. The particularity of this research direction is the limited capability of nodes in terms of computation, memory and energy. The purpose of this thesis is the definition of a new hierarchical sensor network architecture, usable in different contexts, that takes the sensors' constraints into account while providing high-quality data, such as multimedia, to end users. We present our hierarchical architecture, with its different nodes and the wireless technologies that connect them. Because data transmission consumes much energy, we have developed two data compression algorithms to optimize channel use by reducing the amount of data transmitted. We also present a solution for storing large amounts of data on nodes by integrating the FAT16 file system under TinyOS-2.x.
Muñoz-Baca, Guadalupe. "Stockage et exploitation de dossiers médicaux multimédia au moyen d'une base de données généralisée : projet TIGRE." Université Joseph Fourier (Grenoble), 1987. http://tel.archives-ouvertes.fr/tel-00324082.
Full textSavel, Paul. "Absorption à deux photons et photochromisme de complexes de ruthénium : application au stockage optique de données." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S090/document.
The development of new technologies, computers and the Internet in recent decades has been accompanied by an increasing demand for information storage media, in particular optical data storage. Conventional media (CD-ROM, Blu-ray, etc.), based on recording at the surface of a disc, have now reached their limits. A new technology under development, based on three-dimensional data storage, is a promising alternative to conventional materials. Such materials must include entities providing photochromic properties (molecular switches) and demonstrated multi-photon absorption. In this thesis, we undertook the synthesis of functional molecules presenting these two characteristics. As a first step, we studied the synthesis and comparative behaviour of homo- and heteroleptic ruthenium complexes with potential for two-photon absorption. We showed that these systems are very active and that they can host a photochromic entity without loss of the two-photon properties. We then studied the properties of original photochromic ruthenium tris-bipyridine complexes containing an azobenzene motif. Metal complexation profoundly changes the photochromism of azobenzene, with kinetics significantly different from those of the free ligands. Finally, we studied the properties of hybrid complexes bearing some ligands for two-photon absorption and others for photochromism; these compounds are active in both areas. We conclude by discussing the optical behaviour of films of these complexes, for which we conducted preliminary tests of SHG signal modulation, with the aim of optimizing all components of the process to determine the potential of these compounds for optical data storage.
Damak, Mohamed. "Un logiciel de stockage, de traitement et de visualisation graphique et cartographique des données géologiques et géotechniques." Phd thesis, Grenoble 1, 1990. http://tel.archives-ouvertes.fr/tel-00785637.
Full textTraboulsi, Salam. "Virtualisation du stockage dans les grilles informatiques : administration et monitoring." Toulouse 3, 2008. http://thesesups.ups-tlse.fr/385/.
Virtualization in grid environments is a recent way to improve platform usage. ViSaGe is a middleware designed to provide the set of functionalities needed for storage virtualization: transparent, reliable remote access to data, and distributed data management. ViSaGe aggregates distributed physical storage resources. However, ensuring data access performance in a grid environment is a major issue, as large amounts of data are stored and constantly accessed, directly affecting task execution times. In particular, the placement and selection of replicated data are made difficult by the dynamic nature of grid environments, i.e., the workload variations of grid nodes. These variations reflect the state of the system's resources (CPU, disks and networks) and are mainly perceived through a monitoring system. Several monitoring systems exist in the literature, monitoring system resource consumption and applications, but none of them presents all the characteristics pertinent to ViSaGe, which needs a system that analyzes node workloads at runtime to improve data storage management. Therefore, we propose ViSaGe's administration and monitoring service, Admon. We show Admon's efficiency in dynamically placing data according to resource usage, ensuring the best performance while limiting the monitoring overhead.
Marcu, Ovidiu-Cristian. "KerA : Un Système Unifié d'Ingestion et de Stockage pour le Traitement Efficace du Big Data." Thesis, Rennes, INSA, 2018. http://www.theses.fr/2018ISAR0028/document.
Big Data is the new natural resource. Current state-of-the-art Big Data analytics architectures are built on top of a three-layer stack: data streams are first acquired by the ingestion layer (e.g., Kafka) and then flow through the processing layer (e.g., Flink), which relies on the storage layer (e.g., HDFS) for storing aggregated data or for archiving streams for later processing. Unfortunately, in spite of the potential benefits brought by specialized layers (e.g., simplified implementation), moving large quantities of data through specialized layers is not efficient: instead, data should be acquired, processed and stored while minimizing the number of copies. This dissertation argues that a plausible path toward alleviating these limitations is the careful design and implementation of a unified architecture for stream ingestion and storage, which can optimize the processing of Big Data applications. This approach minimizes data movement within the analytics architecture, ultimately leading to better-utilized resources. We identify a set of requirements for a dedicated stream ingestion/storage engine. We explain the impact of different Big Data architectural choices on end-to-end performance. We propose a set of design principles for a scalable, unified architecture for data ingestion and storage. We implement and evaluate the KerA prototype with the goal of efficiently handling diverse access patterns: low-latency access to streams and/or high-throughput access to streams and/or objects.
Moreira, José. "Un modèle d'approximation pour la représentation du mouvement dans les bases de données spatiales." Paris, ENST, 2001. http://www.theses.fr/2001ENST0016.
Full textKerkad, Amira. "L'interaction au service de l'optimisation à grande échelle des entrepôts de données relationnels." Phd thesis, ISAE-ENSMA Ecole Nationale Supérieure de Mécanique et d'Aérotechique - Poitiers, 2013. http://tel.archives-ouvertes.fr/tel-00954469.
Full textLachaize, Renaud. "Un canevas logiciel pour la construction de systèmes de stockage reconfigurables pour grappes de machines." Phd thesis, Grenoble INPG, 2005. http://tel.archives-ouvertes.fr/tel-00010198.
Full textCutillo, Leucio Antonio. "Protection des données privées dans les réseaux sociaux." Phd thesis, Télécom ParisTech, 2012. http://pastel.archives-ouvertes.fr/pastel-00932360.
Full textOmnès, Thierry J.-F. "Acropolis : un précompilateur de spécification pour l'exploration du transfert et du stockage des données en conception de systèmes embarqués à Haut Débit." Paris, ENMP, 2001. http://www.theses.fr/2001ENMP0995.
Full textTran, Viet-Trung. "Sur le passage à l'échelle des systèmes de gestion des grandes masses de données." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00783724.
Full textDuminuco, Alessandro. "Redondance et maintenance des données dans les systèmes de sauvegarde de fichiers pair-à-pair." Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005541.
Full textMonteiro, Julian. "Modélisation et analyse des systèmes de stockage fiable de données dans des réseaux pair-à-pair." Phd thesis, Université de Nice Sophia-Antipolis, 2010. http://tel.archives-ouvertes.fr/tel-00545724.
Full textNguyen, Cong-Danh. "Workload- and Data-based Automated Design for a Hybrid Row-Column Storage Model and Bloom Filter-Based Query Processing for Large-Scale DICOM Data Management." Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC019/document.
In the health care industry, ever-increasing medical image data, the development of imaging technologies, the long-term retention of medical data and increasing image resolution are causing tremendous growth in data volume. In addition, the variety of acquisition devices and the differing preferences of physicians and other health-care professionals have led to high variety in the data. Although the DICOM (Digital Imaging and Communication in Medicine) standard is today widely adopted to store and transfer medical data, DICOM data still has the 3V characteristics of Big Data: high volume, high variety and high velocity. Besides, there is a variety of workloads, including Online Transaction Processing (OLTP), Online Analytical Processing (OLAP) and mixed workloads. Existing systems have limitations in dealing with these characteristics of data and workloads. In this thesis, we propose new efficient methods for storing and querying DICOM data. We propose a hybrid storage model of row and column stores, called HYTORMO, together with data storage and query processing strategies. First, HYTORMO is designed and implemented for deployment in large-scale environments, making it possible to manage big medical data. Second, the data storage strategy combines vertical partitioning and a hybrid store to create data storage configurations that reduce storage space demand and increase workload performance. To achieve such a configuration, one of two design approaches can be applied: (1) expert-based design and (2) automated design. In the former, experts manually create data storage configurations by grouping attributes and selecting a suitable data layout for each column group. In the latter, we propose a hybrid automated design framework, called HADF. HADF relies on similarity measures (between attributes) that take into consideration the combined impact of both workload- and data-specific information to generate data storage configurations: Hybrid Similarity (a weighted combination of Attribute Access and Density Similarity measures) is used to group attributes into column groups; Inter-Cluster Access Similarity is used to determine whether two column groups will be merged (to reduce the number of joins); and Intra-Cluster Access Similarity is applied to decide whether a column group will be stored in a row or a column store. Finally, we propose a suitable and efficient query processing strategy built on top of HYTORMO. It considers the use of both inner joins and left-outer joins, and an Intersection Bloom filter (IBF) is applied to reduce network I/O cost. We provide experimental evaluations to validate the benefits of the proposed methods on real DICOM datasets. Experimental results show that the mixed use of row and column stores outperforms a pure row store and a pure column store. The combined impact of both workload- and data-specific information helps HADF produce good data storage configurations. Moreover, the query processing strategy using the IBF improves the execution time of an experimental query by up to 50% compared to the case where no filter is applied.
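The Bloom-filter pruning used in the join strategy rests on a standard structure: a bit array that answers "might this key be on the other side of the join?" with no false negatives. The sketch below shows a single filter probed before shipping rows; it illustrates the principle, not HYTORMO's actual IBF implementation:

```python
# Bloom filter sketch: drop join rows that cannot match, before the network.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 16, k=4):
        self.m, self.k, self.bits = m_bits, k, bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):  # false positives possible, never negatives
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

keys = BloomFilter()
for k in ("patient-1", "patient-7"):   # join keys present on the other side
    keys.add(k)
rows = ["patient-1", "patient-3", "patient-7"]
print([r for r in rows if keys.might_contain(r)])  # likely ['patient-1', 'patient-7']
```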
Dimopoulou, Melpomeni. "Techniques de codage pour le stockage à long terme d’images numériques dans l’ADN synthétique." Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4073.
Data explosion is one of the greatest challenges of digital evolution, causing storage demand to grow at such a rate that it cannot be met by the actual capabilities of devices. The digital universe is forecast to grow to over 175 zettabytes by 2025, while 80% of this data is infrequently accessed ("cold" data), yet safely archived on offline tape drives for security and regulatory-compliance reasons. At the same time, conventional storage devices have a limited lifespan of 10 to 20 years and must therefore be frequently replaced to ensure data reliability, a process which is expensive both in money and energy. Recent studies have shown that, thanks to its biological properties, DNA is a very promising candidate for the long-term archiving of "cold" digital data for centuries or even longer, on condition that the information is encoded in a quaternary stream made up of the symbols A, T, C and G, representing the four components of the DNA molecule, while also respecting some important encoding constraints. Pioneering works have proposed different algorithms for DNA coding, leaving room for further improvement. In this thesis we present novel image coding techniques for the efficient storage of digital images in DNA. We implemented a novel fixed-length algorithm for the construction of a robust quaternary code that respects the biological constraints, and proposed two different mapping functions to allow flexibility according to the encoding needs. Furthermore, since one of the main challenges of DNA data storage is the expensive cost of DNA synthesis, we make a first attempt to introduce controlled compression into the proposed encoding workflow. The proposed codec is competitive with the state of the art. Our end-to-end coding/decoding solution has also been tested in a wet-lab experiment, proving the feasibility of the theoretical study in practice.
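One of the biological constraints mentioned above is avoiding homopolymers (runs of the same nucleotide). A classic way to guarantee this, used here purely as an illustration (a Goldman-style rotating code, not the fixed-length codec proposed in the thesis), is to let each base-3 digit choose the next nucleotide among the three bases that differ from the previous one:

```python
# Rotating-code sketch: base-3 digits -> DNA with no repeated nucleotide.
BASES = "ACGT"

def encode(trits, prev="A"):
    strand = []
    for t in trits:                                # each t is 0, 1 or 2
        choices = [b for b in BASES if b != prev]  # 3 non-repeating options
        prev = choices[t]
        strand.append(prev)
    return "".join(strand)

def decode(strand, prev="A"):
    trits = []
    for base in strand:
        choices = [b for b in BASES if b != prev]
        trits.append(choices.index(base))
        prev = base
    return trits

message = [2, 0, 1, 1, 2, 0]
dna = encode(message)                 # 'TAGCTA': no homopolymer runs
assert decode(dna) == message
```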
Full textMao, Fei. "Réalisation des nanostructures désirées en or et en argent par effet thermique local induit optiquement : Application au stockage de données et à l’imprimante couleur." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASN011.
This work focuses on the investigation of plasmonic gold (Au) and silver (Ag) nanoparticles (NPs) obtained by an optically induced local thermal dewetting technique, and on their applications. First, Au and Ag NPs are fabricated by a thermal annealing method using a hot oven. This technique yields Au and Ag NPs randomly distributed over a large area. The NP sizes and properties are controlled by the annealing conditions, such as annealing temperature and duration. The plasmonic properties of the Au and Ag NPs are characterized experimentally and compared with simulations performed by the FDTD method. These large-area Au and Ag NPs are shown to be useful for applications in fluorescence enhancement and random lasing. Second, we demonstrate a robust way to realize desired plasmonic nanostructures using a direct laser writing method. This technique is based on an optically induced local thermal effect, allowing the realization of NPs in a small area, i.e., the focal spot. By moving the laser spot, any desired plasmonic structure can be realized. The NP sizes and distributions can be controlled by the exposure dose (laser power and exposure time) and the trajectory of the focal spot, resulting in different reflection or transmission colors. By focusing a continuous-wave laser at 532 nm on 50 nm-thick Au films, we demonstrated for the first time the direct fabrication of plasmonic nanohole arrays. These fabricated structures show great potential for many applications, such as data storage, color nanoprinting, fluorescence enhancement, and plasmonics-based random lasing.
Contreras, Villalobos Kevin. "Conception, validation et mise en oeuvre d’une architecture de stockage de données de très haute capacité basée sur le principe de la photographie Lippmann." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112017/document.
Holographic data storage is nowadays attracting renewed interest. It seems well placed to lead a new generation of optical storage with capacities and readout speeds much higher than current optical discs, which are based on surface recording. In this thesis, we propose a new architecture for optical data storage based on the principle of Lippmann interferential photography. Information is recorded in the volume of the recording material in the form of data pages, multiplexed in wavelength by exploiting Bragg selectivity. This technique, although very similar to holography, had never been considered for high storage capacities. The aim of the thesis was to analyze this new architecture to determine the conditions that can lead to very high capacities. This analysis was based on a numerical simulation tool for the diffraction processes involved in this interferential memory. It allowed us to define two conditions under which these high capacities are achievable. In accordance with these conditions, we built a demonstrator called "Lippmann memory" and thus demonstrated experimentally that the capacity is proportional to the thickness of the recording material. With such an architecture, terabyte discs of 12 cm in diameter can be expected.