Dissertations / Theses on the topic 'Réduction du stockage des données'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Réduction du stockage des données.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Jemel, Mayssa. "Stockage des données locales : sécurité et disponibilité." Electronic Thesis or Diss., Paris, ENST, 2016. http://www.theses.fr/2016ENST0053.
Due to technological advancements, people constantly manipulate multiple connected and smart devices in their daily lives. Cross-device data management therefore remains the concern of several academic and industrial studies. The proposed frameworks are mainly based on proprietary solutions, also called private or closed solutions. This strategy has shown its deficiencies regarding security, cost, developer support and customization. In recent years, however, the Web has seen a revolution in the development of standardized solutions, triggered by the significant improvements of HTML5. With this new version, innovative features and APIs are introduced to follow business and user requirements. The main purpose is to provide the web developer with a vendor-neutral language that enables the implementation of competitive applications at lower cost. These applications are tied neither to the devices used nor to the installed software. The main motivation of this PhD thesis is to migrate towards the adoption of standardized solutions to ensure secure and reliable cross-device data management on both the client and server sides. A standardized Cloud Digital Safe following the AFNOR specification has already been proposed for server-side storage, while no standardized solution yet exists on the client side. This thesis therefore focuses on two main areas: 1) the proposal of a standardized Client Digital Safe where user data are stored locally, and 2) the synchronization of these data between the Client and the Cloud Digital Safe and between the different user devices. We contribute to this research area in three ways. First, we propose a Client Digital Safe based on the HTML5 Local Storage APIs, and we start by strengthening the security of these APIs for use by our Client Digital Safe. Second, we propose an efficient synchronization protocol called SyncDS with minimal resource consumption that ensures the synchronization of user data between the Client and the Cloud Digital Safe. Finally, we address security concerns, in particular access control for data sharing, following the Digital Safe requirements.
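The abstract does not detail SyncDS's actual message exchange, so the following is only a minimal, hypothetical sketch of version-based client/cloud synchronization, where per-key version counters keep the exchanged data to a minimum; the dictionary stores and the last-writer-wins rule are illustrative assumptions, not the thesis's protocol.

```python
# Hypothetical sketch: each store maps key -> (version, value); only entries
# whose version counters differ are exchanged, echoing the protocol's
# "minimum resource consumption" goal. Not the actual SyncDS design.

def sync(client: dict, cloud: dict) -> None:
    """Reconcile two stores; the higher version wins (last-writer-wins)."""
    for key in set(client) | set(cloud):
        c_ver, c_val = client.get(key, (0, None))
        s_ver, s_val = cloud.get(key, (0, None))
        if c_ver > s_ver:          # client is ahead: push to the cloud
            cloud[key] = (c_ver, c_val)
        elif s_ver > c_ver:        # cloud is ahead: pull to the client
            client[key] = (s_ver, s_val)

client = {"note": (2, "draft v2")}
cloud = {"note": (1, "draft v1"), "photo": (1, b"...")}
sync(client, cloud)
assert client["note"] == cloud["note"] == (2, "draft v2")
assert client["photo"] == cloud["photo"]
```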
Bouabache, Fatiha. "Stockage fiable des données dans les grilles, application au stockage des images de checkpoint." Paris 11, 2010. http://www.theses.fr/2010PA112329.
Rollback/recovery solutions rely on checkpoint storage reliability (after a failure, if the checkpoint images are not available, the rollback operation fails). The goal of this thesis is to propose a reliable and efficient checkpoint storage service. By reliable, we mean that whatever the failure scenario, as long as it respects the assumptions made by the algorithms, the checkpoint images remain available. By efficient, we mean minimizing the time required to transfer and store the checkpoint images, which minimizes the global execution time of the checkpoint waves. To ensure these two points (reliability and efficiency), we propose: 1. A new coordinated checkpoint protocol which tolerates checkpoint server failures and cluster failures, and ensures checkpoint storage reliability in a grid environment; 2. A distributed storage service structured in a three-layer architecture: a) The replication layer: to ensure checkpoint storage reliability, we propose to replicate the images over the network. In this direction, we propose two hierarchical replication strategies adapted to the considered architecture that exploit the locality of checkpoint images in order to minimize inter-cluster communication. b) The scheduling layer: at this level we work on storage efficiency by reducing the data transfer time. We propose an algorithm based on the uniform random sampling of possible schedules. c) The scheduling engine: at this layer, we develop a tool that implements the scheduling plan calculated in the scheduling layer.
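The scheduling layer's uniform random sampling of possible schedules can be illustrated with a toy model: draw random image-to-server assignments, score each by its makespan, and keep the best one. The cost function below is a deliberately crude placeholder for the thesis's transfer model.

```python
import random

def makespan(schedule, transfer_time, n_servers):
    """Completion time of the busiest server for a given assignment."""
    load = [0.0] * n_servers
    for image, server in schedule:
        load[server] += transfer_time[image]
    return max(load)

def sample_schedules(transfer_time, n_servers, n_samples=1000, seed=42):
    """Uniformly sample random image->server assignments, keep the best."""
    rng = random.Random(seed)
    images = list(transfer_time)
    best, best_cost = None, float("inf")
    for _ in range(n_samples):
        schedule = [(img, rng.randrange(n_servers)) for img in images]
        cost = makespan(schedule, transfer_time, n_servers)
        if cost < best_cost:
            best, best_cost = schedule, cost
    return best, best_cost

times = {"ckpt_a": 4.0, "ckpt_b": 2.5, "ckpt_c": 1.0, "ckpt_d": 3.5}
plan, cost = sample_schedules(times, n_servers=2)
print(plan, cost)
```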
Lopez, Olivier. "Réduction de dimension en présence de données censurées." Phd thesis, Rennes 1, 2007. http://tel.archives-ouvertes.fr/tel-00195261.
… explanatory variable. We develop a new dimension reduction approach to solve this problem.
Devigne, Julien. "Protocoles de re-chiffrement pour le stockage de données." Caen, 2013. http://www.theses.fr/2013CAEN2032.
Privacy is one of the main issues of our modern-day society, in which the Internet is omnipresent. In this thesis, we study techniques that make it possible to realise privacy-preserving cloud storage. In this setting, we focus on protecting stored data while allowing their owner to share them with people of his choice. Proxy re-encryption, one of the primitives offered by cryptography, is the solution we chose to consider. First, we give a definition of a proxy re-encryption system unifying all existing conventional models. We also describe the usual characteristics that this primitive may present and we provide its security model. Then, we focus more precisely on specific schemes in order to improve their security. To this end, we present a method which turns a scheme secure against replayable chosen-ciphertext attacks into a scheme secure against chosen-ciphertext attacks. We also study schemes based on Hash ElGamal encryption and propose some modifications in order to reach better security. Finally, in order to obtain the most functional cloud storage, we propose two new models. The first one, which we call combined proxy re-encryption, offers dynamic access rights. The second one, which we call selective proxy re-encryption, enables more fine-grained access-right control than that offered by conditional proxy re-encryption.
Hadjar, Abdelkader. "Catalyseurs électrochimiques pour le stockage et la réduction des oxydes d'azote (NOx)." Thesis, Lyon 1, 2009. http://www.theses.fr/2009LYO10111.
The main objective of this study was to demonstrate the coupling of the NOx storage/reduction process on barium with an electrochemical reduction of NOx (micro fuel cell effect) on the same catalyst. The micro fuel cell effect is driven by an electromotive force (potential) created between catalytic nanoparticles (Pt and Rh) in contact with an ionic conductor (YSZ) and an electronic conductor (doped SiC). The micro fuel cell effect was observed during the regeneration phase of the catalysts (rich period) on a Pt/Ba/doped α-SiC-YSZ/Rh monolithic system under lean-burn gasoline conditions at 400°C, with an enhancement of about 10% of the NOx conversion over a complete lean/rich cycle. This electrochemical effect was characterized by the electrochemical oxidation of CO (produced by steam reforming) into CO2 using O2- ions coming from the YSZ. Under Diesel conditions, the micro fuel cell system was found to work at low temperature, especially at 300°C. In the second part of the work, a new generation of NOx storage and reduction catalysts was developed, consisting only of noble metals (Pt and/or Rh) deposited on a YSZ support (Ba-free catalyst). The catalytic measurements revealed that YSZ can be used as a NOx storage material under lean-burn conditions (gasoline and Diesel), especially when previously reduced under hydrogen. The storage mechanism would take place on the oxygen vacancies created by the removal of O2- ions from the YSZ structure.
Berland, Sébastien. "Préparation, caractérisation et activité de matériaux pour la réduction des NOx par l'ammoniac ; Association au catalyseur de stockage-réduction." Poitiers, 2011. http://nuxeo.edel.univ-poitiers.fr/nuxeo/site/esupversions/5113f776-4c92-453d-8e7b-4655c49cce2f.
This work deals with exhaust gas cleaning and more particularly with the combination of two NOx reduction processes: NSR (NOx Storage-Reduction) and SCR (Selective Catalytic Reduction). Under operating conditions, NSR catalysts are likely to emit ammonia, which is itself a good NOx reductant. Adding an acidic, SCR-NH3-active material in a second catalytic bed downstream can use this ammonia to increase the overall NOx reduction. The chosen NSR catalyst is of Pt-Ba/Al type, which, in the operation of the system (alternating oxidizing phases of NOx storage and short reductant pulses), leads to high ammonia selectivity when H2 is used as the reducing agent. For the second catalytic bed, three types of material have been studied: industrial materials, WO3/Ce-Zr of variable Ce-Zr composition, and materials synthesized in the laboratory (sol-gel route): starting from an alumina base, successive incorporations of Ce, Ti and Si yielded active materials, further improved by the addition of tungsten. The materials were characterized by different techniques: XRD, BET, acidity measurements (NH3 and pyridine adsorption/storage), reducibility measurements (TPR-H2, OSC), reactivity tests (NH3 + NOx, NH3 + O2), etc. The association of the two processes (NSR + SCR) showed that on SCR-NH3 materials, NOx are reduced according to two reactions: "fast SCR-NH3" (200, 300 and 400°C) and "standard SCR-NH3" (at 200°C). Furthermore, part of the ammonia may also react with O2 to give N2 (300-400°C), and the storage of NH3 at 400°C remains insufficient.
Khelil, Amar. "Elaboration d'un système de stockage et exploitation de données pluviométriques." Lyon, INSA, 1985. http://www.theses.fr/1985ISAL0034.
The Lyon District Urban Area (COURLY) may be described from a hydrological point of view as a 600 km² area equipped with a sewerage system comprising an estimated 2,000 km of pipes. Due to the complexity of the area's sewerage network, it must be controlled by an accurate and reliable calculation system to avoid any negative consequences of its operation. The present computer system, SERAIL, allows an overall simulation of the functioning of the drainage/sewerage system. This model requires accurate rainfall-rate information, which was not previously available. Therefore a network of 30 rain gauges (with in situ cassette recording) was set up within the Urban District Area in 1983. This research covers three steps: 1) installing the network; 2) building a data checking and storage system; 3) analysing the data. The distinctive part of this work is the data analysis system. It makes it easy to extract and analyse any rainfall event of interest to the hydrologist. Two aims were defined: 1) to get a better understanding of the phenomena (point representations); 2) to build models. In order to achieve the second aim, it was necessary to consider the fitting of the proposed models and their limits, which led to the development of several other programs for checking and comparison. For example, a complete analysis of a rainfall event is given with comments and conclusions.
Jule, Alan. "Etude des codes en graphes pour le stockage de données." Thesis, Cergy-Pontoise, 2014. http://www.theses.fr/2014CERG0739.
For two decades, the digital revolution has been gathering pace. The spread of digital solutions, together with the improving quality of these products, drives growth in the amount of data stored. The cost per byte reveals that hardware storage solutions cannot keep up with this expansion. Therefore, data storage solutions need deep improvement. This is feasible by increasing the storage network size and by reducing data duplication in the data center. In this thesis, we introduce a new algorithm that combines sparse graph code construction and node allocation. This algorithm may achieve the highest performance of MDS codes in terms of the ratio R between the number of parity disks and the number of failures that can be simultaneously reconstructed. In addition, encoding and decoding with sparse graph codes helps lower the complexity. This algorithm makes it possible to generalize coding in the data center in order to reduce the number of copies of original data. We also study Spatially-Coupled LDPC (SC-LDPC) codes, which are known to have optimal asymptotic performance over the binary erasure channel, in order to anticipate the decoding behavior of these codes in distributed storage applications. It is usually necessary to compromise between different parameters for a distributed storage system. To complete the state of the art, we include two theoretical studies. The first study deals with the computational complexity of data updates, and we determine whether linear codes used for data storage are update-efficient or not. In the second study, we examine the impact on the network load when the code parameters are changed. This can be done when the file status changes (from a hot status to a cold status, for example) or when the size of the network is modified by adding disks. All these studies, combined with the new algorithm for sparse graph codes, could lead to the construction of new flexible and dynamic networks with low encoding and decoding complexities.
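For context on the ratio R mentioned above: an MDS code with k data blocks and m parity blocks can rebuild any m simultaneous failures, so R = m/m = 1, while sparse graph codes trade a slightly larger R for much cheaper encoding and decoding. A single-parity XOR code, the simplest MDS example (m = 1), illustrates the arithmetic; this is purely illustrative, not the thesis's sparse-graph construction.

```python
from functools import reduce

def xor_parity(blocks: list) -> bytes:
    """Single parity block: the bytewise XOR of the data blocks. This is
    an MDS code with m = 1, hence R = 1: one parity disk repairs one failure."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def recover(blocks: list, parity: bytes) -> list:
    """Rebuild the single missing block by XOR-ing parity with survivors."""
    missing = blocks.index(None)
    survivors = [b for b in blocks if b is not None] + [parity]
    blocks[missing] = xor_parity(survivors)
    return blocks

data = [b"abcd", b"efgh", b"ijkl"]
p = xor_parity(data)
damaged = [b"abcd", None, b"ijkl"]      # one simultaneous failure
assert recover(damaged, p)[1] == b"efgh"
```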
Atigui, Faten. "Approche dirigée par les modèles pour l’implantation et la réduction d’entrepôts de données." Thesis, Toulouse 1, 2013. http://www.theses.fr/2013TOU10044/document.
Our work handles decision support systems based on multidimensional Data Warehouses (DW). A DW is a huge collection of data, often historical, used for complex and sophisticated analysis. It supports the business processes within an organization. The data relevant for the decision-making process are collected from data sources by means of software processes commonly known as ETL (Extraction-Transformation-Loading) processes. The study of existing systems and methods reveals two major limitations. When building a DW, the designer deals with two major issues: the first concerns the DW's design, whereas the second addresses the design of the ETL processes. Current frameworks provide partial solutions that focus either on the multidimensional structure or on the ETL processes, yet both could benefit from each other. However, few studies have considered these issues in a unified framework and provided solutions to automate all of these tasks. From its creation, a DW holds a large amount of data, mainly historical. Looking at decision makers' analyses over time, we can see that they are usually less interested in old data. To overcome these shortcomings, this thesis aims to formalize the development of a time-varying (with a temporal dimension) DW from its design to its physical implementation. We use Model-Driven Engineering (MDE), which automates the process and thus significantly reduces development costs and improves software quality. The contributions of this thesis are summarized as follows: 1. To formalize and automate the development of a time-varying DW within a model-driven approach that provides: - A set of unified (conceptual, logical and physical) metamodels that describe data and transformation operations. - An OCL (Object Constraint Language) extension that aims to conceptually formalize the transformation operations. - A set of transformation rules that maps the conceptual model to the logical and physical models. - A set of transformation rules that generates the code. 2. To formalize and automate historical data reduction within a model-driven approach that provides: - A set of (conceptual, logical and physical) metamodels that describe the reduced data. - A set of reduction operations. - A set of transformation rules that implement these operations at the physical level. In order to validate our proposals, we have developed a prototype composed of three parts. The first part performs the transformation of models to lower-level models. The second part transforms the physical model into code. The last part performs the DW reduction.
Kwémou, Djoukoué Marius. "Réduction de dimension en régression logistique, application aux données actu-palu." Thesis, Evry-Val d'Essonne, 2014. http://www.theses.fr/2014EVRY0030/document.
This thesis is devoted to variable selection and model selection in logistic regression. The applied part focuses on the analysis of data from a large socio-epidemiological survey called actu-palu. Such large socio-epidemiological surveys typically involve a considerable number of explanatory variables; this is known as the high-dimensional setting. Due to the curse of dimensionality, the logistic regression model is no longer reliable. We proceed in two steps: a first step reduces the number of variables using the Lasso, group Lasso and random forest methods; the second step applies the logistic model to the subset of variables selected in the first step. These methods helped select relevant variables for identifying households at risk of a febrile episode among children aged 2 to 10 in Dakar. In the methodological part, as a first step, we propose weighted versions of the Lasso and group Lasso estimators for the nonparametric logistic model, and we prove non-asymptotic oracle inequalities for these estimators. Secondly, we extend the model selection principle introduced by Birgé and Massart (2001) to the logistic regression model. This selection is done using penalized maximum likelihood criteria. We propose in this context a completely data-driven criterion based on the slope heuristics. We prove non-asymptotic oracle inequalities for the selected estimators. The results of the methodological part are illustrated through simulation studies.
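The two-step procedure (L1-based selection, then an ordinary logistic fit on the surviving variables) can be sketched with scikit-learn; the synthetic data below stands in for the actu-palu survey, and the regularization strength is an illustrative choice, not the thesis's tuning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))              # 50 candidate explanatory variables
logits = 1.5 * X[:, 0] - 2.0 * X[:, 3]      # only two are truly relevant
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

# Step 1: Lasso-type selection via an L1-penalised logistic fit.
selector = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
selector.fit(X, y)
kept = np.flatnonzero(selector.coef_[0])
print("selected variables:", kept)

# Step 2: refit a (practically) unpenalised logistic model on the subset;
# the huge C makes the default L2 penalty negligible.
final = LogisticRegression(C=1e6).fit(X[:, kept], y)
print("refit coefficients:", final.coef_[0])
```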
Wright, Sophie. "Données obstétricales et néonatales précoces des grossesses gémellaires après réduction embryonnaire." Montpellier 1, 1994. http://www.theses.fr/1994MON11106.
Full textBel, Liliane. "Sur la réduction des modèles linéaires : analyse de données en automatique." Paris 11, 1985. http://www.theses.fr/1985PA112306.
Two state-space model reduction methods are studied: the aggregation method and the balanced state-space representation method. In the case of aggregation, a new method of selecting eigenvalues is proposed, which is both geometrical and sequential. Problems of robustness of aggregation are raised and resolved in some particular cases. The balanced state-space representation is approached by means of controllability and observability degrees. The notion of perturbability degree is introduced. We then study the application of these two methods to reduced-order compensator design. The two methods are finally applied to the system representing the Ariane launch vehicle in flight.
Obame, Meye Pierre. "Sûreté de fonctionnement dans le nuage de stockage." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S091/document.
The quantity of data in the world is steadily increasing, challenging storage system providers to find ways to handle data efficiently in terms of dependability and in a cost-effective manner. We have been interested in cloud storage, which is a growing trend in data storage solutions. For instance, the International Data Corporation (IDC) predicts that by 2020, nearly 40% of the data in the world will be stored or processed in a cloud. This thesis addresses challenges around data access latency and dependability in cloud storage. We proposed Mistore, a distributed storage system that we designed to ensure data availability, durability and low access latency by leveraging the Digital Subscriber Line (xDSL) infrastructure of an Internet Service Provider (ISP). Mistore uses the available storage resources of a large number of home gateways and Points of Presence for content storage and caching facilities. Mistore also targets data consistency by providing multiple types of consistency criteria on content and a versioning system. We also considered data security and confidentiality in the context of storage systems applying data deduplication, which is becoming one of the most popular techniques to reduce storage costs, and we designed a two-phase data deduplication protocol that is secure against malicious clients while remaining efficient in terms of network bandwidth and storage space savings.
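A common way to make client-side deduplication robust against malicious clients, in the spirit of the two-phase scheme mentioned here, is to follow the fingerprint lookup with a proof-of-possession challenge before granting ownership without an upload. The sketch below is a generic illustration of that idea, not necessarily the exact protocol of the thesis.

```python
import hashlib
import os

store = {}  # server side: fingerprint -> content

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Phase 1: the client announces a fingerprint; the server answers whether
# the content is already stored (so the upload could be skipped).
def phase1_exists(fp: str) -> bool:
    return fp in store

# Phase 2: before registering the client as an owner without an upload,
# the server challenges it to prove possession of the actual bytes.
def phase2_challenge() -> bytes:
    return os.urandom(16)

def proof(data: bytes, nonce: bytes) -> str:
    return hashlib.sha256(nonce + data).hexdigest()

def phase2_verify(fp: str, nonce: bytes, answer: str) -> bool:
    return answer == proof(store[fp], nonce)

data = b"shared document"
store[fingerprint(data)] = data          # first uploader stores the bytes
fp = fingerprint(data)
if phase1_exists(fp):                    # second client: deduplication path
    nonce = phase2_challenge()
    assert phase2_verify(fp, nonce, proof(data, nonce))
```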
Secret, Ghislain. "La maintenance des données dans les systèmes de stockage pair à pair." Amiens, 2009. http://www.theses.fr/2009AMIE0111.
Peer-to-peer systems are designed to share resources on the Internet. The independence of the architecture from a centralized server gives peer-to-peer networks very high fault tolerance (no peer is essential to the functioning of the network). This property makes this architecture very suitable for large-scale permanent data storage. However, peer-to-peer systems are characterised by peer volatility: peers connect and disconnect randomly. The challenge is to ensure data continuity on a constantly changing storage medium. To cope with peer volatility, data redundancy schemes coupled with mechanisms for reconstructing lost data are introduced. But the reconstructions needed to maintain data continuity are not neutral in terms of burden on the system. To investigate the factors that drive up the data maintenance cost, a model of a peer-to-peer storage system was designed. This model is based on an IDA (Information Dispersal Algorithm) redundancy scheme. Built on this model, a simulator was developed, and the system's behaviour with respect to the cost of data regeneration was analyzed. Two reconstruction strategies are examined. The first mechanism is based on a threshold on the level of data redundancy; it requires constant monitoring of the data state. The second strategy limits the number of reconstructions through a quota allocated for a defined period of time; it is less comfortable psychologically because it significantly reduces control over the data state by abstracting away the threshold mechanism. Based on a stochastic analysis of the strategies, guidelines are provided to set the system parameters according to the desired durability target.
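The two strategies can be contrasted in a toy churn simulation: the threshold strategy rebuilds to full redundancy whenever the fragment count drops below a watermark, while the quota strategy repairs at most a fixed number of fragments per period. All parameters and loss rates below are arbitrary stand-ins for the thesis's stochastic model.

```python
import random

def simulate(strategy, n_fragments=12, threshold=8, quota=2,
             rounds=200, loss_p=0.15, seed=1):
    """Count the reconstructions needed to keep an IDA-coded object alive.
    'threshold': rebuild to full redundancy when fragments < threshold.
    'quota': rebuild at most `quota` fragments per round, unconditionally."""
    rng = random.Random(seed)
    alive, rebuilds = n_fragments, 0
    for _ in range(rounds):
        alive -= sum(rng.random() < loss_p for _ in range(alive))  # churn
        if strategy == "threshold" and alive < threshold:
            rebuilds += n_fragments - alive
            alive = n_fragments
        elif strategy == "quota":
            repaired = min(quota, n_fragments - alive)
            rebuilds += repaired
            alive += repaired
    return rebuilds

for s in ("threshold", "quota"):
    print(s, simulate(s))
```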
Kiefer, Renaud. "Etude et conception d'un système de stockage et d'adressage photonique de données." Université Louis Pasteur (Strasbourg) (1971-2008), 2002. http://www.theses.fr/2002STR13199.
The increase in the speed of microprocessors and the evolution of multimedia and of the Internet have created a growing need for data storage solutions. Encouraged by the rapid technological progress over the past decade, this need has grown exponentially. Even if DVD technology satisfies the present data storage demand (about 10 bits/µm²), certain new applications such as 3D imaging and huge databases require the development of new technologies. The objective of this thesis has been to study and design a data storage and addressing system based on holographic memories. This kind of memory shows interesting possibilities for massive-volume data storage (about 100 bits/µm³). The system provides rapid access (ms), over a large angular bandwidth, to any information stored in the diffractive memory. Analysis of optical memories based on dichromated gelatin allowed the determination of their domain of use and set the constraints of the addressing system. The originality of the work has been to associate MEMS (integrated micro-mirrors) with an acousto-optic cell. We measured the deformation of the MEMS to evaluate its influence on the reading of the information stored in diffractive memories. Experimental results show the possibility of obtaining an addressing rate of 100 Gbit/s. The reading system's limitations are due to the low oscillation frequency of the MEMS and principally to the low acquisition rate of the CCD camera. The use of high-speed cameras will make it possible to increase the readout rate.
Barrabe, Patrice. "Acquisition et transmission optique de données." Grenoble 1, 1990. http://www.theses.fr/1990GRE10121.
Full textDuquesne, Marie. "Résolution et réduction d'un modèle non-linéaire de stockage d'énergie par adsorption sur des zéolithes." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00821894.
Barkat, Okba. "Utilisation conjointe des ontologies et du contexte pour la conception des systèmes de stockage de données." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0001/document.
We are witnessing an era when any company is strongly interested in collecting and analyzing data from heterogeneous and varied sources. These sources also have another specificity, namely context awareness. Three complementary problems are identified: (i) resolving the heterogeneity of the sources, (ii) building a decision-support integration system, and (iii) taking the context into account in this integration. To solve these problems, this thesis focuses on the design of contextual applications based on a domain ontology. To do this, we first propose a context model that integrates the main dimensions identified in the literature. Once built, it is linked to the ontology model. This approach increases flexibility in the design of advanced applications. Then, we propose two case studies: (1) the contextualization of semantic data sources, where we extend the OntoBD/OntoQL system to take the context into account, and (2) the design of a contextual data warehouse, where the context model is projected onto the different phases of the design life cycle. To validate our proposal, we present a tool implementing the different phases of the proposed design approach.
Labbé, Sébastien. "Réduction paramétrée de spécifications formées d'automates communicants : algorithmes polynomiaux pour la réduction de modèles." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00180174.
The idea we propose consists in working around this phenomenon by applying parameterized reduction techniques, known as "slicing", upstream of a complex analysis. The analysis can then be performed a posteriori on a reduced specification, which is therefore potentially less subject to combinatorial explosion. Our parameterized reduction method is based on dependence relations in the specification under analysis, and builds mainly on work done by the compilation and program slicing communities. In this thesis we establish a theoretical framework for static analyses of specifications made of communicating automata, in which we formally define the dependence relations mentioned above, as well as the concept of a specification "slice" with respect to a reduction "criterion". We then describe and prove the efficient algorithms we devised for computing the dependence relations and specification slices, and finally we describe our implementation of these algorithms in the "Carver" tool for the parameterized reduction of specifications made of communicating automata.
Gabsi, Nesrine. "Extension et interrogation de résumé de flux de données." Paris, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00613122.
In the last few years, a new environment has emerged in which data have to be collected and processed instantly upon arrival. To handle the large volume of data associated with this environment, new data processing models and techniques have to be set up; they are referred to as data stream management. Data streams are usually continuous, voluminous, and cannot be stored integrally as persistent data. Many research works have handled this issue, and new systems called DSMS (Data Stream Management Systems) have appeared. A DSMS evaluates continuous queries on a stream or a window (a finite subset of a stream). These queries have to be specified before the stream's arrival. Nevertheless, for some applications, some data may be required after their expiration from the DSMS in-memory store. In this case, the system cannot process such queries, as the data are definitely lost. To handle this issue, it is essential to keep a summary of the data stream. Many summarization algorithms have been developed; the selection of a summarizing method depends on the kind of data and the associated problem. In this thesis, we are first interested in the elaboration of a generic summary structure that strikes a compromise between the summary construction time and the quality of the summary. We introduce a new summary approach that is more efficient for querying very old data. Then, we focus on the querying methods for these summaries. Our objective is to integrate the generic summary structure into the architecture of existing DSMS. In this way, we extend the range of possible queries, so that queries on old stream data (expired data) become possible as well as queries on new stream data. To this end, we introduce two approaches; the difference between them is the role played by the summary module when the query is evaluated.
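As one concrete example of the kind of query-able stream summary discussed here, reservoir sampling keeps a fixed-size uniform sample of an unbounded stream; it is cited as a standard technique for illustration, not as the summary structure proposed in the thesis.

```python
import random

def reservoir(stream, k, seed=0):
    """Algorithm R: after n items have passed, each one is in the sample
    with probability k/n, using O(k) memory whatever the stream length."""
    rng = random.Random(seed)
    sample = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(n)         # uniform in [0, n)
            if j < k:
                sample[j] = item         # replace with probability k/n
    return sample

print(reservoir(range(10**5), k=10))
```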
Yin, Shaoyi. "Un modèle de stockage et d'indexation pour des données embarquées en mémoire flash." Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0008.
NAND Flash has become the most popular stable storage medium for embedded systems. Efficient storage and indexing techniques are very challenging to design due to a combination of NAND Flash constraints and embedded system constraints. In this thesis, we propose a new model relying on two basic principles: database serialization and database stratification. An indexing technique called PBFilter is presented to illustrate these principles. Analytical and experimental results show that the new approach meets the embedded system requirements very well. The PBFilter technique has been integrated into PlugDB, a complete embedded DBMS engine. PlugDB is used in a real-life application implementing a secure and portable medico-social record. PlugDB can also be seen as a central building block for a global vision named the Personal Data Server, whose objective is to manage personal information in a secure, privacy-preserving and user-controlled way.
Chiky, Raja. "Résumé de flux de données distribués." Paris, ENST, 2009. https://pastel.hal.science/pastel-00005137.
In this thesis, we consider a distributed computing environment in which a collection of multiple remote sensors feeds a unique central server with numeric, uni-dimensional data streams (also called curves). The central server has limited memory but should be able to compute aggregated values over any subset of the stream sources on a large time horizon including both old and new data. Two approaches are studied to reduce the size of the data: (1) spatial sampling considers only a random sample of the sources observed at every instant; (2) temporal sampling considers all sources but samples the instants to be stored. In this thesis, we propose a new approach for temporally summarizing a set of distributed data streams: from the observation of what happens during period t-1, we determine a data collection model to apply to the sensors for period t. The computation of aggregates involves statistical inference in the case of spatial sampling and interpolation in the case of temporal sampling. To the best of our knowledge, there is no method for estimating interpolation errors at each timestamp that takes into account curve features such as knowledge of the integral of the curve over the period. We propose two approaches: one uses the past of the data curve (naive approach) and the other uses a stochastic process for interpolation (stochastic approach).
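The two reduction schemes can be made concrete on a toy sensor matrix: spatial sampling scales up a random subset of sources, while temporal sampling keeps every source but interpolates between sampled instants. The linear interpolation below corresponds to the naive choice, not the stochastic approach of the thesis, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
curves = rng.random((100, 60)).cumsum(axis=1)   # 100 sensors x 60 instants
true_total = curves.sum(axis=0)                 # the aggregate to estimate

# (1) Spatial sampling: observe 20 random sources, scale by 100/20.
idx = rng.choice(100, size=20, replace=False)
spatial_est = curves[idx].sum(axis=0) * (100 / 20)

# (2) Temporal sampling: observe every sensor, but only every 5th instant,
# then reconstruct the missing instants by linear interpolation.
t_obs = np.arange(0, 60, 5)
temporal_est = np.vstack(
    [np.interp(np.arange(60), t_obs, c[t_obs]) for c in curves]
).sum(axis=0)

for name, est in [("spatial", spatial_est), ("temporal", temporal_est)]:
    err = np.abs(est - true_total).mean() / true_total.mean()
    print(f"{name} mean relative error: {err:.3f}")
```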
Fournié, Laurent Henri. "Stockage et manipulation transactionnels dans une base de données déductives à objets : techniques et performances." Versailles-St Quentin en Yvelines, 1998. http://www.theses.fr/1998VERS0017.
Full textHanczar, Blaise. "Réduction de dimension pour l'apprentissage supervisé de données issues de puce à ADN." Paris 13, 2006. http://www.theses.fr/2006PA132012.
Full textDurbiano, Sophie. "Vecteurs caractéristiques de modèles océaniques pour la réduction d'ordre en assimilation de données." Université Joseph Fourier (Grenoble), 2001. http://www.theses.fr/2001GRE10228.
Full textGuittenit, Christophe. "Placement d'objets multimédias sur un groupe hétérogène de dispositifs de stockage." Toulouse 3, 2002. http://www.theses.fr/2002TOU30098.
The administration of a storage system consists in providing each application with a storage space offering a quality of service appropriate to the needs of that application: quality expressed in terms of storage capacity, reliability and availability of storage, and performance in access time and throughput (bandwidth). This thesis studies the automatic administration of a heterogeneous storage system dedicated to serving multimedia objects. After studying and classifying the various placement policies designed to exploit this type of storage system, we propose a new data placement, EFLEX (Entrelacement FLEXible, i.e. "flexible interleaving"), which makes it possible to jointly exploit the bandwidth and the storage capacity of the system…
Khouri, Selma. "Cycle de vie sémantique de conception de systèmes de stockage et manipulation de données." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2013. http://www.theses.fr/2013ESMA0016/document.
Data Warehouses (DWs) have become essential components for companies and organizations. The DW design field has been actively researched in recent years. The main limitation of the proposed approaches is the lack of an overall vision covering the DW design cycle. Our main contribution in this thesis is to propose a method adapted to recent evolutions of the DW design cycle and covering all its phases. These evolutions have given rise to new data storage models and new deployment architectures, which offer different design choices to designers and administrators. The DW literature recognizes the importance of user requirements in the design process, and the importance of accessing and representing data semantics. We propose an ontology-driven design method that valorizes users' requirements by providing them a persistent view in the DW structure. This view allows anticipating diverse design tasks and simulating different design choices. Our second proposal revisits the design cycle by executing the ETL phase (extraction-transformation-loading of data) in the conceptual stage. This proposal allows an à la carte deployment of the DW using the different deployment platforms available.
Khouri, Selma. "Cycle de vie sémantique de conception de systèmes de stockage et manipulation de données." Phd thesis, ISAE-ENSMA Ecole Nationale Supérieure de Mécanique et d'Aérotechique - Poitiers, 2013. http://tel.archives-ouvertes.fr/tel-00926657.
Full textRomito, Benoit. "Stockage décentralisé adaptatif : autonomie et mobilité des données dans les réseaux pair-à-pair." Caen, 2012. http://www.theses.fr/2012CAEN2072.
Full textLe, Hung-Cuong. "Optimisation d'accès au médium et stockage de données distribuées dans les réseaux de capteurs." Besançon, 2008. http://www.theses.fr/2008BESA2052.
Wireless sensor networks have been a very active research topic in the last few years. This technology can be applied in different domains such as the environment, industry, commerce, medicine, and the military. Depending on the application type, the problems and requirements may differ. In this thesis, we are interested in two major problems: medium access control and distributed data storage. The document is divided into two parts: the first part is a state of the art of different existing works, and the second part describes our contribution. In the first contribution, we propose two MAC protocols. The first one optimizes the wireless sensor network lifetime for surveillance applications, and the second one reduces the transmission latency in event-driven wireless sensor networks for critical applications. In the second contribution, we work with several data storage models in wireless sensor networks, focusing on the data-centric storage model. We propose a clustering structure for sensors that improves routing and reduces the number of transmissions in order to prolong the network lifetime.
Crespo-Monteiro, Nicolas. "Photochromisme de films mésoporeux d'oxyde de titane dopés argent appliqué au stockage de données." Thesis, Saint-Etienne, 2012. http://www.theses.fr/2012STET4027.
Silver species adsorbed on colloidal titania have long been known to exhibit photochromism. The color change is due to the reduction of silver salts into metallic nanoparticles under UV illumination and the oxidation of the latter under visible illumination. Recently, a new functionality inducing multicolor photochromism has been reported in nanocomposite materials constituted of silver nanoparticles introduced into a nanoporous titania film. In this dissertation, we study the influence of mesoporous titania matrices with controlled pore sizes on the photochromic behavior of such films. We show that the film porosity makes it possible to control the particles formed under UV illumination, and that it is possible to bleach the photo-induced patterns with monochromatic visible light, although this type of illumination usually colors the film. The use of these materials also markedly improves the temporal stability of photo-induced inscriptions, which makes it possible to use them as rewritable data carriers. We also demonstrate that, above an intensity threshold, it is possible to inscribe permanent patterns with UV or visible illumination, which allows these films to be used as permanent data carriers. Finally, in the last part, we show that it is possible to photo-induce highly reflective dichroic colors in visible light without prior reduction of the silver salts.
Diallo, Thierno Ahmadou. "GRAPP&S, une solution totalement répartie pour le stockage des données et Services." Thesis, Reims, 2016. http://www.theses.fr/2016REIMS006.
Data storage is a crucial point for application development, and particularly for distributed applications. There are many issues related to data storage: ensuring durability, identifying and indexing the data, guaranteeing search and access to the data, and possibly fetching the data for applications' use. There is thus a need to design methods ensuring the effectiveness of all of these properties and operations in heterogeneous environments, in terms of data formats, exchange protocols, and applications. In this thesis, we propose GRAPP&S (Grid APPlication & Services), a multi-scale framework for the unified storage and indexing of both data and services. Indeed, GRAPP&S was designed to support different data formats such as files, streams or database queries, but also to support access to distant services (web services, cloud services or HPC computing services, for example). GRAPP&S coordinates a hierarchical routing architecture that allows data indexing and access, thanks to a multi-scale network of local-area communities. Transparent access is provided by a network of specialized proxies, which handle most aspects related to data location, request handling, data preprocessing or summarization, and data consistency. Finally, we exploit GRAPP&S in the particular context of e-learning. Our solution reduces the cost of merging educational resources distributed over several organizations and supports their use by learners.
Kumar, Sathiya Prabhu. "Cohérence de données répliquées partagées adaptative pour architectures de stockage à fort degré d’élasticité." Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1035/document.
The main contributions of this thesis are threefold. The first contribution focuses on an efficient way to control stale reads in modern database systems with the help of a new consistency protocol called LibRe, an acronym for Library for Replication. The main goal of the LibRe protocol is to ensure data consistency by contacting a minimum number of replica nodes during read and write operations with the help of a library of information. According to the protocol, during write operations each replica node asynchronously updates a registry (library) with the recent version identifier of the updated data. Forwarding read requests to the right replica node, by referring to the registry information, helps control stale reads during read operations. Evaluating data consistency remains challenging both in simulation and in a real-world setup. Hence, we implemented a new simulation toolkit called Simizer that helps evaluate the performance of different consistency policies in a fast and efficient way. We also extended the existing benchmark tool YCSB to evaluate the consistency-latency tradeoff offered by modern database systems. The codebases of the simulator and the extended YCSB are made open source for public access. The performance of the LibRe protocol is validated both in simulation and in a real setup with the help of the extended YCSB. Although modern database systems adapt the consistency guarantees of the system on a per-query basis, anticipating the consistency level of an application query in advance, at application development time, remains challenging for application developers. In order to overcome this limitation, the second contribution of the thesis focuses on enabling the database system to override the application-defined consistency options at run time with the help of an external input. The external input could be given by a data administrator or by an external service. The thesis validates the proposed model with the help of a prototype implementation inside the Cassandra distributed storage system. The third contribution of the thesis focuses on resolving update conflicts. Resolving update conflicts often involves maintaining all possible values and performing the resolution via domain-specific knowledge at the client side. This incurs additional cost in terms of network bandwidth and latency, and considerable complexity. In this thesis, we discuss the motivation and design of a novel data type called the priority register, which implements a domain-specific conflict detection and resolution scheme directly at the database side, while leaving open the option of additional reconciliation at the application level. Our approach uses the notion of an application-defined replacement ordering, and we show that a data type parameterized by such an order can provide an efficient solution for applications that demand domain-specific conflict resolution. We also describe a proof-of-concept implementation of the priority register inside Cassandra. The conclusion and perspectives of the thesis are summarized at the end.
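The core of LibRe, as described here, is a registry tracking which replica holds the freshest version of each key, so reads can be routed accordingly instead of querying a quorum. Below is a much-simplified, synchronous rendition of that idea; the real protocol updates the registry asynchronously and must handle failures.

```python
class RegistryCluster:
    """Toy registry-based routing in the spirit of LibRe: writes record the
    latest version holder; reads consult the registry, not the replicas."""

    def __init__(self, n_replicas: int):
        self.replicas = [dict() for _ in range(n_replicas)]  # key -> (ver, val)
        self.registry = {}  # key -> (latest_version, replica_id)

    def write(self, key, value, replica_id):
        ver = self.registry.get(key, (0, None))[0] + 1
        self.replicas[replica_id][key] = (ver, value)
        self.registry[key] = (ver, replica_id)   # asynchronous in the real system

    def read(self, key):
        ver, holder = self.registry[key]
        return self.replicas[holder][key][1]     # a fresh replica, by construction

c = RegistryCluster(3)
c.write("x", "v1", replica_id=0)
c.write("x", "v2", replica_id=2)   # the newer version lands on another replica
assert c.read("x") == "v2"         # the stale replica 0 is never consulted
```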
Flitti, Farid. "Techniques de réduction de données et analyse d'images multispectrales astronomiques par arbres de Markov." Phd thesis, Université Louis Pasteur - Strasbourg I, 2005. http://tel.archives-ouvertes.fr/tel-00156963.
Full textFlitti, Farid. "Techniques de réduction de données et analyse d'images multispéctrales astronomiques par arbres de Markov." Université Louis Pasteur (Strasbourg) (1971-2008), 2005. https://publication-theses.unistra.fr/public/theses_doctorat/2005/FLITTI_Farid_2005.pdf.
The development of astronomical multispectral sensors provides data of great richness. Nevertheless, the classification of multidimensional images is often limited by the Hughes phenomenon: as dimensionality increases, the number of model parameters grows and the precision of their estimates inevitably falls, so the quality of the segmentation dramatically decreases. It is thus imperative to discard redundant information in order to carry out robust segmentation or classification. In this thesis, we propose two methods for multispectral image dimensionality reduction: 1) band regrouping followed by local projections; 2) radio cube reduction by a mixture-of-Gaussians model. We also propose a joint reduction/segmentation scheme based on the regularization of the mixture of probabilistic principal component analyzers (MPPCA). For the segmentation task, we use a Bayesian approach based on hierarchical Markov models, namely the hidden Markov tree and the pairwise Markov tree. These models allow fast and exact computation of the a posteriori probabilities. For the data-driven term, we use three formulations: 1) the classical multidimensional Gaussian distribution; 2) the multidimensional generalized Gaussian distribution formulated using copula theory; 3) the likelihood of the probabilistic PCA model (within the framework of the regularized MPPCA). The major contribution of this work consists in introducing various hierarchical Markov models for multidimensional and multiresolution data segmentation. Their exploitation for data issued from wavelet analysis, adapted to the astronomical context, enabled us to develop new denoising and fusion techniques for multispectral astronomical images. All our algorithms are unsupervised and were validated on synthetic and real images.
Chen, Fati. "Réduction de l'encombrement visuel : Application à la visualisation et à l'exploration de données prosopographiques." Thesis, Université de Montpellier (2022-….), 2022. http://www.theses.fr/2022UMONS023.
Prosopography is used by historians to designate biographical records assembled in order to study the common characteristics of a group of historical actors through a collective analysis of their lives. Information visualization presents interesting perspectives for analyzing prosopographic data, and it is in this context that the work presented in this thesis is situated. First, we present the ProsoVis platform for analyzing and navigating through prosopographic data. We describe the different needs expressed and detail the design choices as well as the different views. We illustrate its use with the Siprojuris database, which contains data on the careers of law teachers from 1800 to 1950. Visualizing so much data induces visual cluttering problems. In this context, we address the problem of overlapping nodes in a graph. Even if approaches exist, it is difficult to compare them because their respective evaluations are not based on the same quality criteria. We therefore propose a study of the state-of-the-art algorithms, comparing their results on the same criteria. Finally, we address a similar visual cluttering problem within a map and propose an agglomerative spatial clustering approach, F-SAC, which is much faster than the state-of-the-art proposals while guaranteeing the same quality of results.
Teixeira, Ramachrisna. "Traitement global des observations méridiennes de l'Observatoire de Bordeaux." Bordeaux 1, 1990. http://www.theses.fr/1990BOR10618.
Full textTissot, Gilles. "Réduction de modèle et contrôle d'écoulements." Thesis, Poitiers, 2014. http://www.theses.fr/2014POIT2284/document.
Control of turbulent flows is still a challenge in aerodynamics. Indeed, the presence of a high number of active degrees of freedom and of complex dynamics leads to the need for strong modelling efforts for an efficient control design. During this PhD, various directions were followed in order to develop reduced-order models of flows in realistic situations and to use them for control. First, dynamic mode decomposition (DMD), and some of its variants, were exploited as reduced bases for extracting the dynamical behaviour of the flow at best. Thereafter, we were interested in 4D-variational data assimilation, which combines heterogeneous information coming from a dynamical model, observations, and a priori knowledge of the system. POD and DMD reduced-order models of a turbulent cylinder wake flow were successfully derived using data assimilation of PIV measurements. Finally, we considered flow control in a fluid-structure interaction context. After showing that the immersed body motion can be represented as an additional constraint in the reduced-order model, we stabilized a cylinder wake flow by vertical oscillations.
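Dynamic mode decomposition, the first tool mentioned, extracts spatial modes with associated frequencies and growth rates from a sequence of snapshots; the standard exact-DMD recipe fits in a few lines of NumPy. The synthetic data below merely stands in for the flow fields used in the thesis.

```python
import numpy as np

def dmd(snapshots: np.ndarray, r: int):
    """Exact DMD: snapshots is (n_space, n_time); returns r modes and their
    eigenvalues from the best-fit linear map X_{k+1} = A X_k."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]              # rank-r truncation
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s      # projected operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W                 # exact DMD modes
    return modes, eigvals

# Synthetic data: one oscillating and one decaying spatial structure.
x = np.linspace(0, np.pi, 64)[:, None]
t = np.linspace(0, 8, 120)[None, :]
data = np.sin(x) * np.exp(0.3j * t) + 0.5 * np.sin(3 * x) * np.exp(-0.2 * t)
modes, eigvals = dmd(data, r=2)
print("continuous-time eigenvalues:", np.log(eigvals) / (t[0, 1] - t[0, 0]))
```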
Laga, Arezki. "Optimisation des performance des logiciels de traitement de données sur les périphériques de stockage SSD." Thesis, Brest, 2018. http://www.theses.fr/2018BRES0087/document.
The growing volume of data poses a real challenge to data processing software like DBMSs (DataBase Management Systems) and to data storage infrastructure. New technologies have emerged in order to face the data volume challenge. In this thesis we considered the emerging external memories, namely flash-based storage devices known as SSDs (Solid State Drives). SSD storage devices offer a performance gain compared to traditional magnetic devices. However, SSDs present a new performance model, which calls for I/O cost optimization in data processing and management algorithms. We propose in this thesis an I/O cost model to evaluate data processing algorithms. This model mainly considers SSD I/O performance and the data distribution. We also propose a new external sorting algorithm, MONTRES, which includes optimizations that reduce the I/O cost when the volume of data exceeds the allocated memory space by an order of magnitude. We finally propose a data prefetching mechanism, Lynx, which uses a machine learning technique to predict and anticipate future accesses to external memory.
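A minimal version of the kind of asymmetric I/O cost model the thesis builds might charge reads and writes differently and penalize random writes, which is where SSDs deviate most from magnetic disks. The constants below are placeholders, not measured values, and the workloads are hypothetical.

```python
# Placeholder per-page costs (ms); real models are calibrated per device.
COST = {"read": 0.05, "seq_write": 0.08, "rand_write": 0.6}

def io_cost(n_reads: int, n_seq_writes: int, n_rand_writes: int) -> float:
    """Estimated I/O time of one algorithm run, in milliseconds."""
    return (n_reads * COST["read"]
            + n_seq_writes * COST["seq_write"]
            + n_rand_writes * COST["rand_write"])

# Comparing two hypothetical sort strategies over the same data volume:
naive = io_cost(n_reads=200_000, n_seq_writes=50_000, n_rand_writes=150_000)
flash_aware = io_cost(n_reads=220_000, n_seq_writes=190_000, n_rand_writes=10_000)
print(f"naive: {naive:.0f} ms, flash-aware: {flash_aware:.0f} ms")
```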
Borba Ribeiro, Heverson. "L'Exploitation de Codes Fontaines pour un Stockage Persistant des Données dans les Réseaux d'Overlay Structurés." Phd thesis, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00763284.
Full textCarpen-Amarie, Alexandra. "Utilisation de BlobSeer pour le stockage de données dans les Clouds: auto-adaptation, intégration, évaluation." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00696012.
Full textDandoush, Abdulhalim. "L'Analyse et l'Optimisation des Systèmes de Stockage de Données dans les Réseaux Pair-à-Pair." Phd thesis, Université de Nice Sophia-Antipolis, 2010. http://tel.archives-ouvertes.fr/tel-00470493.
Full textLoisel, Loïc. "Claquage Electrique et Optique d'Allotropes du Carbone : Mécanismes et Applications pour le Stockage de Données." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX021/document.
Today, data storage applications rely mainly on two types of materials: chalcogenides for optical storage (e.g. Blu-ray) and silicon for electronic storage (e.g. Flash memory). While these materials have proven to be the most efficient for widespread applications, both have limitations. Recently, with the rise of graphene, carbon allotropes have been studied both for their intrinsic properties and for applications; graphene and other carbon allotropes have very interesting electronic, thermal and mechanical properties that can make these materials more efficient than either chalcogenides or silicon for certain applications. In this thesis, we study the feasibility and potential of using carbon as a data storage material. First, we focus on developing optical data storage. We find that both continuous-wave and pulsed lasers can be used to induce reversible phase changes in carbon thin films, thus opening the way toward carbon-based data storage. Along the way, several phenomena are discovered, demonstrated and explained using advanced characterization techniques and thermal modelling. Second, we focus on electronic data storage by developing graphene-based memories that switch reliably between two well-separated resistance states for a large number of cycles. To assess the potential of this new technology, we characterize the switching mechanism and develop an electro-mechanical model that makes it possible to predict the best attainable performance: these memories would potentially be much faster than Flash memories while playing the same role (non-volatile storage).
Jaiman, Vikas. "Amélioration de la prédictibilité des performances pour les environnements de stockage de données dans les nuages." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM016/document.
Today, users of interactive services such as e-commerce and web search have increasingly high expectations on the performance and responsiveness of these services. Indeed, studies have shown that a slow service (even for short periods of time) directly impacts revenue. Enforcing predictable performance has thus been a priority of major service providers in the last decade. But avoiding latency variability in distributed storage systems is challenging, since end-user requests go through hundreds of servers and performance hiccups at any of these servers may inflate the observed latency. Even in well-provisioned systems, factors such as contention on shared resources or unbalanced load between servers affect the latencies of requests, and in particular the tail (95th and 99th percentiles) of their distribution. The goal of this thesis is to develop mechanisms for reducing latencies and achieving performance predictability in cloud data stores. One effective countermeasure for reducing tail latency in cloud data stores is to provide efficient replica selection algorithms. In replica selection, a request attempting to access a given piece of data (also called a value) identified by a unique key is directed to the presumably best replica. However, under heterogeneous workloads, these algorithms lead to increased latencies for requests with a short execution time that get scheduled behind requests with long execution times. We propose Héron, a replica selection algorithm that supports workloads with heterogeneous request execution times. We evaluate Héron in a cluster of machines using a synthetic dataset inspired by the Facebook dataset as well as two real datasets from Flickr and WikiMedia. Our results show that Héron outperforms state-of-the-art algorithms by reducing both median and tail latency by up to 41%. In the second contribution of the thesis, we focus on multiget workloads to reduce latency in cloud data stores. The challenge is to estimate the bottleneck operations and schedule them on uncoordinated backend servers with minimal overhead. To reach this objective, we present TailX, a task-aware multiget scheduling algorithm that reduces tail latencies under heterogeneous workloads. We implement TailX in Cassandra, a widely used key-value store. The result is improved overall performance of the cloud data store for a wide variety of heterogeneous workloads.
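A simple way to see why execution-time awareness matters in replica selection: route each request to the replica with the least pending estimated work, so short requests stop queueing behind long ones. This is a generic least-work heuristic for illustration only; Héron's actual policy is more involved.

```python
class Replica:
    def __init__(self, name):
        self.name = name
        self.pending = 0.0          # sum of estimated service times queued

def select(replicas, est_service_time):
    """Send the request to the replica whose queue would finish earliest."""
    best = min(replicas, key=lambda r: r.pending)
    best.pending += est_service_time
    return best

replicas = [Replica("r1"), Replica("r2"), Replica("r3")]
for est in [5.0, 0.1, 0.1, 0.1, 4.0, 0.2]:   # heterogeneous service times
    print(f"request({est:>3}) -> {select(replicas, est).name}")
```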
Soyez, Olivier. "Stockage dans les systèmes pair à pair." Phd thesis, Université de Picardie Jules Verne, 2005. http://tel.archives-ouvertes.fr/tel-00011443.
As a first step, we created a prototype, Us, and designed a user interface, named UsFS, of the file-system type. A data journaling mechanism is included in UsFS.
We then studied data distributions within the Us network. The goal of these distributions is to minimize the disturbance caused to each peer by the reconstruction process. Finally, we extended our distribution scheme to handle the dynamic behavior of peers and to take failure correlations into account.
Pasquier, Nicolas. "Data Mining : algorithmes d'extraction et de réduction des règles d'association dans les bases de données." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2000. http://tel.archives-ouvertes.fr/tel-00467764.
Full textSoler, Maxime. "Réduction et comparaison de structures d'intérêt dans des jeux de données massifs par analyse topologique." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS364.
In this thesis, we propose different methods, based on topological data analysis, to address modern problems concerning the increasing difficulty in the analysis of scientific data. In the case of scalar data defined on geometrical domains, extracting meaningful knowledge from static data, then time-varying data, then ensembles of time-varying data proves increasingly challenging. Our approaches for the reduction and analysis of such data are based on the idea of defining structures of interest in scalar fields as topological features. In a first effort to address data volume growth, we propose a new lossy compression scheme which offers strong topological guarantees, allowing topological features to be preserved throughout compression. The approach is shown to yield high compression factors in practice. Extensions are proposed to offer additional control over the geometrical error. We then target time-varying data by designing a new method for tracking topological features over time, based on topological metrics. We extend the metrics in order to overcome robustness and performance limitations, and we propose a new, efficient way to compute them, gaining orders-of-magnitude speedups over state-of-the-art approaches. Finally, we apply and adapt our methods to ensemble data related to reservoir simulation, for modeling viscous fingering in porous media. We show how to capture viscous fingers with topological features, adapt topological metrics to capture discrepancies between simulation runs and a ground truth, evaluate the proposed metrics with feedback from experts, and implement an in-situ ranking framework for rating the fidelity of simulation runs.
Benaceur, Amina. "Réduction de modèles en thermo-mécanique." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1140/document.
This thesis introduces three new developments of the reduced basis method (RB) and the empirical interpolation method (EIM) for nonlinear problems. The first contribution is a new methodology, the Progressive RB-EIM (PREIM), which aims at reducing the cost of the phase during which the reduced model is constructed, without compromising the accuracy of the final RB approximation. The idea is to gradually enrich the EIM approximation and the RB space, in contrast to the standard approach where both constructions are separate. The second contribution is related to RB for variational inequalities with nonlinear constraints. We employ an RB-EIM combination to treat the nonlinear constraint. Also, we build a reduced basis for the Lagrange multipliers via a hierarchical algorithm that preserves the non-negativity of the basis vectors. We apply this strategy to elastic frictionless contact with non-matching meshes. Finally, the third contribution focuses on model reduction with data assimilation. A dedicated method has been introduced in the literature to combine numerical models with experimental measurements. We extend the method to a time-dependent framework using a POD-greedy algorithm in order to build accurate reduced spaces for all the time steps. Besides, we devise a new algorithm that produces better reduced spaces while minimizing the number of measurements required for the final reduced problem.
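The POD step underlying the POD-greedy algorithm mentioned at the end builds, from time-trajectory snapshots, a small orthonormal basis capturing most of the energy; it is a plain SVD truncation, sketched below on synthetic snapshots (the tolerance and test profile are illustrative).

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, tol: float = 1e-4) -> np.ndarray:
    """Orthonormal reduced basis from snapshot columns, truncated where the
    retained relative energy (cumulative squared singular values) reaches
    1 - tol."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

# Snapshots of a 1-D heat-like profile at several times (synthetic).
x = np.linspace(0, 1, 200)[:, None]
t = np.linspace(0.01, 1, 50)[None, :]
snaps = np.exp(-((x - 0.5) ** 2) / (0.05 * t))   # Gaussian widening in time
B = pod_basis(snaps)
print("reduced dimension:", B.shape[1], "out of", snaps.shape[1], "snapshots")
```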
Schaaf, Thomas. "Couplage inversion et changement d'échelle pour l'intégration des données dynamiques dans les modèles de réservoirs pétroliers." Paris 9, 2003. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2003PA090046.
Full textPamba, Capo-Chichi Medetonhan Shambhalla Eugène William. "Conception d’une architecture hiérarchique de réseau de capteurs pour le stockage et la compression de données." Besançon, 2010. http://www.theses.fr/2010BESA2031.
Recent advances in various areas related to micro-electronics, computer science and wireless networks have resulted in the development of new research topics. Sensor networks are one of them. The particularity of this research direction is the limited performance of nodes in terms of computation, memory and energy. The purpose of this thesis is the definition of a new hierarchical architecture of sensor networks usable in different contexts, taking into account the sensors' constraints and providing high-quality data, such as multimedia, to end-users. We present our hierarchical architecture with its different nodes and the wireless technologies that connect them. Because of the high energy cost of data transmission, we have developed two data compression algorithms in order to optimize the use of the channel by reducing the amount of data transmitted. We also present a solution for storing large amounts of data on nodes by integrating the FAT16 file system under TinyOS-2.x.
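The abstract does not detail the two compression algorithms, but the flavor of lightweight sensor-side compression can be shown with delta encoding followed by run-length encoding, a classic combination for slowly varying readings; this is illustrative only, not the thesis's algorithms.

```python
def delta_rle(samples: list) -> list:
    """Delta-encode successive readings, then run-length-encode the deltas;
    slowly varying signals collapse into a few (delta, count) pairs."""
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    out = []
    for d in deltas:
        if out and out[-1][0] == d:
            out[-1] = (d, out[-1][1] + 1)
        else:
            out.append((d, 1))
    return out

def decode(pairs: list) -> list:
    deltas = [d for d, n in pairs for _ in range(n)]
    samples, acc = [], 0
    for d in deltas:
        acc += d
        samples.append(acc)
    return samples

readings = [20, 20, 20, 21, 21, 21, 21, 22, 22, 22]   # e.g. temperatures
packed = delta_rle(readings)
assert decode(packed) == readings
print(packed)   # [(20, 1), (0, 2), (1, 1), (0, 3), (1, 1), (0, 2)]
```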