Theses on the topic "Gestion des centres de données"
Consult the top 50 theses for your research on the topic "Gestion des centres de données".
Ostapenco, Vladimir. "Modélisation, évaluation et orchestration des leviers hétérogènes pour la gestion des centres de données cloud à grande échelle". Electronic Thesis or Diss., Lyon, École normale supérieure, 2024. http://www.theses.fr/2024ENSL0096.
The Information and Communication Technology (ICT) sector is constantly growing due to the increasing number of Internet users and the democratization of digital services, leading to a significant and ever-increasing carbon footprint. The share of greenhouse gas (GHG) emissions related to ICT is estimated between 1.8% and 3.9% of global GHG emissions in 2020, with a risk of almost doubling and reaching more than 7% by 2025. Data centers are at the center of this growth: they are estimated to be responsible for a significant portion of the ICT industry's global GHG emissions (ranging from 17% to 45% in 2020) and to have consumed approximately 1% of global electricity in 2018. Numerous levers exist that can help cloud providers and data center managers reduce some of these impacts. These levers can operate on multiple facets, such as turning off unused resources, slowing down resources to adapt to the real needs of applications and services, and optimizing or consolidating services to reduce the number of physical resources mobilized. These levers can be very heterogeneous and involve hardware, software layers or more logistical constraints at the data center scale. Activating, deactivating and orchestrating these heterogeneous levers on a large scale is a challenging task, allowing for potential gains in terms of reducing energy consumption and GHG emissions. In this thesis, we address the modeling, evaluation and orchestration of heterogeneous levers in the context of a large-scale cloud data center, proposing for the first time the combination of heterogeneous levers that are both technological (turning resources on/off, migration, slowdown) and logistical (installation of new machines, decommissioning, functional or geographical changes of IT resources). First, we propose a novel heterogeneous lever modeling approach covering lever impacts, costs and combinations, the concept of an environmental Gantt chart containing the levers applied to the cloud provider's infrastructure, and a lever management framework that aims to improve the overall energy and environmental performance of a cloud provider's entire infrastructure. Then, we focus on metric monitoring and collection, including energy and environmental data. We discuss power and energy measurement and conduct an experimental comparison of software-based power meters. Next, we study a single technological lever by conducting a thorough analysis of the Intel RAPL lever for power-capping purposes on a set of heterogeneous nodes for a variety of CPU- and memory-intensive workloads. Finally, we validate the proposed heterogeneous lever modeling approach at large scale by exploring three distinct scenarios that show the pertinence of the proposed approach in terms of resource management and potential impact reduction.
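The RAPL power-capping experiments mentioned in this abstract rely on an interface that is easy to sketch. The following is a minimal illustration, assuming a Linux host that exposes the standard powercap sysfs files for the package-0 RAPL domain (paths and domain layout vary across machines, and writing the limit requires root); it is not the thesis's tooling:

```python
# Minimal sketch of power capping via Intel RAPL through the Linux
# powercap sysfs interface (package-0 domain assumed; paths vary).
import time

RAPL = "/sys/class/powercap/intel-rapl:0"

def read_energy_uj() -> int:
    """Cumulative package energy counter, in microjoules."""
    with open(f"{RAPL}/energy_uj") as f:
        return int(f.read())

def set_power_cap(watts: float, constraint: int = 0) -> None:
    """Write a power limit in microwatts (requires root privileges)."""
    with open(f"{RAPL}/constraint_{constraint}_power_limit_uw", "w") as f:
        f.write(str(int(watts * 1_000_000)))

def average_power(interval_s: float = 1.0) -> float:
    """Estimate average package power over an interval, in watts."""
    e0, t0 = read_energy_uj(), time.time()
    time.sleep(interval_s)
    e1, t1 = read_energy_uj(), time.time()
    return (e1 - e0) / 1e6 / (t1 - t0)

if __name__ == "__main__":
    set_power_cap(90.0)                  # cap the package at 90 W
    print(f"{average_power():.1f} W")    # observe power under the cap
```

Running a workload while sampling `average_power` reproduces the shape of a capping experiment; a real study would additionally handle counter wraparound and multi-socket domains.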
Dellal, Ibrahim. "Gestion et exploitation de larges bases de connaissances en présence de données incomplètes et incertaines". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2019. http://www.theses.fr/2019ESMA0016/document.
In the era of digitalization, and with the emergence of several semantic Web applications, many new knowledge bases (KBs) are available on the Web. These KBs contain (named) entities and facts about these entities. They also contain the semantic classes of these entities and their mutual links. In addition, multiple KBs can be interconnected by their entities, forming the core of the linked data web. A distinctive feature of these KBs is that they contain millions to trillions of unreliable RDF triples. This uncertainty has multiple causes. It can result from the integration of data sources with various levels of intrinsic reliability, or it can be caused by considerations of confidentiality preservation. Furthermore, it may be due to factors related to the lack of information, the limits of measuring equipment or the evolution of information. The goal of this thesis is to improve the usability of modern systems aiming at exploiting uncertain KBs. In particular, this work proposes cooperative and intelligent techniques that can help the user in decision-making when a query returns unsatisfactory results in terms of quantity or reliability. First, we address the problem of failing RDF queries (i.e., queries that result in an empty set of responses). This type of response is frustrating and does not meet the user's expectations. The approach proposed to handle this problem is query-driven and offers a twofold advantage: (i) it provides the user with a rich explanation of the failure of a query by identifying the MFS (Minimal Failing Sub-queries), and (ii) it allows the computation of alternative queries called XSS (maXimal Succeeding Sub-queries), semantically close to the initial query, with non-empty answers. Moreover, from a user's point of view, this solution offers a high level of flexibility, given that several degrees of uncertainty can be simultaneously considered. In the second contribution, we study the dual problem (i.e., queries whose execution results in a very large set of responses). Our solution aims at reducing this set of responses to enable their analysis by the user. Counterparts of MFS and XSS have been defined. They allow the identification, on the one hand, of the causes of the problem and, on the other hand, of alternative queries whose results are of reasonable size and can therefore be directly and easily used in the decision-making process. All our propositions have been validated with a set of experiments on different uncertain and large-scale knowledge bases (WatDiv and LUBM). We have also used several triplestores to conduct our tests.
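To make the MFS/XSS terminology concrete, here is an illustrative brute-force sketch, not the thesis's algorithm (which is query-driven and far more efficient): it walks the lattice of triple-pattern subsets of a conjunctive query over a toy in-memory evaluator, which is deliberately simplified (each pattern is matched independently, keeping failure monotone):

```python
# Toy illustration of Minimal Failing Sub-queries (MFS) and
# maXimal Succeeding Sub-queries (XSS) over subsets of triple patterns.
from itertools import combinations

def evaluate(patterns, kb):
    """Toy evaluator: a pattern is (s, p, o) with None as a wildcard;
    succeeds if every pattern matches some triple in `kb`.
    (A real implementation would query a triplestore.)"""
    def matches(pat, triple):
        return all(p is None or p == t for p, t in zip(pat, triple))
    return all(any(matches(pat, t) for t in kb) for pat in patterns)

def mfs_xss(patterns, kb):
    """Exponential lattice walk over sub-queries; illustration only."""
    subs = [s for k in range(1, len(patterns) + 1)
            for s in combinations(patterns, k)]
    fails = {s: not evaluate(s, kb) for s in subs}
    mfs = [s for s in subs if fails[s]
           and all(not fails[t] for t in subs if set(t) < set(s))]
    xss = [s for s in subs if not fails[s]
           and all(fails[t] for t in subs if set(t) > set(s))]
    return mfs, xss

kb = {("a", "type", "Person"), ("a", "knows", "b")}
q = [("a", "type", "Person"), ("a", "knows", "b"), ("a", "age", None)]
print(mfs_xss(q, kb))   # MFS: the 'age' pattern; XSS: the other two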
Bessenay, Carole. "La gestion des données environnementales dans un espace naturel sensible : le système d'information géographique des Hautes-Chaumes foréziennes (Massif central)". Saint-Etienne, 1995. http://www.theses.fr/1995STET2024.
The object of this research is to present and apply to a specific territory the concepts and potentialities of geographical information systems, which can help understand the functioning and evolution processes of natural spaces. The GIS of the "Hautes-Chaumes foréziennes" underlines the interest of computerizing "ecological planning" methods, whose aim is to integrate the environment into management practices through the analysis of the specific aptitudes or sensitivities of a space. This study is based on the inventory and mapping of the principal natural and human characteristics of the Hautes-Chaumes: topography, vegetation, humidity, pastoral activities... The selection of several criteria allows the elaboration of a pluridisciplinary diagnosis which underlines the important sensitivity of this area. This diagnosis is then compared with an evaluation model of anthropic frequentation, so as to define a zoning of the most vulnerable sectors, which are both sensitive and subject to important pressures. This analysis should urge policy-makers to conceive differentiated management measures related to what is at stake in each area, in order to reconcile anthropic activities with respect for the aptitudes of this natural space.
Ho, Anh Dung. "Contribution à l'étude de supports logiciels de base de données pour un système de diagnostic appliqué aux centrales électronucléaires". Paris 7, 1985. http://www.theses.fr/1985PA07F064.
Alili, Hiba. "Intégration de données basée sur la qualité pour l'enrichissement des sources de données locales dans le Service Lake". Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED019.
In the Big Data era, companies are moving away from traditional data-warehouse solutions, whereby expensive and time-consuming ETL (Extract, Transform, Load) processes are used, towards data lakes in order to manage their increasingly growing data. Yet the stored knowledge in companies' databases, even in the constructed data lakes, can never be complete and up-to-date, because of the continuous production of data. Local data sources often need to be augmented and enriched with information coming from external data sources. Unfortunately, the data enrichment process is one of the manual labors undertaken by experts, who enrich data by adding information based on their expertise or select relevant data sources to complete missing information. Such work can be tedious, expensive and time-consuming, making it very promising for automation. We present in this work an active user-centric data integration approach to automatically enrich local data sources, in which the missing information is leveraged on the fly from web sources using data services. Accordingly, our approach enables users to query for information about concepts that are not defined in the data source schema. In doing so, we take into consideration a set of user preferences, such as the cost threshold and the response time necessary to compute the desired answers, while ensuring a good quality of the obtained results.
Petitdemange, Eva. "SAMUFLUX : une démarche outillée de diagnostic et d'amélioration à base de doubles numériques : application aux centres d'appels d'urgence de trois SAMU". Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2020. http://www.theses.fr/2020EMAC0012.
The demand for emergency medical services has been significant and increasing over the last decade. In a constrained medico-economic context, maintaining operational capacity is a strategic stake in the face of the risk of congestion and insufficient accessibility for the population. Recent events such as the COVID-19 pandemic show the limits of the current system in facing crisis situations. Reinforcement of human resources cannot be the only answer to this observation, and it becomes unavoidable to build new organizational models, while aiming at a quality of service allowing 99% of incoming calls to be answered in less than 60 seconds (90% in 15 s and 99% in 30 s, MARCUS report and HAS recommendation, October 2020). However, these models must take into account the great heterogeneity of EMS organizations and their operation. In the light of these findings, the research work presented in this manuscript aims to evaluate the organizational effectiveness and resilience of EMS in managing the flow of emergency telephone calls, in order to deal with both daily life and crisis situations. This evaluation allows us to propose and test new organizational schemes in order to make recommendations adapted to the particularities of emergency call centers. In a first part, we propose a tool-supported methodology for the diagnosis and improvement of emergency call centers. It can be broken down into two main parts: the study of data from emergency call centers, then the design and use of a digital twin. For each step of this methodology, we propose an associated tool. In a second part, we apply the first part of the methodology to our partner EMS data. The aim is to extract information and knowledge from the telephony data as well as from the business processes for handling emergency calls. The knowledge thus extracted makes it possible to design a digital twin that is close to the real behavior of the EMS. Finally, in a third part, we use the material produced previously to model and parameterize a digital twin deployed on a discrete-event simulation engine. It allows us to test several scenarios by playing on different call-management organizations. Thanks to this, we make recommendations on the types of organizations to adopt in order to improve the performance of call centers.
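As a toy illustration of the service-level targets quoted above, the fraction of calls answered within each threshold can be computed from answer-delay logs (the log format and sample values below are invented for the example):

```python
# Toy computation of call-taking service levels against the quoted targets.
def service_levels(delays_s, thresholds=(15, 30, 60)):
    """Fraction of calls answered within each threshold (seconds)."""
    n = len(delays_s)
    return {t: sum(d <= t for d in delays_s) / n for t in thresholds}

delays = [4, 12, 18, 9, 31, 7, 62, 14]    # invented sample, seconds
print(service_levels(delays))              # {15: 0.625, 30: 0.75, 60: 0.875}
```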
Segalini, Andrea. "Alternatives à la migration de machines virtuelles pour l'optimisation des ressources dans les centres informatiques hautement consolidés". Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4085.
Server virtualization is a technology of prime importance in contemporary data centers. Virtualization provides two key mechanisms, virtual instances and migration, that enable maximizing resource utilization to decrease capital expenses in a data center. In this thesis, we identified and studied two contexts where traditional virtual instance migration falls short of providing the optimal tools to make the best use of the resources available in a cluster: idle virtual machines and large-scale hypervisor upgrades. Idle virtual machines permanently lock the resources they are assigned, only to await incoming user requests. Indeed, while they are idle most of the time, they cannot be shut down, which would release resources for more demanding services. To address this issue, we propose SEaMLESS, a solution that leverages a novel VM-to-container migration that transforms idle Linux virtual machines into resource-less proxies. SEaMLESS intercepts new user requests while virtual machines are disabled, transparently resuming their execution upon new signs of activity. Furthermore, we propose an easy-to-adopt technique to disable virtual machines based on traditional hypervisor memory swapping. With our novel suspend-to-swap, we are able to release the majority of the memory and CPU seized by the idle instances, while still providing a fast resume. In the second part of the thesis, we tackle the problem of large-scale upgrades of the hypervisor software. Hypervisor upgrades often require a machine reboot, forcing data center administrators to evacuate the hosts, relocating the virtual machines elsewhere to protect their execution. As this evacuation is costly, both in terms of network transfers and of the spare resources needed in the data center, hypervisor upgrades hardly scale. We propose Hy-FiX and Multi-FiX, two in-place upgrade mechanisms that do not consume resources external to the host. Both solutions leverage a zero-copy migration of virtual machines within the host, preserving their execution state across the hypervisor upgrade. Hy-FiX and Multi-FiX achieve scalable upgrades, with only limited impact on the running instances.
Jlassi, Aymen. "Optimisation de la gestion des ressources sur une plate-forme informatique du type Big Data basée sur le logiciel Hadoop". Thesis, Tours, 2017. http://www.theses.fr/2017TOUR4042.
"Cyres-Group" is working to improve the response time of its Hadoop clusters and to optimize how resources are exploited in its data center. That is, the goals are to finish work as soon as possible and to reduce the latency experienced by each user of the system. Firstly, we decided to work on the scheduling problem in the Hadoop system. We consider the problem as one of scheduling a set of jobs on a homogeneous platform. Secondly, we decided to propose tools able to provide more flexibility in resource management in the data center and to ensure the integration of Hadoop in Cloud infrastructures without unacceptable loss of performance. The second part reviews the literature. We conclude that existing works use simple mathematical models that do not reflect the real problem: they ignore the main characteristics of the Hadoop software. Hence, we propose a new model that takes into account the most important aspects, such as resource management, the precedence relations among tasks, and data management and transfer. We begin with a simplistic model and consider the minimisation of Cmax as the objective function. We solve the model with the mathematical solver CPLEX and compute a lower bound. We propose the heuristic "LocFirst", which aims to minimize Cmax. In the third part, we consider a more realistic model of the scheduling problem: we aim to minimize the weighted sum of the following objectives: the weighted flow time (∑ wjCj) and the makespan (Cmax). We compute a lower bound and propose two heuristics to solve the problem.
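For reference, the bi-objective function named in this abstract is conventionally written as a weighted combination (the weights α and β are illustrative placeholders; the abstract only names the two terms):

```latex
\min \;\; \alpha \sum_{j} w_j C_j \;+\; \beta\, C_{\max},
\qquad C_{\max} = \max_{j} C_j ,
```

where \(C_j\) is the completion time of job \(j\) and \(w_j\) its weight.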
Medina, Marquez Alejandro. "L'analyse des données évolutives". Paris 9, 1985. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1985PA090022.
Dumont, Frédéric. "Analyses et préconisations pour les centres de données virtualisés". Thesis, Nantes, Ecole des Mines, 2016. http://www.theses.fr/2016EMNA0249/document.
This thesis presents two contributions. The first contribution is the study of key performance indicators for monitoring the activity of physical and virtual machines running on VMware and KVM hypervisors. This study highlights performance metrics and provides advanced analysis with the aim of preventing or detecting abnormalities related to the four main resources of a data center: CPU, memory, disk and network. The second contribution relates to a tool for detecting virtual machines with predetermined and/or atypical behaviors. The detection of these virtual machines has several objectives. First, to optimize the use of hardware resources by freeing up resources, removing unnecessary virtual machines or resizing oversized ones. Second, to optimize infrastructure performance by detecting undersized or overworked virtual machines and those with atypical activity.
Toure, Carine. "Capitalisation pérenne de connaissances industrielles : Vers des méthodes de conception incrémentales et itératives centrées sur l’activité". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI095/document.
In this research, we are interested in the question of the sustained use of knowledge management systems (KMS) in companies. KMS are the IT environments set up in companies to share and build common expertise among collaborators. Findings show that, despite the rigor employed by companies in the implementation of these KMS, the risk that knowledge management initiatives fail, particularly as regards the acceptance and continuous use of these environments by users, remains prevalent. The persistence of this fact in companies has motivated our interest in contributing to this general research question. As contributions to this problem, we have 1) identified from the state of the art four facets that are required to promote the perennial use of a platform managing knowledge; 2) proposed a theoretical model of mixed regulation that unifies tools for self-regulation and tools to support change, and allows the continuous implementation of the various factors that stimulate the sustainable use of KMS; 3) proposed a design methodology, adapted to this model and based on Agile concepts, which incorporates a mixed evaluation methodology of satisfaction and effective use, as well as HCI tools for the completion of the different iterations of our methodology; 4) implemented the methodology in a real context at the Société du Canal de Provence, which allowed us to test its feasibility and propose generic adjustments/recommendations to designers for its application in context. The tool resulting from our implementation was positively received by the users in terms of satisfaction and usage.
Politaki, Dimitra. "Vers la modélisation de clusters de centres de données vertes". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4116.
Data center clusters' energy consumption is rapidly increasing, making them the fastest-growing consumers of electricity worldwide. Renewable electricity sources, and especially solar energy as a clean and abundant source, can be used in many locations to cover their electricity needs and make them "green", namely fed by photovoltaics. This potential can be explored by predicting solar irradiance and assessing the capacity provision for data center clusters. In this thesis we develop stochastic models for solar energy: one at the surface of the Earth, and a second one which models the photovoltaic output current. We then compare them to the state-of-the-art on-off model and validate them against real data. We conclude that the solar irradiance model better captures the multiscale correlations and is suitable for small-scale cases. We then propose a new job life-cycle for a complex real cluster system, and a model for data center clusters that supports batch job submissions and considers both impatient and persistent customer behavior. To understand the essential computer cluster characteristics, we analyze in detail two traces of different workload types: the first one is the published, complex Google trace; the second, simpler one, which serves scientific purposes, is from the Nef cluster located at the Inria Sophia Antipolis research center. We then implement marmoteCore-Q, a tool for the simulation of a family of queueing models based on our multi-server model for data center clusters with abandonments and resubmissions.
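The on-off baseline mentioned in this abstract can be sketched as a two-state Markov chain; the toy version below uses invented transition probabilities rather than fitted ones:

```python
# Toy two-state (on-off) model of solar availability, the kind of
# baseline the abstract compares its stochastic irradiance models to.
import random

def on_off_trace(steps, p_on_off=0.1, p_off_on=0.2, seed=1):
    """Simulate `steps` time slots; 1 means the source is 'on'."""
    rng = random.Random(seed)
    state, trace = 1, []
    for _ in range(steps):
        trace.append(state)
        flip = p_on_off if state == 1 else p_off_on
        if rng.random() < flip:
            state = 1 - state
    return trace

trace = on_off_trace(24 * 7)           # one week of hourly slots
print(sum(trace) / len(trace))         # empirical fraction of 'on' time
```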
Ben Meftah, Salma. "Structuration sémantique de documents XML centrés-documents". Thesis, Toulouse 1, 2017. http://www.theses.fr/2017TOU10061/document.
The English abstract was not provided by the author.
Le Béchec, Antony. "Gestion, analyse et intégration des données transcriptomiques". Rennes 1, 2007. http://www.theses.fr/2007REN1S051.
Aiming at a better understanding of diseases, transcriptomic approaches allow the analysis of several thousands of genes in a single experiment. To date, international standardization initiatives have allowed the whole scientific community to use the large quantities of data generated with transcriptomic approaches, and a large number of algorithms are available to process and analyze the data sets. However, the major remaining challenge is to provide biological interpretations for these large sets of data. In particular, their integration with additional biological knowledge would certainly lead to an improved understanding of complex biological mechanisms. In my thesis work, I have developed a novel and evolutive environment for the management and analysis of transcriptomic data. Micro@rray Integrated Application (M@IA) allows for the management, processing and analysis of large-scale expression data sets. In addition, I elaborated a computational method to combine multiple data sources and represent differentially expressed gene networks as interaction graphs. Finally, I used a meta-analysis of gene expression data extracted from the literature to select and combine similar studies associated with the progression of liver cancer. In conclusion, this work provides a novel tool and original analytical methodologies, thus contributing to the emerging field of integrative biology, which is indispensable for a better understanding of complex pathophysiological processes.
Maniu, Silviu. "Gestion des données dans les réseaux sociaux". Thesis, Paris, ENST, 2012. http://www.theses.fr/2012ENST0053/document.
We address in this thesis some of the issues raised by the emergence of social applications on the Web, focusing on two important directions: efficient social search in online applications, and the inference of signed social links from interactions between users in collaborative Web applications. We start by considering social search in tagging (or bookmarking) applications. This problem requires a significant departure from existing, socially agnostic techniques. In a network-aware context, one can (and should) exploit the social links, which can indicate how users relate to the seeker and how much weight their tagging actions should have in the result build-up. We propose an algorithm that has the potential to scale to current applications, and validate it via extensive experiments. As social search applications can be thought of as part of a wider class of context-aware applications, we consider context-aware query optimization based on views, focusing on two important sub-problems. First, handling the possible differences in context between the various views and an input query leads to view results having uncertain scores, i.e., score ranges valid for the new context. As a consequence, current top-k algorithms are no longer directly applicable and need to be adapted to handle such uncertainty in object scores. Second, adapted view selection techniques are needed, which can leverage both the descriptions of queries and statistics over their results. Finally, we present an approach for inferring a signed network (a "web of trust") from user-generated content in Wikipedia. We investigate mechanisms by which relationships between Wikipedia contributors - in the form of signed directed links - can be inferred based on their interactions. Our study sheds light on the principles underlying a signed network that is captured by social interaction. We investigate whether this network over Wikipedia contributors represents indeed a plausible configuration of link signs, by studying its global and local network properties, and, at an application level, by assessing its impact in the classification of Wikipedia articles.
Benchkron, Said Soumia. "Bases de données et logiciels intégrés". Paris 9, 1985. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1985PA090025.
Castelltort, Arnaud. "Historisation de données dans les bases de données NoSQL orientées graphes". Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20076.
This thesis deals with data historization in the context of graphs. Graph data have been dealt with for many years, but their exploitation in information systems, especially in NoSQL engines, is recent. The emerging Big Data and 3V contexts (Variety, Volume, Velocity) have revealed the limits of classical relational databases. Historization, on its side, has long been considered as linked only with technical and backup issues, and more recently with decisional reasons (Business Intelligence). However, historization is now taking more and more importance in management applications. In this framework, graph databases, which are often used, have received little attention regarding historization. Our first contribution consists in studying the impact of historized data in management information systems. This analysis relies on the hypothesis that historization is taking more and more importance. Our second contribution aims at proposing an original model for managing historization in NoSQL graph databases. This proposition consists, on the one hand, in elaborating a unique and generic system for representing the history and, on the other hand, in proposing query features. We show that the system can support both simple and complex queries. Our contributions have been implemented and tested over synthetic and real databases.
Imbaud, Claire. "Influence des technologies de santé dans les parcours de soins des personnes âgées : quel plateau médico-technique ? : éléments de réponse par l’analyse des données de santé". Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2380/document.
This work examines the organization of the medical-technical offer and its fair distribution across territories, especially for elderly patients with multimorbidities. It is based on the assumption that there is room for a concept of small multi-disciplinary outpatient health facilities, with a small medical-technical platform, which would help to streamline and optimize care pathways. The method consisted, on the one hand, of studying in Germany the smaller community interdisciplinary health care centers (the MVZ), which have been in operation longer than the French multidisciplinary medical care centers; and, on the other hand, of analyzing the national health data to reveal both the existence of comorbidity-related groups and of homogeneous care-pathway groups. The results are positive, both in the network-science analysis and in the automation of representations of complex care pathways. They made it possible to create representative patterns of groups; to characterize the consumption of care in terms of medical devices and human resources; and to quantify the cumulative distances traveled and the costs accumulated by patients according to their place of residence and the health institutions to which they are sent. We obtain additional elements for the definition and labeling of small community health centers, satellites of larger hospitals. This work represents a particularly useful step, both conceptual and practical, for complex health data studies of the elderly.
Ali, Muhammad. "Stockage de données codées et allocation de tâches pour les centres de données à faible consommation d'énergie". Electronic Thesis or Diss., CY Cergy Paris Université, 2023. http://www.theses.fr/2023CYUN1243.
Data centers are responsible for a significant portion of global energy consumption. This consumption is expected to grow in the coming years, driven by the increasing demand for data center services. Therefore, the need for energy-efficient, low-carbon data center operations is growing rapidly. This research focuses on designing and implementing a low-carbon, energy-efficient data center powered by solar energy and hydrogen, granting it independence from the power grid. As a result, the data center is limited by an upper bound on energy consumption of 10 kWh. This maximum-energy constraint imposes several challenges on the design, energy usage, and sustainability of the data center. The work first contributes to designing a low-power-budget data center while respecting the overall energy constraint. We try to save energy through the right choice of hardware while keeping the performance of the data center intact. The second contribution of our work provides valuable protocols, such as lazy repair in distributed data storage, job placement, and power management techniques, to further reduce the data center's energy usage. With the combined efforts of the right choice of hardware, protocols, and techniques, we significantly reduce the overall energy consumption of the data center.
Malleret, Véronique. "Une approche de la performance des services fonctionnels : l'évaluation des centres de coûts discrétionnaires". Paris 9, 1993. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1993PA090042.
The thesis is divided into three parts. The first part is devoted to performance evaluation in functional departments. It describes the strategic importance of these departments, and defines their role and the nature of their contribution to the achievement of the organization's general goals. This first part then analyses the concept of performance, showing the specific difficulties this problem raises in functional departments, and suggests a performance evaluation model designed for functional activities. The second part of the thesis addresses the subject of discretionary activities, as covered in the management control literature. It proposes a definition and a typology of these activities. An analysis of the various processes and models of control suggested for these activities, and of the available performance evaluation methods, leads to a hypothesis relating the various characteristics of discretionary activities to specific listed methods. The third part presents the results of two empirical studies: one was a survey addressed to controllers; the second consisted of interviews with either controllers or functional managers. Both studies describe performance evaluation systems and practices in the functional departments of companies operating in France, as well as the respondents' perception of the problem.
Chardonnens, Anne. "La gestion des données d'autorité archivistiques dans le cadre du Web de données". Doctoral thesis, Universite Libre de Bruxelles, 2020. https://dipot.ulb.ac.be/dspace/bitstream/2013/315804/5/Contrat.pdf.
The subject of this thesis is the management of authority records for persons. The research was conducted in an archival context in transition, marked by the evolution of international standards of archival description and a shift towards the application of knowledge graphs. The aim of this thesis is to explore how the archival sector can benefit from the developments concerning Linked Data in order to ensure the sustainable management of authority records. Attention is devoted not only to the creation of the records and how they are made available, but also to their maintenance and their interlinking with other resources. The first part of this thesis addresses the state of the art of the developments concerning the international standards of archival description, as well as those regarding the Wikibase ecosystem. The second part presents an analysis of the possibilities and limits associated with an approach in which the free software Wikibase is used. The analysis is based on an empirical study carried out with data of the Study and Documentation Centre War and Contemporary Society (CegeSoma). It explores the options available to institutions that have limited resources and have not yet implemented Linked Data. Datasets containing information about people linked to the Second World War were used to examine the different stages involved in the publication of data as Linked Open Data. The experiment carried out in the second part of the thesis shows how a knowledge base driven by software such as Wikibase streamlines the creation of multilingual structured authority data. Examples illustrate how these entities can then be reused and enriched by using external data in interfaces aimed at the general public. This thesis highlights the possibilities of Wikibase, particularly in the context of data maintenance, without ignoring the limitations associated with its use. Due to its empirical nature and the recommendations it formulates, this thesis contributes to the efforts and reflections carried out within the framework of the transition of archival metadata.
Tos, Uras. "Réplication de données dans les systèmes de gestion de données à grande échelle". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30066/document.
In recent years, the growing popularity of large-scale applications, e.g. scientific experiments, the Internet of Things and social networking, has led to the generation of large volumes of data. The management of this data presents a significant challenge, as the data is heterogeneous and distributed on a large scale. In traditional systems, including distributed and parallel systems, peer-to-peer systems and grid systems, meeting objectives such as achieving acceptable performance while ensuring good availability of data are major challenges for service providers, especially when the data is distributed around the world. In this context, data replication, as a well-known technique, allows: (i) increased data availability, (ii) reduced data access costs, and (iii) improved fault-tolerance. However, replicating data on all nodes is an unrealistic solution, as it generates significant bandwidth consumption in addition to exhausting limited storage space. Defining good replication strategies is a solution to these problems. The data replication strategies proposed for the traditional systems mentioned above are intended to improve performance for the user; they are difficult to adapt to cloud systems. Indeed, cloud providers aim to generate a profit in addition to meeting tenant requirements. Meeting the performance expectations of the tenants without sacrificing the provider's profit, as well as managing resource elasticity with a pay-as-you-go pricing model, are the fundamentals of cloud systems. In this thesis, we propose a data replication strategy that satisfies the requirements of the tenant, such as performance, while guaranteeing the economic profit of the provider. Based on a cost model, we estimate the response time required to execute a distributed database query. Data replication is only considered if, for any query, the estimated response time exceeds a threshold previously set in the contract between the provider and the tenant. The planned replication must then also be economically beneficial to the provider. In this context, we propose an economic model that takes into account both the expenditures and the revenues of the provider during the execution of any particular database query. Once replication is decided upon, a heuristic placement approach is used to find placements for the new replicas in order to reduce access time. In addition, a dynamic adjustment of the number of replicas is adopted to allow elastic management of resources. The proposed strategy is validated in an experimental evaluation carried out in a simulation environment. Compared with another data replication strategy proposed for cloud systems, the analysis of the obtained results shows that the two compared strategies meet the performance objective for the tenant. Nevertheless, with our strategy, a data replica is created only if this replication is profitable for the provider.
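A minimal sketch of the decision rule this abstract describes follows; the function and parameter names are placeholders, and the cost/revenue estimates would come from the thesis's cost and economic models:

```python
# Sketch of the replicate-or-not rule: replicate only if the estimated
# response time breaches the SLA threshold AND replication stays
# profitable for the provider.
def should_replicate(est_response_s: float, sla_threshold_s: float,
                     expected_revenue: float, replication_cost: float) -> bool:
    if est_response_s <= sla_threshold_s:
        return False                     # SLA met: no replication needed
    return expected_revenue - replication_cost > 0

# Example: SLA of 2 s, estimated response of 3.2 s, profitable replication.
print(should_replicate(3.2, 2.0, expected_revenue=1.1, replication_cost=0.8))
```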
Duquet, Mario. "Gestion des données agrométéorologiques pour l'autoroute de l'information". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ61339.pdf.
Rhin, Christophe. "Modélisation et gestion de données géographiques multi-sources". Versailles-St Quentin en Yvelines, 1997. http://www.theses.fr/1997VERS0010.
Jarma, Yesid. "Protection de ressources dans des centres de données d'entreprise : architectures et protocoles". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00666232.
Hnayno, Mohamad. "Optimisation des performances énergétiques des centres de données : du composant au bâtiment". Electronic Thesis or Diss., Reims, 2023. http://www.theses.fr/2023REIMS021.
Data centers consume vast amounts of electrical energy to power their IT equipment, cooling systems, and supporting infrastructure. This high energy consumption contributes to the overall demand on the electrical grid and to the release of greenhouse gas emissions. By optimizing energy performance, data centers can reduce their electricity bills, overall operating costs and environmental impact. This includes implementing energy-efficient technologies, improving cooling systems, and adopting efficient power management practices. New cooling solutions, such as liquid cooling and indirect evaporative cooling, offer higher energy efficiency and can significantly reduce cooling-related energy consumption in data centres. In this work, two experimental investigations of new cooling topologies for information technology racks are conducted. In the first topology, the rack-cooling system is based on a combination of close-coupled cooling and direct-to-chip cooling. Five racks with operational servers were tested. Two temperature differences (15 K and 20 K) were validated for all the IT racks. The impact of these temperature-difference profiles on data-centre performance was analysed using three heat rejection systems under four climatic conditions for a 600 kW data centre. The impact of the water temperature profile on the partial power usage effectiveness and the water usage effectiveness of the data centre was analysed to optimise the indirect free-cooling system equipped with an evaporative cooling system, through two approaches: the rack temperature difference, and increasing the water inlet temperature of the data centre. In the second topology, an experimental investigation of a new single-phase immersion/liquid-cooling technique is conducted. The experimental setup tested the impact of three dielectric fluids, the effect of the water circuit configuration, and the server power/profile. Results suggest that the system cooling demand depends on the fluid's viscosity: as the viscosity increased from 4.6 to 9.8 mPa·s, the cooling performance decreased by approximately 6%. Moreover, all the IT server profiles were validated at various water inlet temperatures up to 45°C and various flow rates. The energy performance of this technique and of the previous one was compared. This technique showed a reduction in the DC electrical power consumption of at least 20.7% compared to the liquid-cooling system. The cooling performance of the air- and liquid-cooled systems and the proposed solution was compared computationally at the server level. When using the proposed solution, the energy consumed per server was reduced by at least 20% compared with the air-cooling system and 7% compared with the liquid-cooling system. In addition, a new liquid-cooling technology for 600 kW Uninterruptible Power Supply (UPS) units is investigated. This cooling architecture gives more opportunities to use free cooling as the main and unique cooling system for optimal data centres (DCs). Five thermal-hydraulic tests were conducted under different thermal conditions. A 20 K temperature-difference profile was validated with safe operation for all UPS electronic equipment, resulting in a thermal efficiency of 82.27%. The impact of decreasing the water flow rate and of increasing the water and air room temperatures was also analysed. A decrease in inlet water and air temperatures from 41°C to 32°C and from 47°C to 40°C respectively increases the thermal efficiency by 8.64%. Furthermore, an energy performance comparison is made between air-cooled and water-cooled UPS units at both the UPS and infrastructure levels.
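For orientation, the water-side temperature differences quoted above relate to the captured heat through the standard sensible-heat balance; and if thermal efficiency is read as the share of the electrical load captured by the water loop (an assumption — the thesis may define it differently), the relation is:

```latex
\dot{Q}_{\mathrm{water}} = \dot{m}\, c_p\, \Delta T,
\qquad
\eta_{\mathrm{thermal}} \approx \frac{\dot{Q}_{\mathrm{water}}}{P_{\mathrm{electrical}}}
```

where \(\dot{m}\) is the water mass flow rate, \(c_p\) its specific heat, and \(\Delta T\) the inlet-outlet temperature difference.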
Zelasco, José Francisco. "Gestion des données : contrôle de qualité des modèles numériques des bases de données géographiques". Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20232.
A Digital Surface Model (DSM) is a numerical surface model formed by a set of points, arranged as a grid, used to study some physical surface; Digital Elevation Models (DEM) are one case, but other applications are possible, such as a face or some anatomical organ. The study of the precision of these models, which is of particular interest for DEMs, has been the object of several studies in recent decades. Measuring the precision of a DSM with respect to another model of the same physical surface consists in estimating the expectation of the squared differences between pairs of points, called homologous points, one in each model, corresponding to the same feature of the physical surface. But these pairs are not easily discernible: the grids may not be coincident, and the differences between homologous points corresponding to benchmarks on the physical surface might be subject to special conditions, such as more careful measurements than at ordinary points, which imply a different precision. The procedure generally used to avoid these inconveniences has been to use the squares of the vertical distances between the models, which only addresses the vertical component of the error, thus giving a biased estimate when the surface is not horizontal. The Perpendicular Distance Evaluation Method (PDEM), which avoids this bias, provides estimates for the vertical and horizontal components of errors, and is thus a useful tool for detecting discrepancies in Digital Surface Models (DSM) such as DEMs. The solution includes a special reference to the simplification which arises when the error does not vary with horizontal direction. The PDEM is also assessed with DEMs obtained by means of SAR interferometry.
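For intuition (an illustrative textbook relation, not the thesis's full estimator): over a locally planar surface patch with slope angle θ, the perpendicular distance between homologous points relates to the vertical distance as

```latex
d_{\perp} = \Delta z \,\cos\theta ,
```

so squared vertical differences overestimate the true surface-normal error by a factor \(1/\cos^2\theta\) on sloped terrain — precisely the bias the PDEM avoids.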
Colin, Clément. "Gestion et visualisation multiscalaire du territoire au bâtiment : Application à la Gestion et Maintenance assistée par Ordinateur". Electronic Thesis or Diss., Lyon 2, 2024. http://www.theses.fr/2024LYO20010.
Cities and the objects that make them up, such as buildings and water, electricity and road networks, have increasingly precise digital twins, which play an important role in understanding territories. The growing use of Geographic Information Systems (GIS), Building Information Models (BIM) and City Information Models (CIM) has led to the creation of a large number of geospatial representations of these urban objects, made up of geometric and semantic data and structured by numerous standards. These representations provide a variety of thematic and spatial information describing what these objects are physically, functionally and operationally. A better understanding of these urban objects can be provided by applications enabling users to access, visualize and analyze them through these different representations. In this thesis, we focus on multiscale interactive web navigation and visualization of multiple representations of the same object. We consider various heterogeneous standards for representing the interior and exterior of a building and a city. Our first two contributions enable the creation of navigable and contextual views of these heterogeneous representations in a single web context, using approaches based on data integration methods. To this end, we propose a methodology and an open-source tool, Py3DTilers, for extracting, manipulating and visualizing the geometry of geospatial data, as well as a model-based semantic data integration methodology to ensure that all the information present in these data can be surfaced and understood by users. Our third contribution is the formalization of the concepts of Variant (an instance or set of instances representing the same entity) and Variant Identifier, to reference and navigate through a set of representations of the same object. Finally, our last contribution focuses on the choice of the geometric representation of an object to be displayed, depending on the user's 3D context. We propose a study of the levels of detail described in different geospatial data standards, as well as a metric describing the complexity of a geometric representation, to enable this choice. This thesis was carried out in partnership with Carl Software - Berger-Levrault, a publisher of computer-aided maintenance software and asset management solutions. Particular attention was paid to interoperability (use of standards), reusability (creation of a shared software architecture based on open-source tools) and reproducibility of the proposed solutions. This thesis aims to improve the understanding of equipment to facilitate its maintenance and management, by allowing the 3D visualization of equipment and the exploitation of the knowledge found in its various representations. This is achieved by establishing a natural link between the equipment representations existing in this domain and various geospatial data sources.
Sandoval, Gomez Maria Del Rosario. "Conception et réalisation du système de gestion de multibases de données MUSE : architecture de schéma multibase et gestion du catalogue des données". Paris 6, 1989. http://www.theses.fr/1989PA066657.
Liroz, Miguel. "Partitionnement dans les systèmes de gestion de données parallèles". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2013. http://tel.archives-ouvertes.fr/tel-01023039.
Petit, Loïc. "Gestion de flux de données pour l'observation de systèmes". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00849106.
Liroz-Gistau, Miguel. "Partitionnement dans les Systèmes de Gestion de Données Parallèles". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2013. http://tel.archives-ouvertes.fr/tel-00920615.
Gürgen, Levent. "Gestion à grande échelle de données de capteurs hétérogènes". Grenoble INPG, 2007. http://www.theses.fr/2007INPG0093.
This dissertation deals with the issues related to the scalable management of heterogeneous sensor data. In fact, sensors are becoming less and less expensive, and more and more numerous and heterogeneous. This naturally raises the scalability problem and the need for integrating data gathered from heterogeneous sensors. We propose a distributed and service-oriented architecture in which data processing tasks are distributed at several levels in the architecture. Data management functionalities are provided in terms of "services", in order to hide sensor heterogeneity behind generic services. We equally deal with system management issues in sensor farms, a subject not yet explored in this context.
Liroz, Gistau Miguel. "Partitionnement dans les systèmes de gestion de données parallèles". Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20117/document.
During the last years, the volume of data that is captured and generated has exploded. Advances in computer technologies, which provide cheap storage and increased computing capabilities, have allowed organizations to perform complex analysis on this data and to extract valuable knowledge from it. This trend has been very important not only for industry, but has also had a significant impact on science, where enhanced instruments and more complex simulations call for an efficient management of huge quantities of data. Parallel computing is a fundamental technique in the management of large quantities of data, as it leverages the concurrent utilization of multiple computing resources. To take advantage of parallel computing, we need efficient data partitioning techniques, which are in charge of dividing the data and assigning the partitions to the processing nodes. Data partitioning is a complex problem, as it has to consider different and often contradictory issues, such as data locality, load balancing and maximizing parallelism. In this thesis, we study the problem of data partitioning, particularly in continuously growing scientific parallel databases and in the MapReduce framework. In the case of scientific databases, we consider data partitioning in very large databases to which new data is appended continuously, e.g. astronomical applications. Existing approaches are limited, since the complexity of the workload and the continuous appends restrict the applicability of traditional approaches. We propose two partitioning algorithms that dynamically partition new data elements by a technique based on data affinity. Our algorithms enable us to obtain very good data partitions in a low execution time compared to traditional approaches. We also study how to improve the performance of the MapReduce framework using data partitioning techniques. In particular, we are interested in efficient partitioning of the input datasets to reduce the amount of data that has to be transferred in the shuffle phase. We design and implement a strategy which, by capturing the relationships between input tuples and intermediate keys, obtains an efficient partitioning that can be used to significantly reduce MapReduce's communication overhead.
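An illustrative sketch of the general idea (not the thesis's algorithm): route input blocks so that blocks producing the same intermediate keys land on the same node, shrinking the shuffle. The block-to-keys map is assumed to come from a prior profiling pass:

```python
# Affinity-aware assignment of input blocks to nodes, to cut shuffle
# traffic: co-locate blocks that emit overlapping intermediate keys.
from collections import defaultdict

def assign_blocks(block_keys: dict[str, set[str]], n_nodes: int) -> dict[str, int]:
    key_home: dict[str, int] = {}      # intermediate key -> owning node
    load = [0] * n_nodes               # blocks per node (crude balance)
    placement = {}
    # place key-heavy blocks first so they anchor their keys
    for block, keys in sorted(block_keys.items(), key=lambda kv: -len(kv[1])):
        votes = defaultdict(int)       # vote for nodes already owning keys
        for k in keys:
            if k in key_home:
                votes[key_home[k]] += 1
        node = max(votes, key=votes.get) if votes else load.index(min(load))
        placement[block] = node
        load[node] += 1
        for k in keys:
            key_home.setdefault(k, node)
    return placement

# b1 and b2 share key "y", so they are co-located; b3 balances the load.
print(assign_blocks({"b1": {"x", "y"}, "b2": {"y"}, "b3": {"z"}}, 2))
```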
Etien-Gnoan, N'Da Brigitte. "L'encadrement juridique de la gestion électronique des données médicales". Thesis, Lille 2, 2014. http://www.theses.fr/2014LIL20022/document.
The electronic management of medical data covers both the simple automated processing of personal data and the sharing and exchange of health data. Its legal framework is provided both by the rules common to the automated processing of all personal data and by those specific to the processing of medical data. This management, even if it is a source of savings, creates privacy protection issues, which the French government has tried to address by creating one of the best legal frameworks in the world in this field. However, major projects such as the personal health record are still waiting to be realized, and health law finds itself overtaken and driven by technological advances. The development of e-health disrupts the one-to-one relationship between the caregiver and the patient. The extension of patients' rights, shared responsibility, the increasing number of players and shared medical confidentiality pose new challenges that must now be reckoned with. Another crucial question is posed by the lack of harmonization of legislation, which increases the risks in the cross-border sharing of medical data.
Gueye, Modou. "Gestion de données de recommandation à très large échelle". Electronic Thesis or Diss., Paris, ENST, 2014. http://www.theses.fr/2014ENST0083.
In this thesis, we address the scalability problem of recommender systems. We propose accurate and scalable algorithms. We first consider the case of matrix factorization techniques in a dynamic context, where new ratings are continuously produced. In such a case, it is not possible to have an up-to-date model, due to the incompressible time needed to compute it. This happens even if a distributed technique is used for matrix factorization: at the least, the ratings produced during the model computation will be missing. Our solution reduces the loss of recommendation quality over time by introducing stable biases which track deviations in users' behavior. These biases are continuously updated with the new ratings, in order to maintain the quality of recommendations at a high level for a longer time. We also consider the context of online social networks and tag recommendation. We propose an algorithm that takes account of the popularity of tags and the opinions of the user's neighborhood. But, unlike common nearest-neighbor approaches, our algorithm does not rely on a fixed number of neighbors when computing a recommendation. We use a heuristic that bounds the network traversal in a way that allows computing the recommendations faster while preserving their quality. Finally, we propose a novel approach that improves the accuracy of recommendations for top-k algorithms. Instead of a fixed list size, we adjust the number of items to recommend in a way that optimizes the likelihood that all the recommended items will be chosen by the user, and find the best candidate sub-list to recommend to the user.
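A minimal sketch of the kind of incremental bias update described here; the update rule shown is the standard SGD step for baseline predictors, and the thesis's exact rule may differ:

```python
# Keep per-user / per-item biases fresh between full factorization runs,
# folding each new rating into the baseline predictor incrementally.
def update_biases(b_u, b_i, mu, rating, lr=0.01, reg=0.02):
    """One SGD step on the baseline prediction mu + b_u + b_i."""
    err = rating - (mu + b_u + b_i)
    b_u += lr * (err - reg * b_u)
    b_i += lr * (err - reg * b_i)
    return b_u, b_i

mu = 3.5                        # global mean from the last batch build
b_u, b_i = 0.0, 0.0
for r in [4.0, 5.0, 4.5]:       # ratings arriving after the model build
    b_u, b_i = update_biases(b_u, b_i, mu, r)
print(b_u, b_i)                 # drifted biases, used until the next rebuild
```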
Leclercq, Claude. "Un problème de système expert temps réel : la gestion de centres informatiques". Lille 1, 1990. http://www.theses.fr/1990LIL10143.
Texto completo
Djellalil, Jilani. "Conception et réalisation de multibases de données". Lyon 3, 1989. http://www.theses.fr/1989LYO3A003.
Texto completo
Faye, David Célestin. "Médiation de données sémantique dans SenPeer, un système pair-à-pair de gestion de données". Phd thesis, Université de Nantes, 2007. http://tel.archives-ouvertes.fr/tel-00481311.
Texto completo
Cho, Choong-Ho. "Structuration des données et caractérisation des ordonnancements admissibles des systèmes de production". Lyon, INSA, 1989. http://www.theses.fr/1989ISAL0053.
Texto completo
This work deals, on the one hand, with the specification and modelling of databases for scheduling problems in a hierarchical manufacturing-system architecture and, on the other hand, with the analytical characterization of the set of feasible solutions of decision-support scheduling problems for three different types of workshop: first, a flowshop made up of several machines (the sequence of operations is the same for all jobs), where the key criterion is set-up times, under set (task-group) and potential constraints; second, a single-machine shop, under given job due-date constraints; finally, a jobshop, under the three previous kinds of constraint (sets, potentials and due dates). One original piece of research concerns a new structure, PQR trees, used to characterize the set of feasible task sequences.
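To illustrate how a tree of this family can encode a whole set of admissible sequences compactly, here is a toy sketch using the classical PQ-tree semantics (a P node allows any permutation of its children, a Q node only its given order or its reverse); the thesis's PQR trees extend this idea, and the encoding below is our own illustrative assumption, not the structure defined in the thesis.

```python
# Toy sketch (our own) of how a PQ/PQR-style tree compactly encodes a
# family of admissible task sequences: a P node allows any permutation
# of its children, a Q node only its given order or the reverse.
from itertools import permutations

def sequences(node):
    """Yield every admissible ordering of the leaves under `node`."""
    kind = node[0]
    if kind == "leaf":
        yield (node[1],)
        return
    children = node[1]
    if kind == "P":                       # children in any order
        child_orders = permutations(children)
    else:                                 # "Q": fixed order or reversed
        child_orders = (children, children[::-1])
    for order in child_orders:
        # Cartesian product of each child's own admissible sequences.
        partial = [()]
        for child in order:
            partial = [p + s for p in partial for s in sequences(child)]
        yield from partial

# Example: tasks a and b may commute, but the pair must stay next to c.
tree = ("Q", [("P", [("leaf", "a"), ("leaf", "b")]), ("leaf", "c")])
for seq in sequences(tree):
    print(seq)
# -> ('a','b','c'), ('b','a','c'), ('c','a','b'), ('c','b','a')
```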
Michaux, Valéry. "Compétence collective et systèmes d'information : cinq cas de coordination dans les centres de contacts". Nantes, 2003. http://www.theses.fr/2003NANT4011.
Texto completo
The aim of this study is to clarify a concept which, although increasingly used in management, remains ambiguous: "collective competency". The point was, in particular, to take into account actors who do not necessarily share the same unit of time and place. A qualitative, multiple case study research strategy was chosen, allowing both the testing of research hypotheses and, where necessary, the emergence of elements not initially anticipated: five site-level analyses followed by a comparative analysis across the sites. This study, conducted in the field of customer contact centres, leads: to a refutation of the idea, first considered, of collective competency as a collective ability to produce a common result with a given level of collective efficiency; to an a posteriori theoretical analysis grid identifying, on one side, the nature of the factors on which the ability of collectives to coordinate their work is based and, on the other, the different roles played by information systems in coordination; to the introduction of the notion of the efficiency of socio-organisational devices or arrangements to express the ability of collectives to coordinate their work; and to a redefinition and repositioning of collective competency as a generic concept.
Guégot, Françoise. "Gestion d'une base de données mixte, texte et image : application à la gestion médicale dentaire". Paris 9, 1989. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1989PA090042.
Texto completo
In the framework of organizational data processing, we have shown, using a real example (a dental surgery practice), that image display constitutes a bonus that may prove decisive in decision making. This led us to lay down the principles governing a mixed-data database management system. A text database is built with an S.I.A.D. (decision-support system) generator, which also performs the necessary processing of that data. An image database is established in parallel, based on an inventory of the various image-processing techniques. Finally, the two bases are connected to form the mixed-data management system.
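As a minimal sketch of this mixed-base architecture (our own construction: SQLite for the text base and a file-system image store are both illustrative assumptions, not the thesis's system), the two bases can be joined by a shared record key:

```python
# Minimal sketch of a "mixed" text-and-image base: a relational text
# base and an image store side by side, joined by a shared key.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient (
    id        INTEGER PRIMARY KEY,
    name      TEXT,
    diagnosis TEXT
);
CREATE TABLE radiograph (
    id         INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(id),
    taken_on   TEXT,
    path       TEXT   -- image kept on disk; only its path in the base
);
""")
con.execute("INSERT INTO patient VALUES (1, 'Dupont', 'caries, tooth 26')")
con.execute("INSERT INTO radiograph VALUES (1, 1, '1989-03-14', 'images/1/26.png')")

# A 'mixed' query: the textual record plus the images to display with it.
row = con.execute("""
    SELECT p.name, p.diagnosis, r.path
    FROM patient p JOIN radiograph r ON r.patient_id = p.id
    WHERE p.id = ?""", (1,)).fetchone()
print(row)   # the UI would load row[2] from the image store for display
```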
Le, Mahec G. "Gestion des bases de données biologiques sur grilles de calculs". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2008. http://tel.archives-ouvertes.fr/tel-00462306.
Texto completo
Pierkot, Christelle. "Gestion de la Mise à Jour de Données Géographiques Répliquées". Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00366442.
Texto completo
The military institution also uses spatial data for decision support. At each stage of a mission, geographic information of all kinds (digital data, paper maps, aerial photographs, etc.) is used to help units make strategic choices. Moreover, the use of communication networks fosters the sharing and exchange of spatial data between producers and users located in different places. The information is not centralized: the data is replicated at each site, and users may occasionally be disconnected from the network, for example when a mobile unit takes measurements in the field.
The main problem is therefore the management, in a military context, of a collaborative application allowing the asynchronous, symmetric updating of replicated geographic data under an optimistic, weakly consistent protocol. This requires defining a consistency model appropriate to the military context, a mechanism for detecting conflicting updates tied to the type of data manipulated, and procedures for reconciling divergent writes adapted to the needs of the units taking part in the mission.
A review of related work shows that several protocols have been defined in the systems community (Cederqvist 2001; Kermarrec 2001) and the database community (Oracle 2003; Seshadri 2000) to manage data replication. However, the proposed solutions are often tailored to the specific needs of one application and are therefore not reusable in a different context, or they assume the existence of a reference server centralizing the data. The mechanisms used in geographic information systems to manage data and updates are not appropriate to our study either, since they assume that the data is locked against other users until the updates have been integrated (the check-in/check-out approach, ESRI 2004), or they use a centralized server holding the reference data (versioning, Cellary 1990).
Our objective is therefore to propose solutions allowing the consistent and, as far as possible, automatic integration of spatial data updates in an optimistic, multi-master, asynchronous replication environment.
We propose a global strategy for integrating spatial updates, based on consistency checking coupled with update sessions. The originality of this strategy lies in the fact that it relies on metadata to provide reconciliation solutions adapted to the particular context of a military mission.
The contribution of this thesis is twofold. First, it belongs to the field of spatial data update management, a field that remains very active owing to the complexity and heterogeneity of the data (we nevertheless limit our study to vector geographic data) and to the relative youth of the work on the subject. Second, it belongs to the field of consistency management for data replicated under an optimistic protocol, specifying in particular new algorithms for the detection and reconciliation of conflicting data in the application domain of geographic information.
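As an illustration of asynchronous, multi-master conflict handling, here is a minimal sketch using version vectors, a common detection mechanism; the reconciliation rule shown (a metadata priority) is a deliberately simplistic stand-in for the metadata-driven procedures the thesis proposes, and all names are our own assumptions.

```python
# Hypothetical sketch: detect conflicting updates on a replicated
# object with version vectors; on concurrency, fall back to a trivial
# metadata-based rule standing in for richer reconciliation procedures.
def dominates(a, b):
    """True if version vector a has seen every event b has."""
    return all(a.get(site, 0) >= n for site, n in b.items())

def reconcile(replica_a, replica_b):
    """Each replica is a (version_vector, value, metadata) triple."""
    va, xa, ma = replica_a
    vb, xb, mb = replica_b
    if dominates(va, vb):
        return replica_a                 # a already includes b's update
    if dominates(vb, va):
        return replica_b                 # b is strictly newer
    # Concurrent updates: a real system would apply reconciliation
    # procedures chosen from the metadata (feature type, mission, ...).
    merged = {s: max(va.get(s, 0), vb.get(s, 0)) for s in va.keys() | vb.keys()}
    winner = xa if ma["priority"] >= mb["priority"] else xb
    return (merged, winner, max(ma, mb, key=lambda m: m["priority"]))

# Two units edit the same road segment while disconnected:
a = ({"unit1": 2, "unit2": 1}, "road: paved", {"priority": 2})
b = ({"unit1": 1, "unit2": 2}, "road: destroyed", {"priority": 5})
print(reconcile(a, b))  # concurrent -> metadata decides: "road: destroyed"
```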
Gagnon, Bertrand. "Gestion d'information sur les procédés thermiques par base de données". Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65447.
Texto completo
Antoine, Émilien. "Gestion des données distribuées avec le langage de règles: Webdamlog". Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00908155.
Texto completo
Le, Mahec Gaël. "Gestion des bases de données biologiques sur grilles de calcul". Clermont-Ferrand 2, 2008. http://www.theses.fr/2008CLF21891.
Texto completo
Cheballah, Kamal. "Aides à la gestion des données techniques des produits industriels". Ecully, Ecole centrale de Lyon, 1992. http://www.theses.fr/1992ECDL0003.
Texto completo
Cobéna, Grégory. "Gestion des changements pour les données semi-structurés du Web". Palaiseau, Ecole polytechnique, 2003. http://www.theses.fr/2003EPXX0027.
Texto completo