Dissertations / Theses on the topic "Centres de traitement informatique – Environnement"
Consult the 50 best dissertations for your research on the topic "Centres de traitement informatique – Environnement".
Explore dissertations on a wide variety of disciplines and organise your bibliography correctly.
Hnayno, Mohamad. "Optimisation des performances énergétiques des centres de données : du composant au bâtiment". Electronic Thesis or Diss., Reims, 2023. http://www.theses.fr/2023REIMS021.
Data centers consume vast amounts of electrical energy to power their IT equipment, cooling systems, and supporting infrastructure. This high energy consumption adds to the overall demand on the electrical grid and to greenhouse gas emissions. By optimizing energy performance, data centers can reduce their electricity bills, overall operating costs, and environmental impact. This includes implementing energy-efficient technologies, improving cooling systems, and adopting efficient power-management practices. New cooling solutions, such as liquid cooling and indirect evaporative cooling, offer higher energy efficiency and can significantly reduce the cooling-related energy consumption of data centres. In this work, two experimental investigations of new cooling topologies for information technology (IT) racks are conducted. In the first topology, the rack-cooling system is based on a combination of close-coupled cooling and direct-to-chip cooling. Five racks with operational servers were tested, and two temperature-difference profiles (15 K and 20 K) were validated for all the IT racks. The impact of these temperature-difference profiles on data-centre performance was analysed using three heat-rejection systems under four climatic conditions for a 600 kW data centre. The impact of the water temperature profile on the partial power usage effectiveness and water usage effectiveness of the data centre was analysed in order to optimise the indirect free-cooling system equipped with an evaporative cooling system, through two approaches: increasing the rack temperature difference and increasing the water inlet temperature of the data centre. In the second topology, an experimental investigation of a new single-phase immersion/liquid-cooling technique is conducted. The experimental setup tested the impact of three dielectric fluids, the effect of the water-circuit configuration, and the server power/profile. Results suggest that the system cooling demand depends on the fluid's viscosity: as the viscosity increased from 4.6 to 9.8 mPa·s, the cooling performance decreased by approximately 6 %. Moreover, all the IT server profiles were validated at various flow rates and water inlet temperatures up to 45 °C. The energy performance of this technique and the previous one was compared: the immersion technique reduced the data-centre electrical power consumption by at least 20.7 % compared to the liquid-cooling system. The cooling performance of the air-cooled and liquid-cooled systems and the proposed solution was compared computationally at the server level; when using the proposed solution, the energy consumed per server was reduced by at least 20 % compared with the air-cooling system and 7 % compared with the liquid-cooling system. In addition, a new liquid-cooling technology for 600 kW Uninterruptible Power Supply (UPS) units is presented. This cooling architecture gives more opportunities to use free cooling as the main and only cooling system for optimal data centres (DCs). Five thermal-hydraulic tests were conducted under different thermal conditions. A 20 K temperature-difference profile was validated with safe operation of all UPS electronic equipment, resulting in a thermal efficiency of 82.27 %. The impact of decreasing the water flow rate and increasing the water and room-air temperatures was also analysed: a decrease in inlet water and air temperatures from 41 °C to 32 °C and from 47 °C to 40 °C, respectively, increases the thermal efficiency by 8.64 %. Furthermore, an energy-performance comparison is made between air-cooled and water-cooled UPS units at both the UPS and infrastructure levels.
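For reference, the effectiveness metrics named in this abstract are the standard industry ratios (the thesis's exact definitions may add detail): partial PUE restricts the accounting to one subsystem, here cooling, and WUE is usually expressed in litres of water per kilowatt-hour of IT energy.

\mathrm{PUE} = \frac{E_{\mathrm{facility}}}{E_{\mathrm{IT}}}, \qquad
\mathrm{pPUE}_{\mathrm{cooling}} = \frac{E_{\mathrm{IT}} + E_{\mathrm{cooling}}}{E_{\mathrm{IT}}}, \qquad
\mathrm{WUE} = \frac{V_{\mathrm{water}}}{E_{\mathrm{IT}}}\;\big[\mathrm{L/kWh}\big]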
Moualla, Ghada. "Virtualisation résiliente des fonctions réseau pour les centres de données et les environnements décentralisés". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4061.
Traditional networks are based on an ever-growing variety of network functions that run on proprietary hardware devices called middleboxes. Designing these vendor-specific appliances and deploying them is complex, costly and time-consuming. Moreover, with ever-increasing and heterogeneous short-term service requirements, service providers have to scale up their physical infrastructure periodically, which results in high CAPEX and OPEX. This traditional paradigm leads to network ossification and high complexity in network management and service provisioning when addressing emerging use cases. Network Function Virtualization (NFV) has attracted notable attention as a promising paradigm to tackle such challenges by decoupling network functions from the underlying proprietary hardware and implementing them as software, named Virtual Network Functions (VNFs), able to run on inexpensive commodity hardware. These VNFs can be arranged and chained together in a predefined order, the so-called Service Function Chaining (SFC), to provide end-to-end services. Despite all the benefits associated with the new paradigm, NFV comes with the challenge of placing the functions of the users' requested services within the physical network while providing the same resiliency as if a dedicated infrastructure were used, given that commodity hardware is less reliable than dedicated hardware. This problem becomes particularly challenging when service requests have to be fulfilled as soon as they arise (i.e., in an online manner). In light of these challenges, we propose new solutions to the problem of online SFC placement that ensure the robustness of the placed services against physical failures in data-center (DC) topologies. Although recovery solutions exist, they still require time during which the impacted services are unavailable, whereas smart placement decisions can avoid the need to react to simple network failures. First, we provide a comprehensive study of how placement choices affect the overall robustness of the placed services. Based on this study we propose a deterministic solution applicable when the service provider has full knowledge of and control over the infrastructure. Thereafter, we move from this deterministic solution to a stochastic approach for the case where SFCs are requested by tenants oblivious to the physical DC network, where users only have to provide the SFC they want to place and the required availability level (e.g., 5 nines). We simulated several solutions, and the evaluation results show the effectiveness of our algorithms and the feasibility of our propositions in very-large-scale data center topologies, which makes it possible to use them in a production environment. All these solutions work well in trusted environments with a central authority that controls the infrastructure. However, in some cases many enterprises need to collaborate in order to run tenants' applications, e.g., MapReduce applications. For such scenarios, we move to a completely untrusted, decentralized environment with no trust guarantees, in the presence not only of Byzantine nodes but also of rational nodes. We considered the case of MapReduce applications in such an environment and present an adapted MapReduce framework called MARS, which is able to work correctly in this context without the need for any trusted third party. Our simulations show that MARS guarantees execution integrity in MapReduce, scaling linearly with the number of Byzantine nodes in the system.
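To illustrate the availability arithmetic behind such placement decisions, here is a sketch under textbook reliability assumptions (independent failures, per-VNF replication); it is not Moualla's algorithm. A chain is up only if every function has at least one live replica.

from math import prod

def sfc_availability(vnf_availability, replicas):
    # Chain works only if every VNF has at least one live replica:
    # series composition of parallel-redundant stages.
    return prod(1 - (1 - a) ** k for a, k in zip(vnf_availability, replicas))

def replicas_for_target(a, target):
    # Smallest k with 1 - (1 - a)**k >= target for one VNF.
    k = 1
    while 1 - (1 - a) ** k < target:
        k += 1
    return k

# Commodity servers at 99% per replica, "5 nines" required per function:
k = replicas_for_target(0.99, 0.99999)            # -> 3
print(k, sfc_availability([0.99] * 3, [k] * 3))   # chain of 3 VNFs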
Rocha Barbosa, Cassandra. "Coordination et ordonnancement de tâches à grains fins entre environnements d'exécution HPC". Electronic Thesis or Diss., Reims, 2023. http://www.theses.fr/2023REIMS016.
Supercomputers are becoming more and more complex to use. This is why so-called hybrid programming models, MPI + X, are being adopted in applications. These models allow more efficient use of a supercomputer, but they also create new problems during the execution of applications. More specifically, we study three problems related to MPI + X programming: the progression of non-blocking MPI communications within the X environment, and two types of possible imbalance in MPI + X applications, the first between MPI processes and the second within an MPI process, i.e., imbalance within X. For the MPI communication-progression problem, a solution for an X environment based on recursive tasks is first presented, using progress-task insertion in the X environment. For the imbalance between MPI processes, a solution for resource rebalancing within a node is presented. Finally, for the imbalance within the X environment, a solution that uses the imbalance to run a second application is also presented.
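The progress-task idea can be sketched as follows; this is a minimal illustration using mpi4py as a stand-in runtime (not the thesis's code), where a lightweight task repeatedly tests pending non-blocking requests so communication advances while compute tasks run.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

pending = []
if rank == 0:
    pending.append(comm.isend(list(range(1000)), dest=1, tag=7))
elif rank == 1:
    pending.append(comm.irecv(source=0, tag=7))

def progress_task(requests):
    # Non-blocking poll: each test() call advances MPI progress,
    # so messages move while the runtime schedules compute tasks.
    flags = [r.test()[0] for r in requests]
    return all(flags)

while not progress_task(pending):
    pass  # a real runtime would execute other ready tasks here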
Petitgirard, Jean-Yves. "Le traitement de l'anglais oral dans un environnement informatique interactif multimedia". Chambéry, 1999. http://www.theses.fr/1999CHAML001.
Computer-based teaching, which grew out of automated teaching, has been one of the central issues of the past twenty years. The increasing use of computers in education has opened up a new range of possibilities for both the teacher and the learner of English. However, although many of the questions raised by the initial technological breakthrough are still pertinent, the pace of development has in many ways modified the situation. While early computer-based learning applications were essentially text-based, the development of multimedia allows sound and images to be incorporated into the learning process. Perhaps the most telling developments have been the standardisation of formats along with the increasing use of the CD-ROM. In order to provide the learner with original exercises, any development of computer tools must take into account the latest possibilities offered by information technology, such as direct access, simulation and interactivity. An analysis taking into account, on the one hand, the processing of speech by machine and, on the other, the salient features of aural comprehension allows us to construct a framework for the teaching and learning of the latter competence, based on quality, quantity, strategy and communication. For each of these categories, interactive multimedia computer-based teaching offers a number of advantages. In particular, it allows a double approach: at the level of the application, taking into account the learner's specific needs and the various options available; and at the level of the specificity of the interactions the learners will have to deal with. It is particularly at this last level that a wider range of innovative activities, most of which are only possible using information technology, can now be designed.
Milhaud, Gérard. "Un environnement pour la composition de phrases assistée". Aix-Marseille 2, 1994. http://www.theses.fr/1994AIX22079.
Rémy, Didier. "Paradeis : un environnement de développement parallèle pour le traitement du creux". Evry-Val d'Essonne, 2000. http://www.theses.fr/2000EVRY0015.
Namyst, Raymond. "Pm2 : un environnement pour une conception portable et une exécution efficace des applications parallèles irrégulières". Lille 1, 1997. http://www.theses.fr/1997LIL10028.
Guessoum, Zahia. "Un environnement opérationnel de conception et de réalisation de systèmes multi-agents". Paris 6, 1996. http://www.theses.fr/1996PA066577.
Mas-Bellissent, Christine. "La responsabilité contractuelle de droit commun du prestataire de service informatique : contribution à l'étude de la prestation de service". Pau, 1994. http://www.theses.fr/1994PAUU2034.
Amamou, Ahmed. "Isolation réseau dans un datacenter virtualisé". Paris 6, 2013. http://www.theses.fr/2013PA066343.
This thesis is intended to meet the expectations of the scientific community and the needs of Cloud operators regarding network isolation in a virtualized datacenter. It focuses on layer-2 scaling in order to better identify the locks and opportunities in modern virtualized datacenter networks. First, it presents a new algorithm for dynamic bandwidth allocation at the physical nodes in order to overcome internal denial-of-service attacks while respecting all tenants' SLAs. Second, it adapts TRILL (RFC 6325) switches, called RBridges, to run on physical nodes within a virtualized data center, thus addressing the layer-2 scalability problems. Finally, it proposes a new mechanism called VNT (Virtual Network over TRILL), allowing flexible creation of logical networks. This mechanism, which includes a new identifier, the VNI (Virtual Network Identifier), allows the coexistence of more than 16 million logical networks within the same physical network.
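As a toy illustration of per-node dynamic bandwidth allocation with SLA floors (a hypothetical policy for illustration only, not the algorithm from the thesis): guarantee each tenant its minimum, then share the leftover proportionally to residual demand, so one tenant cannot starve the others.

def allocate_bandwidth(link_capacity, demands, sla_min):
    # Step 1: guarantee every tenant its SLA minimum (or its demand).
    alloc = {t: min(demands[t], sla_min[t]) for t in demands}
    # Step 2: share leftover capacity proportionally to residual demand.
    residual = {t: demands[t] - alloc[t] for t in demands}
    leftover = min(link_capacity - sum(alloc.values()), sum(residual.values()))
    total = sum(residual.values())
    if leftover > 0 and total > 0:
        for t in demands:
            alloc[t] += leftover * residual[t] / total
    return alloc

# 10 Gb/s link: tenant "a" misbehaves but cannot starve "b" and "c".
print(allocate_bandwidth(10.0, {"a": 9.0, "b": 2.0, "c": 1.0},
                         {"a": 2.0, "b": 2.0, "c": 1.0}))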
Eynaud, Françoise. "Informatisation d'une base de données commune à trois centres de recherche sur le traitement du neuroblastome utilisant le réseau Numéris". Aix-Marseille 2, 1993. http://www.theses.fr/1993AIX20098.
Dardailler, Pascale. "Hyperview : un éditeur graphique de réseaux dans un environnement hypertexte réparti". Nice, 1990. http://www.theses.fr/1990NICE4395.
Hunel, Philippe. "Conception et réalisation d'un environnement intégré de génie logiciel pour le développement des protocoles". Clermont-Ferrand 2, 1994. http://www.theses.fr/1994CLF21624.
Ben Atitallah, Ahmed. "Etude et implantation d'algorithmes de compression d'images dans un environnement mixte matériel et logiciel". Bordeaux 1, 2007. http://www.theses.fr/2007BOR13409.
Saqui-Sannes, Pierre de. "Prototypage d'un environnement de validation de protocoles : application à l'approche ESTELLE". Toulouse 3, 1990. http://www.theses.fr/1990TOU30109.
Loison, Bertrand. "Les déterminants de l'exposition aux risques liés à l'externalisation des prestations de services informatiques : modèle explicatif et validation empirique". Paris 1, 2009. http://www.theses.fr/2009PA010007.
Bouzid, Makram. "Contribution à la modélisation de l'interaction agent / environnement : modélisation stochastique et simulation parallèle". Nancy 1, 2001. http://www.theses.fr/2001NAN10271.
This thesis belongs to both the multi-agent system (MAS) and parallelism domains, and more precisely to the parallel simulation of MAS. Two problems are tackled. The first concerns the modeling and simulation of situated agents, together with the unreliability of their sensors and effectors, in order to perform simulations whose results are more realistic. The second relates to exploiting the inherent parallelism of multi-agent systems in order to obtain good parallel performance, by reducing the execution time and/or processing larger problems. Two models are proposed: a formal model of multi-agent systems, including a stochastic model of the agent/environment interaction, and a parallel simulation model for multi-agent systems based on the distribution of the conflicts occurring between the agents and on a dynamic load-balancing mechanism between the processors. The satisfactory results we obtained are presented.
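A minimal sketch of what a stochastic agent/environment interaction model can look like, assuming simple Bernoulli failure probabilities (illustrative only; the thesis's formal model is richer):

import random

def noisy_sensor(true_value, p_correct=0.9, domain=(0, 1, 2, 3)):
    # Returns the true perception with probability p_correct,
    # otherwise a uniformly drawn wrong value.
    if random.random() < p_correct:
        return true_value
    return random.choice([v for v in domain if v != true_value])

def unreliable_effector(action, p_success=0.8):
    # The intended action succeeds with probability p_success,
    # otherwise it has no effect (returns None).
    return action if random.random() < p_success else None

readings = [noisy_sensor(2) for _ in range(1000)]
print(sum(r == 2 for r in readings) / 1000)  # close to 0.9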
Vilarem, Jean-François. "Contrôle de concurrence mixte en environnement distribué : une méthode fusionnant verrouillage et certification". Montpellier 2, 1989. http://www.theses.fr/1989MON20023.
Texto completoChristnacher, Frank. "Etude de l'adaptation d'un système optique de reconnaissance de formes à un environnement sévère". Mulhouse, 1992. http://www.theses.fr/1992MULH0242.
Texto completoFatni, Abdelkrim. "Environnement de programmation parallèle adapté au traitement d'images et au calcul scientifique : le langage C// et son compilateur". Toulouse, INPT, 1998. http://www.theses.fr/1998INPT005H.
Texto completoLaanaya, Hicham. "Classification en environnement incertain : application à la caractérisation de sédiments marins". Brest, 2007. http://www.theses.fr/2007BRES2041.
Sonar image classification is of great importance for various real-world applications such as underwater navigation or seabed mapping. Most approaches developed or used in the present work for seabed characterization are based on texture-analysis methods. Indeed, sonar images contain different homogeneous areas of sediment that can be viewed as texture entities. Generally, texture features are numerous and not all relevant, so an extraction-reduction step on these features is necessary before the classification phase. We present in this manuscript a complete chain for sonar image classification, optimizing each step of the chain, and we use the Knowledge Discovery in Databases (KDD) process for the chain's development. The underwater environment is uncertain, which is reflected in the images obtained from the sensors used for their acquisition. It is therefore important to develop methods robust to these imperfections. We address this problem in two different ways: a first solution is to make traditional classification methods, such as support vector machines or k-nearest neighbors, robust to these imperfections; a second solution is to model the imperfections so that they can be taken into account by belief-function or fuzzy classification methods. We analyze the results obtained using different texture-analysis approaches, feature extraction-reduction methods, and classification approaches, and we use further approaches based on theories of uncertainty to overcome the problem of sonar image imperfections.
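A condensed sketch of such a classification chain using scikit-learn, with deliberately simple statistical texture descriptors standing in for the richer texture-analysis and belief-function methods studied in the thesis (all names and parameters here are illustrative):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def texture_features(patch):
    # Mean, variance and directional gradient energies of a tile:
    # crude stand-ins for proper texture descriptors.
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.var(),
                     (gx ** 2).mean(), (gy ** 2).mean()])

def train_classifier(patches, labels):
    # patches: 2-D arrays (sonar image tiles); labels: sediment classes.
    X = np.stack([texture_features(p) for p in patches])
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=0.95),  # feature reduction
                          SVC())                   # classification
    return model.fit(X, labels)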
Moreau, Valentine. "Méthodologie de représentation des impacts environnementaux locaux et planétaires, directs et indirects - Application aux technologies de l'information". Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00843151.
Texto completoSimon, Gwendal. "Conception et réalisation d'un système pour environnement virtuel massivement partagé". Rennes 1, 2004. http://www.theses.fr/2004REN10158.
Texto completoHoneine, Paul. "Méthodes à noyau pour l'analyse et la décision en environnement non-stationnaire". Troyes, 2007. http://www.theses.fr/2007TROY0018.
This PhD thesis offers a new framework for analysis and decision-making in a non-stationary environment lacking statistical information, at the crossroads of three disciplines: time-frequency analysis, adaptive signal processing, and pattern recognition with kernel machines. We derive a broad framework to take advantage of recent developments in kernel machines for the time-frequency domain, through an appropriate choice of the reproducing kernel. We study the implementation of principal component analysis in this domain, before extending its scope to signal classification methods such as Fisher discriminant analysis and Support Vector Machines. We then address the problem of selecting and tuning a representation for a given classification task, which can take advantage of a criterion initially developed for selecting the reproducing kernel: the kernel-target alignment. Online learning is essential in a non-stationary and dynamic environment. Where kernel machines fail to treat such problems, we propose a new method leading to reduced-order models based on a criterion inspired by the sparse functional approximation community: the coherence of a dictionary of functions. Beyond the properties of this parameter that we derive for kernel machines, this notion yields efficient models with extremely low computational complexity. We apply it to online kernel algorithms such as principal component analysis, and we also consider a broader class of adaptive methods for nonlinear and non-stationary system identification.
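The coherence criterion can be sketched in a few lines: a new sample joins the dictionary only if its maximum kernel value against the existing atoms stays below a threshold mu0. A unit-norm Gaussian kernel is assumed here for illustration; the thesis derives the general properties.

import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def update_dictionary(dictionary, x, mu0=0.5, sigma=1.0):
    # Admit x only if its coherence with every stored atom stays
    # below mu0; with a unit-norm kernel, k(x, d) is the coherence.
    if all(gaussian_kernel(x, d, sigma) <= mu0 for d in dictionary):
        dictionary.append(x)
    return dictionary

atoms = []
for x in np.random.randn(200, 2):      # streaming samples
    update_dictionary(atoms, x)
print(len(atoms), "atoms kept out of 200 samples")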
Tadlaoui, Moustapha. "Un environnement de simulation paramétrable pour la validation de systèmes répartis". Toulouse, ENSAE, 1989. http://www.theses.fr/1989ESAE0016.
Texto completoBerdjugin, Jean-François. "Un environnement de développement formel de systèmes distribués temps réel". Toulouse 3, 2002. http://www.theses.fr/2002TOU30056.
Texto completoJouault, Valentin. "Amélioration de la démarche de vérification et validation du nouveau code de neutronique APOLLO3". Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0231/document.
The APOLLO3® code, developed at CEA with the support of EDF and AREVA, is the result of a project aiming to develop a neutronics software platform with improved models of physical phenomena for existing reactor cores (up to the 3rd generation) and also for future reactor concepts (4th generation). Despite technological improvements in computer science (increased numbers of operations per second and storage volume), approximations are unavoidable in deterministic codes. Yet those approximations introduce more or less important discrepancies, named model biases, in different core characteristics (pointed out through reference calculations). In previous validation processes, the purpose was to compare a chosen calculation scheme against a reference calculation (Monte Carlo or deterministic). This comparison gave users a global bias associated with a calculation scheme. Further validation of functionalities was used to optimize the calculation scheme and as a decisive criterion for its options. However, the impact of each functionality's bias on the global bias of the calculation scheme, up to the core calculation, was not measured. The first objective of this thesis is to quantify the biases associated with the APOLLO3® code's main functionalities and to measure their impact on the global bias. The other part of this thesis concerns the definition of the APOLLO3® Validation domain, which defines the set of applications in which the code is effective and has been validated.
Lallier, Martial. "Un environnement d'édition evolué, graphique et syntaxique, pour la conception des systemes repartis". Nancy 1, 1988. http://www.theses.fr/1988NAN10284.
Texto completoSteux, Bruno. "RT Maps, un environnement logiciel dédié à la conception d'applications embarquées temps-réel : utilisation pour la détection automatique de véhicules par fusion radar / vision". Paris, ENMP, 2001. http://www.theses.fr/2001ENMP1013.
Texto completoBen, Henda Mokhtar. "Morphologie et architecture des interfaces de communication de l'information scientifique et technique dans un environnement multilingue : le contexte arabo-latin". Bordeaux 3, 1999. https://tel.archives-ouvertes.fr/tel-00006373.
The Arab-Latin multilingualism that we identify as hard multilingualism presents two major peculiarities that distinguish it from soft multilingualism (within the same linguistic family): graphic or textual representation, and bidirectionality. The mechanism of character representation and processing based on coding and standards requirements constitutes one of the prime constraints on the linguistic transparency of multilingual systems and human-computer interfaces. Even though the problem has been well addressed in the context of desktop and local platforms, open and distributed network systems (i.e., the Internet) are still under the control of a Latin-oriented, and particularly Anglo-Saxon, linguistic hegemony. Other, non-Latin languages are on their way to integrating these systems, but they are generally excluded from operating-system areas (URIs, protocols, etc.). Our contribution to the i18n (internationalization) and l10n (localization) of multilingual information systems and human-computer interfaces is proposed in terms of a combinatory mechanism between a numeric resource identification system and a unified coded character set (Unicode or ISO 10646). Bidirectionality is also a constraining factor that weighs on multilingual human-computer interfaces. Sorting algorithms, the logical and visual processing of text breaking and interpolation, linguistic labeling and negotiation between distributed systems, and the opposition between the left-to-right restrictive orientation of numerals and their internal right-to-left algorithmic processing constitute the major focal points of our analysis of the bidi mechanism. Our major concern in conducting this research is to reassess inherited and state-of-the-art multilingual scientific information and communication systems in order to dig deeper into specialized research areas like linguistic engineering and sociolinguistics.
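For illustration, the bidirectional character categories that drive such processing are exposed by Unicode-aware runtimes; a short Python probe (illustrative, not from the thesis):

import unicodedata

# "L" = left-to-right letter, "AL" = Arabic letter,
# "EN"/"AN" = European/Arabic-Indic digit.
for ch in "A1\u0627\u0661":   # 'A', '1', Arabic alef, Arabic-Indic one
    print(repr(ch), unicodedata.bidirectional(ch), unicodedata.name(ch))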
Ghazouani, Haythem. "Navigation visuelle de robots mobiles dans un environnement d'intérieur". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2012. http://tel.archives-ouvertes.fr/tel-00932829.
Texto completoJarraya, Amina. "Raisonnement distribué dans un environnement ambiant". Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLL011.
Pervasive Computing and Ambient Intelligence aim to create a smart environment with networked electronic and computing devices, such as sensors, seamlessly integrated into everyday life and providing users with transparent access to services anywhere and anytime. To ensure this, a system needs global knowledge of its environment, in particular about people and devices, their interests and capabilities, and the associated tasks and activities. All this information relates to the concept of context. This involves gathering the user's contextual data to determine his/her current situation/activity; we also speak of situation/activity identification. The system must thus be sensitive to environment and context changes in order to detect situations/activities and then adapt dynamically. Recognizing a situation/activity requires defining a whole process: perception of contextual data, analysis of the collected data, and reasoning on them for the identification of situations/activities. We are particularly interested in aspects related to the distributed modeling of the ambient environment and to distributed reasoning in the presence of imperfect data for the identification of situations/activities. The first contribution of the thesis concerns the perception part: we propose a new perception model that gathers raw data from sensors deployed in the environment and generates events. The second contribution focuses on the observation and analysis of these events, segmenting them and extracting the most significant and relevant features. Finally, the last two contributions present two proposals concerning distributed reasoning for the identification of situations/activities; one is the main contribution and the other an improved version overcoming certain of its limitations. From a technical point of view, all these proposals have been developed, validated and evaluated with several tools.
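The observation-and-analysis step can be sketched as sliding-window segmentation plus simple per-window features (an illustrative reading of the pipeline; the thesis's own segmentation and features may differ):

import numpy as np

def segment(events, window, step):
    # Fixed-size sliding windows over the sensor event stream.
    for start in range(0, len(events) - window + 1, step):
        yield events[start:start + window]

def features(window_values):
    v = np.asarray(window_values, dtype=float)
    return {"mean": v.mean(), "std": v.std(), "min": v.min(), "max": v.max()}

stream = [21.0, 21.2, 24.9, 25.1, 25.0, 21.1]   # e.g. temperature events
print([features(w) for w in segment(stream, window=3, step=3)])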
Al, King Raddad. "Localisation de sources de données et optimisation de requêtes réparties en environnement pair-à-pair". Toulouse 3, 2010. http://thesesups.ups-tlse.fr/912/.
Despite their great success in the file-sharing domain, P2P systems support only simple queries, usually based on looking up a file by its name. Recently, several research efforts have been made to extend P2P systems to share data of fine granularity (i.e., atomic attributes) and to process queries written in a highly expressive language (i.e., SQL). The characteristics of P2P systems (e.g., large scale, node autonomy and instability) make it impractical to maintain a global catalog storing information about data, schemas and data-source hosts. Because of the absence of a global catalog, two problems become more difficult: (i) locating data sources while taking schema heterogeneity into account, and (ii) query optimization. In this thesis, we propose an approach for processing SQL queries in a P2P environment. To resolve the semantic heterogeneity between local schemas, our approach relies on a domain ontology and on similarity formulas. As for the structural heterogeneity of local schemas, it is resolved by extending a query-routing method (the Chord protocol) with Structure Indexes. Concerning the query-optimization problem, we propose to take advantage of the data-source localization phase to obtain all the metadata required for generating a close-to-optimal execution plan. Finally, in order to show the feasibility and validity of our propositions, we carry out performance evaluations and discuss the obtained results.
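The routing side can be illustrated with a toy Chord-style lookup, hashing (ontology-normalized) attribute keys onto a ring and finding the responsible peer; this is a sketch of the underlying Chord idea, not the extended protocol with Structure Indexes, and the key names are made up.

import hashlib

RING_BITS = 16  # toy identifier space with 2**16 positions

def chord_id(key):
    # Hash a key (e.g. an ontology-normalized attribute name) onto the ring.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (1 << RING_BITS)

def successor(node_ids, key_id):
    # The peer responsible for key_id is the first node clockwise.
    for n in sorted(node_ids):
        if n >= key_id:
            return n
    return min(node_ids)  # wrap around the ring

nodes = [chord_id(f"peer-{i}") for i in range(8)]
print(successor(nodes, chord_id("patient.name")))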
Jarma, Yesid. "Protection de ressources dans des centres de données d'entreprise : architectures et protocoles". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00666232.
Texto completoEr-Rouane, Sadik. "Méthodologie d'acquisition et de traitement des données hydrogéologiques : application au cas de la plaine de la Bahira". Nancy 1, 1992. http://www.theses.fr/1992NAN10397.
Texto completoVaudable, Christophe. "Analyse et reconnaissance des émotions lors de conversations de centres d'appels". Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00758650.
Texto completoRamisch, Carlos Eduardo. "Un environnement générique et ouvert pour le traitement des expressions polylexicales : de l'acquisition aux applications". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00741147.
Texto completoBelabed, Dallal. "Design and Evaluation of Cloud Network Optimization Algorithms". Electronic Thesis or Diss., Paris 6, 2015. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2015PA066149.pdf.
This dissertation aims to give a deep understanding of the impact of the new Cloud paradigms on the goals of traffic engineering and energy efficiency, on fairness in the throughput offered to endpoints, and of the new opportunities given by virtualized network functions. In the first part of the dissertation we investigate the impact of these novel features on Data Center Network (DCN) optimization, providing a comprehensive formal mathematical formulation of virtual machine placement and a metaheuristic for its resolution. We show in particular how virtual bridging and multipath forwarding impact common DCN optimization goals, traffic engineering and energy efficiency, and assess their utility in four different DCN topologies. In the second part, our interest moves to better understanding the mutual impact of novel flattened and modular DCN architectures and congestion-control protocols. Indeed, since one of the major concerns in congestion control is fairness in the offered throughput, the impact of the additional path diversity brought by the novel DCN architectures and protocols on the throughput of individual endpoints and aggregation points is unclear. Finally, in the third part we present preliminary work on the new Network Function Virtualization (NFV) paradigm: a linear-programming formulation of the virtual network function chain routing problem in a carrier network. The goal of our formulation is to find the best route in a carrier network where customer demands have to pass through a number of NFV nodes, taking into consideration the unique constraints set by NFV.
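A toy version of the chain-routing idea for a single demand and a single NFV hop, using networkx shortest paths (the thesis treats the general multi-demand case as a linear program; node names and costs here are made up):

import networkx as nx

def route_via_nfv(g, src, dst, nfv_nodes, weight="cost"):
    # Pick the NFV-capable node minimizing cost(src->v) + cost(v->dst).
    cost, via = min((nx.shortest_path_length(g, src, v, weight=weight) +
                     nx.shortest_path_length(g, v, dst, weight=weight), v)
                    for v in nfv_nodes)
    path = (nx.shortest_path(g, src, via, weight=weight)[:-1] +
            nx.shortest_path(g, via, dst, weight=weight))
    return cost, path

g = nx.Graph()
g.add_weighted_edges_from([("s", "n1", 1), ("n1", "d", 1),
                           ("s", "n2", 3), ("n2", "d", 1)], weight="cost")
print(route_via_nfv(g, "s", "d", nfv_nodes=["n1", "n2"]))  # (2, ['s','n1','d'])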
Le, Mouël Frédéric. "Environnement adaptatif d'exécution distribuée d'applications dans un contexte mobile". Phd thesis, Université Rennes 1, 2003. http://tel.archives-ouvertes.fr/tel-00004161.
Politaki, Dimitra. "Vers la modélisation de clusters de centres de données vertes". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4116.
The energy consumption of data center clusters is rapidly increasing, making them the fastest-growing consumers of electricity worldwide. Renewable electricity sources, and especially solar energy as a clean and abundant source, can be used in many locations to cover their electricity needs and make them "green", namely fed by photovoltaics. This potential can be explored by predicting solar irradiance and assessing the capacity provision for data center clusters. In this thesis we develop stochastic models for solar energy: one for irradiance at the surface of the Earth, and a second that models the photovoltaic output current. We then compare them to the state-of-the-art on-off model and validate them against real data. We conclude that the solar irradiance model better captures the multiscale correlations and is suitable for small-scale cases. We then propose a new job life-cycle for a complex real cluster system, and a model for data center clusters that supports batch job submissions and considers both impatient and persistent customer behavior. To understand the essential characteristics of computer clusters, we analyze in detail two traces of different workload types: the first is the published, complex Google trace; the second, simpler one, which serves scientific purposes, is from the Nef cluster located at the Inria Sophia Antipolis research center. We then implement marmoteCore-Q, a tool for simulating a family of queueing models based on our multi-server model for data center clusters with abandonments and resubmissions.
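The queueing behavior with impatient customers can be illustrated with a tiny Markovian simulation of a c-server queue where waiting jobs abandon at rate theta (a generic sketch with made-up rates, not marmoteCore-Q itself):

import random

def mmc_abandonment(lam, mu, c, theta, horizon=100000.0):
    # Competing exponential clocks of a c-server Markovian queue in
    # which each waiting job abandons at rate theta.
    t, n, served, abandoned = 0.0, 0, 0, 0
    while t < horizon:
        rates = (lam, mu * min(n, c), theta * max(n - c, 0))
        total = sum(rates)
        t += random.expovariate(total)
        u = random.uniform(0.0, total)
        if u < rates[0]:
            n += 1                      # arrival
        elif u < rates[0] + rates[1]:
            n -= 1; served += 1         # service completion
        else:
            n -= 1; abandoned += 1      # impatient departure
    return abandoned / max(served + abandoned, 1)

print(mmc_abandonment(lam=8.0, mu=1.0, c=6, theta=0.5))  # fraction lost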
Khelil, Amar. "Elaboration d'un système de stockage et exploitation de données pluviométriques". Lyon, INSA, 1985. http://www.theses.fr/1985ISAL0034.
From a hydrological point of view, the Lyon District Urban Area (CO.UR.LY.) is a 600 km² area equipped with a sewerage system comprising an estimated 2,000 km of pipes. Due to the complexity of the area's sewerage network, it must be controlled by an accurate and reliable calculation system to avoid any negative consequences of its operation. The current computer system, SERAIL, allows an overall simulation of the functioning of the drainage/sewerage system. This model requires accurate rainfall information which was not previously available; therefore a network of 30 rain gauges (with in situ cassette recording) was set up within the Urban District Area in 1983. This research covers the experiment's three steps: 1) installing the network; 2) building a data checking and storage system; 3) analysing the data. The distinctive part of this work is the data-analysis system, which makes it possible to easily extract and analyse any rainfall event of importance to the hydrologist. Two aims were defined: 1) to get a better understanding of the phenomena (punctual representations); 2) to build up models. In order to achieve the second aim, it was necessary to consider the fitting of the proposed models and their limits, which led to the development of several other programmes for checking and comparison. As an example, a complete analysis of a rainfall event is given, with comments and conclusions.
Saulou, Jean-Yves. "Contribution du "processus" à l'efficacité d'une démarche qualité de type ISO 9001 : recherche-action dans les Centres de Production Informatique de service de l'Assurance Maladie". Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS017S.
Quality initiatives such as ISO 9001 v2008 raise the problem of measuring their effectiveness. Among the variables that contribute to the effectiveness of a quality initiative, the "process" tool is one of the levers available to the manager. Constructivist, qualitative action research was conducted in the field of the Health Insurance data-processing centers. Thirty-eight semi-structured interviews were held with people in the roles of director, quality manager and process owner. Three types of organizations, all ISO 9001 certified, are concerned: data centers, five of their customers and two of their suppliers. Based on the Theory of Conventions, and in particular the conventions of effort and evaluation (L. Boltanski & L. Thévenot, 1987, 1991), the research highlights the modeling undertaken to reduce the complexity of the entity to be controlled, stakeholder involvement in the life of the processes, and the power issues at stake in the transfer of knowledge generated by the formalization of procedures. The process owner is put forward as a means of involving "knowledgeable parties", supporting change and controlling the effectiveness of the process, which underpins the efficiency of quality management.
Dinh, Ngoc Tu. "Walk-In : interfaces de virtualisation flexibles pour la performance et la sécurité dans les datacenters modernes". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES002.
Virtualization is a powerful tool that brings numerous benefits for the security, efficiency and management of computer systems. Modern infrastructure therefore makes heavy use of virtualization in almost every software component. However, the extra hardware and software layers present various challenges to the system operator. In this work, we analyze and identify the challenges relevant to virtualization. Firstly, we observe that maintenance becomes more complex because of the numerous software layers that must be constantly updated. Secondly, we notice that virtualization hides details of the underlying infrastructure. Thirdly, virtualization has a damaging effect on system performance, stemming from how the layers of virtualization have to be traversed during operation. We explore three approaches to solving these challenges by adding flexibility to the virtualization stack. - Our first contribution tackles the maintainability and security issues of virtual machine platforms caused by the need to keep them up to date. We introduce HyperTP, a framework based on the hypervisor transplant concept for updating hypervisors and mitigating vulnerabilities. - Our second contribution focuses on the performance loss resulting from the lack of visibility of non-uniform memory access (NUMA) topologies in virtual machines. We thoroughly evaluate I/O workloads on virtual machines running on NUMA architectures, and implement a unified hypervisor-VM resource allocation strategy for optimizing virtual I/O on such architectures. - In our third contribution, we focus on high-performance storage subsystems for virtualization. We present NVM-Router, a flexible yet easy-to-use virtual storage platform that supports the implementation of fast and efficient storage functions. Together, our solutions demonstrate the trade-offs present in the configuration spaces of virtual machine deployments, as well as how to reduce virtualization overhead through dynamic adjustment of these configurations.
Nehme, Mohamad Jaafar. "Next generation state-machine replication protocols for data centers". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM077/document.
Many uniform total-order broadcast protocols have been designed in the last 30 years. They can be classified into two categories: those targeting low latency, and those targeting high throughput. Latency measures the time required to complete a single message broadcast without contention, whereas throughput measures the number of broadcasts that the processes can complete per time unit under contention. All the protocols designed so far assume that the underlying network is not shared by other running applications. This is a major concern given that in modern data centers (aka Clouds) the networking infrastructure is shared by several applications; the consequence is that, in such environments, uniform total-order broadcast protocols exhibit unstable behavior. In this thesis, I provide two contributions. The first is MDC-Cast, a new protocol for total-order broadcast that optimizes the performance of distributed systems executed in multi-data-center environments. MDC-Cast combines the benefits of IP multicast in cluster environments and TCP/IP unicast to obtain a hybrid algorithm that works well across data centers. The second contribution is an algorithm designed for debugging performance in black-box distributed systems. This algorithm has not been published yet, as it needs more testing for better generalization.
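For background, the classic fixed-sequencer construction below shows what a total-order broadcast must guarantee, identical delivery order at every receiver; it is a baseline sketch only, not the MDC-Cast protocol.

import heapq

class Sequencer:
    # One process stamps every message with a global sequence number.
    def __init__(self):
        self.next_seq = 0
    def order(self, msg):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return seq, msg

class Receiver:
    # Deliver strictly in sequence order, buffering out-of-order arrivals,
    # so every receiver produces the same delivery order.
    def __init__(self):
        self.expected, self.pending, self.delivered = 0, [], []
    def on_receive(self, seq, msg):
        heapq.heappush(self.pending, (seq, msg))
        while self.pending and self.pending[0][0] == self.expected:
            self.delivered.append(heapq.heappop(self.pending)[1])
            self.expected += 1

seq, r = Sequencer(), Receiver()
for stamped in [seq.order("a"), seq.order("b"), seq.order("c")][::-1]:
    r.on_receive(*stamped)   # even reversed arrivals...
print(r.delivered)           # ...are delivered as ['a', 'b', 'c']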
Benblidia, Mohammed Anis. "Pour une meilleure efficacité énergétique dans un système Smart Grid - Cloud". Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0019.
This thesis considers the energy efficiency of information and communication infrastructures in a smart grid - cloud system. It deals especially with communication networks and cloud data centers because of their high energy consumption, which gives them an important role in the network. The contributions of this thesis are implemented on a single framework integrating the smart grid, microgrid, cloud, data centers and users. Indeed, we studied the interaction between cloud data centers and the smart grid provider, and we proposed energy-efficient power-allocation solutions and an energy-cost minimization scheme using two architectures: a smart grid-cloud architecture and a microgrid-cloud architecture. In addition, we paid close attention to executing user requests while ensuring good quality of service in a fog-cloud architecture. In comparison with state-of-the-art works, the results of our contributions show that they respond to the identified challenges, particularly in terms of reducing the carbon emissions and energy costs of cloud data centers.
Wang, Yewan. "Évaluation et modélisation de l’impact énergétique des centres de donnée en fonction de l’architecture matérielle/ logicielle et de l’environnement associé". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0175.
For years, the energy consumption of data centers has increased dramatically, driven by the explosion of demand for cloud computing. This thesis addresses the scientific challenge of the energy modeling of a data center based on the most important variables. With such modeling, a data center operator will be able to better reallocate or design current and future data centers, and to identify the energy impacts of the hardware and software used in computer systems. The first part of the thesis identifies and characterizes the uncertainties in energy consumption introduced by external elements: thermal effects, differences between identical processors caused by imperfect manufacturing processes, precision problems of the power-measurement tools, etc. We completed this study by developing a global power model for a given physical cluster, composed of 48 identical servers and equipped with a direct-expansion cooling system of the kind conventionally used in modern data centers. The model makes it possible to estimate the overall energy consumption of the cluster based on operational configurations and data relating to IT activity, such as ambient temperature, cooling-system configurations and server load.
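A minimal sketch of fitting such a power model by least squares, assuming a hypothetical linear form P = p_idle + a·load + b·(T − T_ref) and made-up measurements (the thesis's validated model may differ):

import numpy as np

load = np.array([0.00, 0.25, 0.50, 0.75, 1.00, 0.50])   # CPU utilization
temp = np.array([22.0, 22.0, 25.0, 25.0, 27.0, 27.0])   # ambient, deg C
power = np.array([95., 130., 168., 205., 248., 172.])   # measured watts

# Least-squares fit of P = p_idle + a*load + b*(T - 22).
A = np.column_stack([np.ones_like(load), load, temp - 22.0])
(p_idle, a, b), *_ = np.linalg.lstsq(A, power, rcond=None)
print(f"P = {p_idle:.1f} + {a:.1f}*load + {b:.2f}*(T - 22)")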
Bielski, Maciej. "Nouvelles techniques de virtualisation de la mémoire et des entrées-sorties vers les périphériques pour les prochaines générations de centres de traitement de données basés sur des équipements répartis déstructurés". Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT022.
This dissertation is positioned in the context of system disaggregation, a novel approach expected to gain popularity in the data center sector. In traditional clustered systems, resources are provided by one or multiple machines. In contrast, in disaggregated systems resources are provided by discrete nodes, each providing only one type of resource (CPUs, memory or peripherals). Instead of a machine, the term slot is used to describe a workload deployment unit; the slot is dynamically assembled before a workload deployment by a unit called the system orchestrator. In the introduction of this work, we discuss the subject of disaggregation and present its benefits compared to clustered architectures. We also add a virtualization layer to the picture, as it is a crucial part of data center systems: it provides isolation between deployed workloads and flexible resource partitioning. However, the virtualization layer needs to be adapted in order to take full advantage of disaggregation. Thus, the main contributions of this work focus on virtualization-layer support for disaggregated memory and device provisioning. The first main contribution presents the software-stack modifications related to flexible resizing of virtual machine (VM) memory. They allow the amount of RAM of a guest (running in a VM) to be adjusted at runtime at a memory-section granularity; from the software perspective, it is transparent whether the sections come from local or remote memory banks. As a second main contribution, we discuss the notions of inter-VM memory sharing and VM migration in the disaggregation context. We first present how regions of disaggregated memory can be shared between VMs running on different nodes, in such a way that the guests involved need not be aware of whether or not they are co-located on the same computing node. Additionally, we discuss different flavors of methods for serializing concurrent accesses. We then explain how the term VM migration gains a twofold meaning: because of resource disaggregation, a workload is associated with at least one computing node and one memory node, so it may be migrated to a different computing node while keeping the same memory, or the opposite. We discuss both cases and describe how this can open new opportunities for server consolidation. The last main contribution of this dissertation relates to the virtualization of disaggregated peripherals. Starting from the assumption that architecture disaggregation brings many positive effects in general, we explain why it breaks the passthrough peripheral-attachment technique (also known as direct attachment), which is very popular for its near-native performance. To address this limitation, we present a design that adapts the passthrough attachment concept to architecture disaggregation. With this novel design, disaggregated devices can be directly attached to VMs as if they were plugged in locally. Moreover, the modifications do not involve the guest OS itself, to which the setup of the underlying infrastructure remains invisible.
Deddy, Bezeid. "Conception thermique d’une paroi complexe de datacentre pour une optimisation énergétique". Lorient, 2012. http://www.theses.fr/2012LORIS274.
The reduction of energy consumption in telecommunication buildings is an international challenge for the main telecommunication operators and the principal actors of the internet. Indeed, these buildings house electronic equipment with a high power density and thus a very large heat load, so large air-conditioning systems must be used in order to keep the ambient conditions (temperature and relative humidity of the air) within fixed ranges. One possible approach to limit the installed air-conditioning capacity is to clip the peaks of internal temperature by storing heat in the walls, with night-time coolness stored directly in the masonry. This thesis describes a numerical and experimental study aimed at defining new designs for optimized telecommunication buildings, in which the walls are used to increase heat transfer and reduce cooling energy consumption. In a first step, the temperature response of a 1 m³ internal volume was monitored and simulated under different test conditions. The thermal inertia is increased by incorporating phase-change materials (PCM, microencapsulated paraffin) in the concrete. From the experiments and measurements of thermophysical properties, a one-dimensional thermal conduction model representing heat transfer in the walls was developed and validated. From these studies, a specific component representative of a multilayer wall with PCM was developed and coupled to TRNSYS Type 56. These TRNSYS developments were then applied to the study of a real data center site. After comparison with experimental data, different wall configurations were studied in order to improve thermal inertia, and new building architectures are proposed to reduce cooling energy consumption.
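The 1-D conduction model with PCM can be sketched with an explicit finite-difference step and the apparent-heat-capacity method (all coefficients below are illustrative placeholders, not the thesis's validated values):

import numpy as np

def cp_pcm(T, cp_base=1000.0, latent=150000.0, T_melt=26.0, width=2.0):
    # Apparent-heat-capacity method: the latent heat appears as a
    # Gaussian peak in cp around the melting temperature.
    gauss = np.exp(-((T - T_melt) / width) ** 2) / (width * np.pi ** 0.5)
    return cp_base + latent * gauss

def step_wall(T, dt, dx, k, rho, cp_effective):
    # One explicit finite-difference step of 1-D transient conduction;
    # both wall faces are held at fixed temperatures (Dirichlet).
    alpha = k / (rho * cp_effective(T[1:-1]))
    Tn = T.copy()
    Tn[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Tn

T = np.full(21, 20.0)   # 20 cm wall, 1 cm grid, initially 20 degC
T[0] = 35.0             # hot indoor face
for _ in range(3600):   # one simulated hour, dt = 1 s (stable here)
    T = step_wall(T, dt=1.0, dx=0.01, k=1.2, rho=2000.0, cp_effective=cp_pcm)
print(T.round(1))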
Kaced, Yazid. "Études du refroidissement par free cooling indirect d’un bâtiment exothermique : application au centre de données". Thesis, Lorient, 2018. http://www.theses.fr/2018LORIS499/document.
A data center is a warehouse that contains telecommunication equipment, network infrastructure, servers and computers. This equipment dissipates a very large amount of heat, which must be removed by cooling systems. Telecommunication standards impose restricted climatic ranges (temperature and humidity), leading to very high energy consumption devoted to air conditioning; reducing this energy consumption constitutes a real challenge. Many cooling solutions have been proposed, such as free cooling, which consists in cooling the equipment with outside air under favourable climatic conditions. The work carried out in this thesis is based on experiments conducted in a building under real climatic conditions in order to study the cooling of telecom cabinets. During the study, the building configuration was modified, an indirect free-cooling system was set up, and extensive instrumentation was installed. The objectives are to establish performance factors derived from measurements, and to develop and validate a numerical model in order to predict the thermal and airflow behaviour of this type of solution. Initially, experiments were carried out with power dissipated inside the building and cooling provided only by outside-air circulation. Then, significant modifications were made to the building to introduce a closed-loop internal air circulation that evacuates the heat dissipated inside the cabinets by a crossing airflow. In order to build a convincing database, measurements were conducted using one and then several cabinets under different conditions, and operating parameters were varied in order to better understand the operation of the installation and to identify the energy-optimization parameters. Numerical models were developed with TRNSYS / TRNFLOW; the comparison of simulations with measurements shows the relevance of the implemented approach.
Bayati, Léa. "Data centers energy optimization". Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC0063.
To ensure both good data center service performance and reasonable power consumption, a detailed analysis of the behavior of these systems is essential for designing efficient optimization algorithms that reduce energy consumption. This thesis fits into this context: our main work is to design dynamic energy-management systems based on stochastic models of controlled queues. The goal is to search for optimal control policies for data center management that meet the growing demand for reducing energy consumption and digital pollution while maintaining quality of service. We first focused on modeling dynamic energy management with a stochastic model of a homogeneous data center, mainly to study structural properties of the optimal strategy, such as monotonicity. Afterwards, since data centers show a significant level of server heterogeneity in terms of energy consumption and service rates, we generalized the homogeneous model to a heterogeneous one. In addition, since server wake-up and shutdown are not instantaneous and a server requires some time to go from sleep mode to ready-to-work mode, we extended the model to include this server latency. Throughout this exact optimization, arrival and service rates are specified with histograms that can be obtained from actual traces, empirical data or traffic measurements. We showed that the size of the MDP model is very large, leading to state-space explosion and long computation times; exact optimization through an MDP is therefore often difficult or almost impossible to apply to large data centers, especially if real aspects such as server heterogeneity or latency are taken into account. We thus proposed what we call the greedy-window algorithm, which finds a sub-optimal strategy better than those produced by special mechanisms such as threshold approaches. More importantly, unlike the MDP approach, this algorithm does not require the complete construction of the structure that encodes all possible strategies, and it yields a strategy very close to the optimal one with very low space and time complexity. This makes the solution practical, scalable, dynamic and deployable online.
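The exact approach can be sketched as generic value iteration over a finite MDP, which makes the state-space explosion concrete (illustrative only; the transition matrices and costs below are made up, and the greedy-window algorithm itself is not reproduced here):

import numpy as np

def value_iteration(P, cost, gamma=0.99, eps=1e-6):
    # P[a] is the state-transition matrix under action a (e.g. "keep a
    # given number of servers on"); cost[s, a] mixes energy and QoS penalties.
    n_states, n_actions = cost.shape
    V = np.zeros(n_states)
    while True:
        Q = np.stack([cost[:, a] + gamma * P[a] @ V
                      for a in range(n_actions)], axis=1)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < eps:
            return Q.argmin(axis=1), V_new  # optimal policy and value
        V = V_new

# Toy instance: 2 states (low/high load), 2 actions (1 or 2 servers on).
P = [np.array([[0.9, 0.1], [0.5, 0.5]]),
     np.array([[0.8, 0.2], [0.2, 0.8]])]
cost = np.array([[1.0, 2.0],    # low load: 2nd server wastes energy
                 [5.0, 2.5]])   # high load: 1 server hurts QoS
policy, V = value_iteration(P, cost)
print(policy)  # e.g. action 0 in low load, action 1 in high load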