Academic literature on the topic 'Centres de traitement informatique – Environnement'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Centres de traitement informatique – Environnement.'
Journal articles on the topic "Centres de traitement informatique – Environnement"
Alarcon, W. "Santé Mentale en Afrique de l’Ouest (SMAO) : pour le développement de politiques de Santé Mentale en Afrique subsaharienne." European Psychiatry 28, S2 (November 2013): 73. http://dx.doi.org/10.1016/j.eurpsy.2013.09.194.
Dissertations / Theses on the topic "Centres de traitement informatique – Environnement"
Hnayno, Mohamad. "Optimisation des performances énergétiques des centres de données : du composant au bâtiment." Electronic Thesis or Diss., Reims, 2023. http://www.theses.fr/2023REIMS021.
Data centres consume vast amounts of electrical energy to power their IT equipment, cooling systems, and supporting infrastructure. This high energy consumption adds to the overall demand on the electrical grid and contributes to greenhouse gas emissions. By optimising energy performance, data centres can reduce their electricity bills, overall operating costs, and environmental impact. This includes implementing energy-efficient technologies, improving cooling systems, and adopting efficient power-management practices. New cooling solutions, such as liquid cooling and indirect evaporative cooling, offer higher energy efficiency and can significantly reduce cooling-related energy consumption in data centres. In this work, two experimental investigations of new cooling topologies for information technology (IT) racks are conducted. In the first topology, the rack-cooling system combines close-coupled cooling and direct-to-chip cooling. Five racks with operational servers were tested, and two temperature-difference profiles (15 K and 20 K) were validated for all the IT racks. The impact of these temperature-difference profiles on data-centre performance was analysed using three heat-rejection systems under four climatic conditions for a 600 kW data centre. The impact of the water temperature profile on the partial power usage effectiveness and the water usage effectiveness of the data centre was analysed in order to optimise the indirect free-cooling system equipped with an evaporative cooling system, through two approaches: adjusting the rack temperature difference and increasing the water inlet temperature of the data centre. In the second topology, an experimental investigation of a new single-phase immersion/liquid-cooling technique is presented. The experimental setup tested the impact of three dielectric fluids, the effect of the water-circuit configuration, and the server power/profile. Results suggest that the system cooling demand depends on the fluid's viscosity: as the viscosity increased from 4.6 to 9.8 mPa·s, the cooling performance decreased by approximately 6 %. Moreover, all the IT server profiles were validated at various water inlet temperatures up to 45°C and various flow rates. The energy performance of this technique was compared with that of the previous one: it reduced the DC electrical power consumption by at least 20.7 % compared to the liquid-cooling system. The cooling performance of the air- and liquid-cooled systems and the proposed solution was also compared computationally at the server level. When using the proposed solution, the energy consumed per server was reduced by at least 20 % compared with the air-cooling system and by 7 % compared with the liquid-cooling system. In addition, a new liquid-cooling technology for 600 kW uninterruptible power supply (UPS) units is presented. This cooling architecture gives more opportunities to use free cooling as the main and only cooling system for optimal data centres (DCs). Five thermal-hydraulic tests were conducted under different thermal conditions. A 20 K temperature-difference profile was validated with safe operation of all UPS electronic equipment, resulting in a thermal efficiency of 82.27 %. The impact of decreasing the water flow rate and increasing the water and air room temperatures was also analysed: a decrease in inlet water and air temperatures from 41°C to 32°C and from 47°C to 40°C, respectively, increases the thermal efficiency by 8.64 %. Furthermore, an energy performance comparison is made between air-cooled and water-cooled UPS units at both the UPS and infrastructure levels.
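For readers unfamiliar with the efficiency metrics cited in this abstract, the sketch below illustrates how power usage effectiveness (PUE), its partial variant for a single subsystem, and water usage effectiveness (WUE) are commonly defined: facility (or subsystem plus IT) energy divided by IT energy, and litres of water per kWh of IT energy. It is a minimal illustration with made-up numbers, not code or data from the thesis.

```python
# Minimal sketch of the efficiency metrics referenced in the abstract above.
# All numbers are made-up examples, not results from the thesis.

def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return total_facility_energy_kwh / it_energy_kwh

def partial_pue(it_energy_kwh: float, subsystem_energy_kwh: float) -> float:
    """Partial PUE for one subsystem (e.g. cooling): (IT + subsystem) / IT."""
    return (it_energy_kwh + subsystem_energy_kwh) / it_energy_kwh

def wue(water_litres: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness, in litres of water per kWh of IT energy."""
    return water_litres / it_energy_kwh

if __name__ == "__main__":
    it_kwh = 600.0       # energy drawn by the IT racks over some period
    cooling_kwh = 90.0   # energy drawn by the cooling subsystem (example)
    water_l = 240.0      # water evaporated by the adiabatic cooler (example)
    print(f"pPUE (cooling) = {partial_pue(it_kwh, cooling_kwh):.2f}")  # 1.15
    print(f"WUE = {wue(water_l, it_kwh):.2f} L/kWh")                   # 0.40
```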
Moualla, Ghada. "Virtualisation résiliente des fonctions réseau pour les centres de données et les environnements décentralisés." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4061.
Traditional networks are based on an ever-growing variety of network functions that run on proprietary hardware devices called middleboxes. Designing these vendor-specific appliances and deploying them is very complex, costly and time-consuming. Moreover, with ever-increasing and heterogeneous short-term service requirements, service providers have to scale up their physical infrastructure periodically, which results in high CAPEX and OPEX. This traditional paradigm leads to network ossification and high complexity in network management and service provisioning when addressing emerging use cases. Network Function Virtualization (NFV) has attracted notable attention as a promising paradigm to tackle such challenges by decoupling network functions from the underlying proprietary hardware and implementing them as software, named Virtual Network Functions (VNFs), able to run on inexpensive commodity hardware. These VNFs can be arranged and chained together in a predefined order, the so-called Service Function Chaining (SFC), to provide end-to-end services. Despite all the benefits associated with the new paradigm, NFV comes with the challenge of how to place the functions of the users' requested services within the physical network while providing the same resiliency as if a dedicated infrastructure were used, given that commodity hardware is less reliable than dedicated hardware. This problem becomes particularly challenging when service requests have to be fulfilled as soon as they arise (i.e., in an online manner). In light of these new challenges, we propose new solutions to the problem of online SFC placement that ensure the robustness of the placed services against physical failures in data-center (DC) topologies. Although recovery solutions exist, they still require time during which the impacted services remain unavailable, whereas smart placement decisions can avoid the need to react to simple network failures. First, we provide a comprehensive study of how placement choices affect the overall robustness of the placed services. Based on this study, we propose a deterministic solution applicable when the service provider has full knowledge of and control over the infrastructure. We then move from this deterministic solution to a stochastic approach for the case where SFCs are requested by tenants oblivious to the physical DC network, who only have to provide the SFC they want to place and the required availability level (e.g., 5 nines). We simulated several solutions, and the evaluation results show the effectiveness of our algorithms and the feasibility of our propositions on very large-scale data center topologies, which makes it possible to use them in a production environment. All these solutions work well in trusted environments with a central authority that controls the infrastructure. However, in some cases, many enterprises need to collaborate in order to run tenants' applications, e.g., MapReduce applications. In such a scenario, we move to a completely untrusted, decentralized environment with no trust guarantees, in the presence not only of Byzantine nodes but also of rational nodes. We consider the case of MapReduce applications in such an environment and present an adapted MapReduce framework, called MARS, which is able to work correctly in such a context without the need for any trusted third party. Our simulations show that MARS ensures execution integrity in MapReduce, scaling linearly with the number of Byzantine nodes in the system.
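To make the availability targets mentioned in this abstract concrete (for example, "5 nines" means 99.999 %), the sketch below applies a textbook reliability model: VNFs chained in series multiply their availabilities, while replicating a VNF on k independent servers raises its availability to 1 - (1 - a)^k. The server availability and chain length are illustrative assumptions; this is not the placement algorithm proposed in the thesis.

```python
# Textbook reliability arithmetic behind availability targets such as
# "5 nines" (0.99999). Generic illustration, not the thesis's algorithm.

from math import prod

def replicated_vnf_availability(server_availability: float, replicas: int) -> float:
    """Availability of one VNF replicated on `replicas` independent servers."""
    return 1.0 - (1.0 - server_availability) ** replicas

def chain_availability(vnf_availabilities) -> float:
    """An SFC is up only if every VNF of the chain is up (series system)."""
    return prod(vnf_availabilities)

if __name__ == "__main__":
    server_a = 0.999   # assumed availability of a single commodity server
    chain_len = 3      # e.g. firewall -> IDS -> load balancer
    for replicas in (1, 2, 3):
        vnf_a = replicated_vnf_availability(server_a, replicas)
        print(f"{replicas} replica(s) per VNF -> "
              f"chain availability {chain_availability([vnf_a] * chain_len):.9f}")
    # 1 replica  -> ~0.997003 (unavailable roughly 26 h/year)
    # 2 replicas -> ~0.999997 (already beyond the 5-nines target)
```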
Rocha Barbosa, Cassandra. "Coordination et ordonnancement de tâches à grains fins entre environnements d'exécution HPC." Electronic Thesis or Diss., Reims, 2023. http://www.theses.fr/2023REIMS016.
Supercomputers are becoming more and more complex to use. This is why so-called hybrid programming models, MPI + X, are being adopted in applications. These new models allow a more efficient use of a supercomputer, but they also create new problems during the execution of applications. These problems are of different types. More specifically, we study three problems related to MPI + X programming: the progression of non-blocking MPI communications within the X environment, and two types of possible imbalance in MPI + X applications, the first between MPI processes and the second within an MPI process, i.e., imbalance within X. For the MPI communication progression problem, a solution for the case of an X environment based on recursive tasks is first presented, using progress-task insertion in the X environment. For the imbalance between MPI processes, a solution for resource rebalancing within a node is presented. Finally, for the imbalance in the X environment, a solution that uses the imbalance to run a second application is also presented.
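As a purely illustrative aside, the kind of intra-node resource rebalancing mentioned in this abstract can be pictured as redistributing the cores of a node among MPI processes in proportion to their estimated load rather than evenly. The greedy sketch below is a generic toy under that assumption; it is not the scheduler developed in the thesis.

```python
# Toy illustration of load-proportional core rebalancing between MPI
# processes sharing one node. Generic sketch, not the thesis's mechanism.

def rebalance_cores(loads, total_cores):
    """Give each rank one core, then hand out the remaining cores one by one
    to the rank whose cores-per-unit-of-load ratio is currently lowest."""
    n = len(loads)
    assert total_cores >= n and all(l > 0 for l in loads)
    cores = [1] * n
    for _ in range(total_cores - n):
        neediest = min(range(n), key=lambda r: cores[r] / loads[r])
        cores[neediest] += 1
    return cores

if __name__ == "__main__":
    # Four MPI ranks on a 16-core node; rank 2 is twice as loaded as the
    # others, so it ends up with roughly twice as many cores.
    print(rebalance_cores([1.0, 1.0, 2.0, 1.0], 16))  # -> [4, 3, 6, 3]
```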
Petitgirard, Jean-Yves. "Le traitement de l'anglais oral dans un environnement informatique interactif multimedia." Chambéry, 1999. http://www.theses.fr/1999CHAML001.
Computer-based teaching, which grew out of automated teaching, has been one of the central issues of the past twenty years. The increasing use of computers in education has opened up a new series of possibilities for both the teacher and the learner of English. However, although many of the questions raised by the initial technological breakthrough are still pertinent, the rate of development has in many ways modified the situation. While at the beginning computer-based learning applications were essentially text-based, the development of multimedia allows us to incorporate sound and images into the learning process. Perhaps the most telling developments have been the standardisation of formats along with the increasing use of the CD-ROM. In order to provide the learner with original exercises, any development of computer tools must take into account the latest possibilities offered by information technology, such as direct access, simulation and interactivity. An analysis taking into account, on the one hand, the processing of speech by machine and, on the other hand, the salient features of aural comprehension allows us to construct a framework for the teaching and learning of the latter competence, based on quality, quantity, strategy and communication. For each of these categories, interactive multimedia computer-based teaching offers a number of advantages. In particular, it allows a double approach: at the level of the application, taking into account the learner's specific needs and the various options available, and at the level of the specificity of the interactions the learners will have to deal with. It is particularly at this last level that a wider range of innovative activities, most of which are only possible using information technology, can now be designed.
Milhaud, Gérard. "Un environnement pour la composition de phrases assistée." Aix-Marseille 2, 1994. http://www.theses.fr/1994AIX22079.
Rémy, Didier. "Paradeis : un environnement de développement parallèle pour le traitement du creux." Evry-Val d'Essonne, 2000. http://www.theses.fr/2000EVRY0015.
Namyst, Raymond. "Pm2 : un environnement pour une conception portable et une exécution efficace des applications parallèles irrégulières." Lille 1, 1997. http://www.theses.fr/1997LIL10028.
Guessoum, Zahia. "Un environnement opérationnel de conception et de réalisation de systèmes multi-agents." Paris 6, 1996. http://www.theses.fr/1996PA066577.
Mas-Bellissent, Christine. "La responsabilité contractuelle de droit commun du prestataire de service informatique : contribution à l'étude de la prestation de service." Pau, 1994. http://www.theses.fr/1994PAUU2034.
Amamou, Ahmed. "Isolation réseau dans un datacenter virtualisé." Paris 6, 2013. http://www.theses.fr/2013PA066343.
This thesis is intended to meet the expectations of the scientific community and the needs of cloud operators regarding network isolation in a virtualized datacenter. It focuses on layer 2 scaling in order to better identify the obstacles and opportunities in modern virtualized datacenter networks. First, it presents a new algorithm for dynamic bandwidth allocation at the physical nodes in order to overcome the problem of internal denial-of-service attacks while respecting all tenants' SLAs. Second, it uses an adaptation of TRILL (RFC 6325) switches, called RBridges, on physical nodes within a virtualized data center, thus addressing the layer 2 scalability problems. Finally, it proposes a new mechanism called VNT (Virtual Network over TRILL), allowing flexible creation of logical networks. This mechanism, which includes a new identifier, the VNI (Virtual Network Identifier), allows the coexistence of more than 16 million logical networks within the same physical network.
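The figure of "more than 16 million logical networks" is consistent with a 24-bit network identifier (2^24 = 16,777,216), the width used for instance by VXLAN's VNI. Assuming such a 24-bit field (the exact VNT header layout is not given here), the sketch below shows how an identifier of that size can be packed into and recovered from a frame header.

```python
# Illustrative handling of a 24-bit virtual network identifier.
# The 24-bit width is an assumption consistent with "more than 16 million
# logical networks" (2**24 = 16,777,216); it is not taken from the thesis.

VNI_BITS = 24
MAX_VNI = (1 << VNI_BITS) - 1  # 16_777_215

def encode_vni(vni: int) -> bytes:
    """Pack a VNI into the 3 bytes a header field would carry (big-endian)."""
    if not 0 <= vni <= MAX_VNI:
        raise ValueError(f"VNI must fit in {VNI_BITS} bits")
    return vni.to_bytes(3, "big")

def decode_vni(field: bytes) -> int:
    """Recover the VNI from its 3-byte header field."""
    return int.from_bytes(field[:3], "big")

if __name__ == "__main__":
    print(f"{MAX_VNI + 1:,} distinct logical networks")  # 16,777,216
    tenant_network = 5_430_021                            # arbitrary example
    assert decode_vni(encode_vni(tenant_network)) == tenant_network
```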
Books on the topic "Centres de traitement informatique – Environnement"
Peltier, Thomas R. Policies & procedures for data security: A complete manual for computer systems and networks. San Francisco: Miller Freeman Inc., 1991.
The art of managing software development people. New York: Wiley, 1985.
IFIP TC11 International Conference on Information Security (11th 1995 Cape Town, South Africa). Information security-the next decade: Proceedings of the IFIP TC11 Eleventh International Conference on Information Security, IFIP/Sec '95. London: Chapman & Hall on behalf of the IFIP, 1995.
Minoli, Daniel. Analyzing outsourcing: Reengineering information and communication systems. New York: McGraw-Hill, 1995.
Blanding, Steven F., ed. Enterprise operations management handbook. 2nd ed. Boca Raton, Fla.: Auerbach, 2000.
Corporate Computer and Network Security, Early Edition. Prentice Hall, 2002.
Gentile, Michael, Ron Collette, and Thomas D. August. CISO Handbook: A Practical Guide to Securing Your Company. Auerbach Publishers, Incorporated, 2016.
Gentile, Michael, Ron Collette, and Thomas D. August. CISO Handbook. Taylor & Francis Group, 2005.
The CISO Handbook: A Practical Guide to Securing Your Company. Auerbach, 2005.
Choosing and Keeping Computer Staff: Recruitment, Selection and Development of Computer Personnel. Taylor & Francis Group, 2017.
Book chapters on the topic "Centres de traitement informatique – Environnement"
Fages, François, and Franck Molina. "La cellule, un calculateur analogique chimique." In Approches symboliques de la modélisation et de l'analyse des systèmes biologiques, 255–74. ISTE Group, 2022. http://dx.doi.org/10.51926/iste.9029.ch7.