Theses on the topic "Centre de données virtualisé"
Consult the top 50 theses for your research on the topic "Centre de données virtualisé".
Tayachi, Zeineb. "Sûreté de fonctionnement et provisionnement éco-énergétique dans les centres de données virtualisés IaaS". Electronic Thesis or Diss., Paris, CNAM, 2021. http://www.theses.fr/2021CNAM1292.
Cloud computing allows users to exploit services such as infrastructures, platforms and applications. This brings considerable cost and time savings, since users do not need to buy and manage equipment; moreover, they pay only for the resources they use (pay-as-you-go). With increasingly large-scale applications and the need to store huge quantities of data, data centers have been widely deployed. However, studies have shown the underutilization of resources. Therefore, Cloud providers resort to virtualization technologies, which have been adopted by data center architectures, and virtualized data centers have been deployed. A virtualized data center is a data center where some or all of the hardware (e.g., servers, routers, switches, and links) is virtualized using software called a hypervisor, which divides the equipment into multiple isolated and independent virtual instances (e.g., virtual machines (VMs)). However, equipment performance can be degraded by several phenomena such as software aging. In this thesis, we focus on the performance evaluation of two components of data centers, the virtualized server and the virtual switch, using modeling formalisms. The first contribution concerns the performability modeling and analysis of server virtualized systems subject to software aging and software rejuvenation and implementing an energy management policy. A modular approach based on SRNs is proposed to investigate dependencies between several server virtualized modules. Numerical analysis shows how a workload with bursty nature impacts performability metrics. This can support decision making related to rejuvenation scheduling algorithms and the selection of a suitable rejuvenation mechanism. The second contribution concerns the virtual switch (VS), which is considered a key element in data center networks since it carries the communication between virtual machines. An analytical queueing model with batch arrivals and server vacations is proposed to evaluate VS performance with several network interface cards and several CPU cores. Performance metrics are obtained as a function of two proposed batch acceptance strategies and of the mean batch size. Numerical results are meaningful for sizing virtual switch resources.
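To make the queueing idea above concrete, here is a minimal, self-contained sketch, not code from the thesis: Poisson batch arrivals, a geometric batch-size law, exponential service, a finite buffer and two hypothetical batch-acceptance strategies are all illustrative assumptions, and server vacations are omitted for brevity.

```python
# Toy discrete-event simulation of a virtual-switch buffer fed by packet batches.
import random

def simulate(strategy="whole", lam=0.2, mu=1.0, mean_batch=4.0,
             buffer_size=32, n_events=200_000, seed=1):
    rng = random.Random(seed)
    now, queue, arrived, lost = 0.0, 0, 0, 0
    next_arrival = rng.expovariate(lam)        # time of next batch arrival
    next_departure = float("inf")              # time of next service completion
    for _ in range(n_events):
        if next_arrival <= next_departure:     # event: a batch of packets arrives
            now = next_arrival
            batch = 1
            while rng.random() > 1.0 / mean_batch:   # geometric size, mean = mean_batch
                batch += 1
            arrived += batch
            free = buffer_size - queue
            if strategy == "whole":            # accept the batch only if it fits entirely
                accepted = batch if batch <= free else 0
            else:                              # "partial": keep what fits, drop the rest
                accepted = min(batch, free)
            if queue == 0 and accepted > 0:    # server was idle: start a service
                next_departure = now + rng.expovariate(mu)
            queue += accepted
            lost += batch - accepted
            next_arrival = now + rng.expovariate(lam)
        else:                                  # event: a packet finishes service
            now = next_departure
            queue -= 1
            next_departure = now + rng.expovariate(mu) if queue > 0 else float("inf")
    return lost / max(arrived, 1)

for s in ("whole", "partial"):
    print(f"{s} acceptance -> loss ratio ~ {simulate(strategy=s):.4f}")
```

Running it shows how the choice of acceptance strategy changes the packet loss ratio for the same offered load, which is the kind of trade-off the analytical model above quantifies exactly.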
Dumont, Frédéric. "Analyses et préconisations pour les centres de données virtualisés". Thesis, Nantes, Ecole des Mines, 2016. http://www.theses.fr/2016EMNA0249/document.
This thesis presents two contributions. The first contribution is the study of key performance indicators to monitor physical and virtual machine activity running on VMware and KVM hypervisors. This study highlights performance metrics and provides advanced analysis with the aim of preventing or detecting abnormalities related to the four main resources of a datacenter: CPU, memory, disk and network. The second contribution relates to a tool for detecting virtual machines with predetermined and/or atypical behaviors. The detection of these virtual machines has several objectives. First, optimize the use of hardware resources by freeing up resources, removing unnecessary virtual machines or resizing oversized ones. Second, optimize infrastructure performance by detecting undersized or overworked virtual machines and those having an atypical activity.
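As a deliberately simplified illustration of what flagging an "atypical" virtual machine can look like (not the thesis' tool; the metric values and the 3.5 threshold are made up), a robust z-score on a single monitored metric already separates one outlier VM from the rest:

```python
# Flag VMs whose CPU usage deviates strongly from the fleet median.
import statistics

cpu_usage = {"vm-01": 12.0, "vm-02": 14.5, "vm-03": 11.8, "vm-04": 96.0, "vm-05": 13.2}

median = statistics.median(cpu_usage.values())
mad = statistics.median(abs(v - median) for v in cpu_usage.values())  # median absolute deviation
for vm, value in cpu_usage.items():
    z = 0.6745 * (value - median) / mad if mad else 0.0               # robust z-score
    if abs(z) > 3.5:
        print(f"{vm} looks atypical (robust z = {z:.1f})")
```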
Gogunska, Karyna. "Étude du coût de mesure des réseaux virtualisés". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4077.
The current trend in application development and deployment is to package applications within containers or virtual machines. This results in a blend of virtual and physical resources, with complex network setups mixing virtual and physical switches along with specific protocols to build virtual networks spanning several servers. While this complexity is hidden by cloud management solutions, this new environment constitutes a challenge when it comes to monitoring and debugging performance-related issues. In this thesis, we consider the problem of measuring traffic in a virtualized environment and focus on one typical scenario: virtual machines interconnected with a virtual switch. We assess the cost of continuously measuring the network traffic of the machines. Specifically, we seek to estimate the competition between the measurement task and the application for access to the resources (e.g., CPU) of the physical substrate. We confirm the negative impact of measurement in such a setup and propose actions towards its minimization. Having characterized the interference of measurement with the virtual network, we then turn our work towards minimizing its presence in the network. We assess the capability of machine learning techniques to predict the measurement impact on the ongoing traffic between virtual machines. We propose a data-driven solution that is able to provide optimal monitoring parameters for virtual network measurements with minimum traffic interference.
Le, Louët Guillaume. "Maîtrise énergétique des centres de données virtualisés : D'un scénario de charge à l'optimisation du placement des calculs". Phd thesis, Ecole des Mines de Nantes, 2014. http://tel.archives-ouvertes.fr/tel-01044650.
Cerović, Danilo. "Architecture réseau résiliente et hautement performante pour les datacenters virtualisés". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS478.
The amount of traffic in data centers is growing exponentially and is not expected to stop growing any time soon. This brings about a vast amount of advancements in the networking field. Network interface throughputs supported today are in the range of 40 Gbps and higher. On the other hand, such high interface throughputs do not guarantee higher packet processing speeds, which are limited by the overheads imposed by the architecture of the network stack. Nevertheless, there is a great need for a speedup in the forwarding engine, which is the most important part of a high-speed router. For this reason, many software-based and hardware-based solutions have emerged recently with the goal of increasing packet processing speeds. The networking stack of an operating system is not conceived for high-speed networking applications but rather for general-purpose communications. In this thesis, we investigate various approaches that strive to improve packet processing performance on server-class network hosts, either by using software, hardware, or a combination of the two. Some of the solutions are based on the Click modular router, which offloads its functions onto different types of hardware such as GPUs, FPGAs or different cores among different servers with parallel execution. Furthermore, we explore other software solutions that are not based on the Click modular router. We compare software and hardware packet processing solutions based on different criteria and discuss their integration possibilities in virtualized environments, their constraints and their requirements. As our first contribution, we propose a resilient and highly performant fabric network architecture. Our goal is to build a layer 2 mesh network that uses only directly connected hardware acceleration cards that perform packet processing, instead of routers and switches. We decided to use the TRILL protocol for the communication between these smart NICs as it provides better utilization of network links while also providing least-cost pair-wise data forwarding. The data plane packet processing is offloaded to programmable hardware with parallel processing capability. Additionally, we propose to use the ODP API so that packet processing application code can be reused by any other packet processing solution that supports the ODP API. As our second contribution, we designed a data plane of the TRILL protocol on the MPPA (Massively Parallel Processor Array) smart NIC, which supports the ODP API. Our experimental results show that we can process TRILL frames at full-duplex line rate (up to 40 Gbps) for different packet sizes while reducing latency. As our third contribution, we provide a mathematical analysis of the impact of different network topologies on the control plane's load. The data plane packet processing is performed on the MPPA smart NICs. Our goal is to build a layer 2 mesh network that uses only directly connected smart NIC cards instead of routers and switches. We considered various network topologies and compared their loads induced by the control plane traffic. We also showed that the hypercube topology is the most suitable for our PoP data center use case because it does not have a high control plane load and has better resilience than fat-tree, while having a shorter average distance between the nodes.
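A back-of-the-envelope way to reproduce the flavor of that topology comparison is sketched below; it is an illustration under our own assumptions (networkx graphs, average shortest-path length as a crude proxy), not the analytical control-plane model of the thesis.

```python
# Compare a hypercube fabric with a same-sized 2D torus.
import networkx as nx

def stats(g):
    return {"nodes": g.number_of_nodes(),
            "max_degree": max(d for _, d in g.degree()),
            "avg_distance": round(nx.average_shortest_path_length(g), 3)}

hypercube = nx.hypercube_graph(6)                  # 2^6 = 64 directly connected smart NICs
torus = nx.grid_2d_graph(8, 8, periodic=True)      # 64-node 2D torus, for comparison
print("hypercube:", stats(hypercube))
print("torus:    ", stats(torus))
```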
Moussa, Hadjer. "Traitement automatique de données océanographiques pour l'interpolation de la fCO₂ de surface dans l'océan Atlantique tropical, en utilisant les données satellitaires". Thesis, Perpignan, 2016. http://www.theses.fr/2016PERP0025/document.
This thesis work consists of using satellite data of SST (sea surface temperature), SSS (sea surface salinity), and Chl-a (chlorophyll-a) in order to interpolate the CO2 fugacity (fCO2) at the surface of the tropical Atlantic Ocean, for the seasons of the period 2002-2013. Three data types were used: in situ (SOCAT V.3 database); satellite (MODIS-A, SeaWiFS, and SMOS sensors); and assimilated (SODA V.2.2.4 database). The first step was data classification based on SST. The second step was the fCO2 interpolation (for each class of each season), using feedforward neural networks (ANNs) with a backpropagation learning method. The results obtained (RMSEs (root mean square errors) between 8.8 and 15.7 µatm) confirm the importance of processing each season separately, going through the data classification step, and choosing the best network on the basis of the generalization results. This allowed the development of 138 monthly fCO2 CSV (comma-separated values) files, with 4 km x 4 km spatial resolution, for the period from July 2002 to December 2013.
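The general regression setup described above can be sketched in a few lines; this is purely illustrative (synthetic data and an assumed one-hidden-layer scikit-learn network, not the thesis' actual data, classes or architecture):

```python
# Feedforward network regressing fCO2 from SST, SSS and Chl-a (all synthetic).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform([20.0, 30.0, 0.01], [30.0, 37.0, 1.0], size=(5000, 3))      # SST, SSS, Chl-a
y = 380 - 8 * (X[:, 0] - 25) + 3 * (X[:, 1] - 34) + rng.normal(0, 2, 5000)  # synthetic fCO2 (µatm)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
rmse = float(np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))
print(f"RMSE on held-out samples: {rmse:.1f} µatm")
```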
Hamadache, Clarisse. "Recherche d'effets de microlentille gravitationnelle vers le centre galactique avec les données d'EROS-II". Phd thesis, Université Louis Pasteur - Strasbourg I, 2004. http://tel.archives-ouvertes.fr/tel-00008874.
Dumas Feris, Barbara Pilar. "Réseaux optiques en mode paquet pour les connexions internes à un centre de données". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0057/document.
Data-center energy consumption is nowadays a major issue. Intra-data-center networking accounts for almost a quarter of the data-center total power consumption. Optical switching technologies could provide higher power efficiency than current solutions based on electrical packet switching. This work focuses on optical-packet-switched networks for small- and medium-size data centers. It is part of the EPOC (Energy-Proportional and Opportunistic Computing) project, whose main interest is reducing the overall power consumption of a data center partially powered by renewable sources. A key assumption is that our data center does not rely on a dedicated storage network, in order to reduce the consumption of those interconnections. In addition, with the aim of being able to turn off some servers according to the workload and the available energy, the bit rate must be close to 100 Gbit/s. After studying the state of the art of data-center interconnects, we chose a purely passive network architecture based on fast-wavelength-tunable transmitters, named POPI. We study POPI's limitations due to its components (insertion loss, tuning range and channel spacing). We then propose an extension called E-POPI that increases the number of connected servers by using several transmission bands. For larger data centers, we propose POPI+, a two-stage infrastructure for intra- and inter-rack communications operating in the C and L bands, respectively. The connection between both stages is done via a transparent gateway in one direction and an opaque one in the other. We discuss different control solutions for both stages. The feasibility of these architectures depends, among other factors, on coping with bit-rate increases and the power losses of a passive interconnect. Coherent long-distance-transmission techniques are not currently suited to data centers. We therefore studied PAM-4 and PAM-8 modulation formats with direct detection, on the one hand by simulation, with different bit rates (up to 112 Gbit/s) and receivers (PIN, APD and SOA-PIN), and on the other hand experimentally, at 12 and 18 Gbit/s. We have developed a method for compensating the distortions generated by the different network components. Our method takes into account a good trade-off between correction accuracy and computation time. Simulation results allow us to determine the amount of insertion loss that may be supported. We then combine these results with the limitations of the transmitters' tuning range and a channel spacing using multiples of 12.5 GHz slots for dimensioning the proposed architectures. The POPI, E-POPI and POPI+ interconnects allow the connection of 48, 99 and 2352 entities, respectively, at 112 Gbit/s. Our assessments take into account a potential dispersion of the characteristics of the main architecture components.
Hamadache, Clarisse. "Recherches d'effets de microlentille gravitationnelle vers le centre galactique avec les données d'Eros II". Université Louis Pasteur (Strasbourg) (1971-2008), 2004. https://publication-theses.unistra.fr/public/theses_doctorat/2004/HAMADACHE_Clarisse_2004.pdf.
The systematic search for gravitational microlensing effects towards the Galactic centre makes it possible to probe the Galactic structure. The thesis work presented here concerns the analysis of all Galactic centre data collected by the EROS-2 experiment during 7 years (1996-2003): the survey of 66 square degrees located on both sides of the galactic plane allowed the construction of the light curves of approximately 50 million stars in two filters. Gravitational microlensing events with a duration ranging between 4 days and 500 days and a maximum magnification higher than 2.18 were required; this makes it possible to select convincing candidates and constitutes an originality compared to previous analyses (EROS-2 and other experiments), where the maximum magnification was only required to be higher than 1.34. The analysis revealed 139 microlensing candidates. This sample contains 91 candidates whose source is a clump red giant star, with an associated detection efficiency of 56%. The optical depth obtained for the clump red giant sources is (1.79 ± 0.20) × 10⁻⁶. This value is in good agreement with predicted values as well as with the latest result of the MACHO group, but it is lower than the OGLE and MOA group results, which are 2 to 3 times higher than the predicted one. In addition, the large statistics of Galactic centre data collected by EROS-2 made it possible to calculate the optical depth for various galactic latitudes, and to detect the gradient of optical depth expected in Galactic models.
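For readers unfamiliar with the quantity quoted above, the optical depth in such surveys is usually estimated from the detected events' Einstein-radius crossing times weighted by the detection efficiency; the standard estimator is sketched below with assumed notation, not quoted from the thesis.

```latex
% N_* monitored stars, observation duration T_obs, Einstein crossing times t_{E,i},
% detection efficiency \epsilon(t_E).
\tau \;\simeq\; \frac{\pi}{2\, N_{*}\, T_{\mathrm{obs}}}
      \sum_{i} \frac{t_{\mathrm{E},i}}{\epsilon\!\left(t_{\mathrm{E},i}\right)}
```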
Kaced, Yazid. "Études du refroidissement par free cooling indirect d’un bâtiment exothermique : application au centre de données". Thesis, Lorient, 2018. http://www.theses.fr/2018LORIS499/document.
A data center is a facility that hosts telecommunication equipment, network infrastructure, servers, and computers. This equipment leads to a very high heat dissipation which must be compensated by the use of cooling systems. Telecommunication standards impose restricted climatic ranges (temperature and humidity), leading to a very high energy consumption devoted to air conditioning. Reducing this energy consumption constitutes a real challenge. Many cooling solutions have been proposed, such as free cooling, which consists in cooling equipment with outside air under favorable climatic conditions. The work carried out during this thesis is based on experiments conducted within a building in real climatic conditions in order to study the cooling of telecom cabinets. During this study, the building configuration was modified, an indirect free-cooling system was set up and significant instrumentation was implemented. The objectives are to establish performance factors derived from measurements, and to develop and validate a numerical model in order to predict the thermo-aeraulic behavior of this type of solution. Initially, experiments were carried out with a power dissipated inside the building and cooling provided only by an outside air circulation. Then, significant modifications were made to the building to introduce an internal air circulation in a closed loop in order to evacuate the heat dissipated inside the cabinets by a crossing airflow. In order to build a convincing database, measurements were conducted using one and then several cabinets under different conditions. Operating parameters were modified in order to better understand the installation's operation and to define the energy optimization parameters. Numerical models were developed with TRNSYS/TRNFLOW. The comparison of simulations with measurements shows the relevance of the implemented approach.
Ali, Muhammad. "Stockage de données codées et allocation de tâches pour les centres de données à faible consommation d'énergie". Electronic Thesis or Diss., CY Cergy Paris Université, 2023. http://www.theses.fr/2023CYUN1243.
Data centers are responsible for a significant portion of global energy consumption. This consumption is expected to grow in the coming years, driven by the increasing demand for data center services. Therefore, the need for energy-efficient, low-carbon data center operation is growing rapidly. This research focuses on designing and implementing a low-carbon, energy-efficient data center powered by solar energy and hydrogen, granting it independence from the power grid. As a result, the data center is limited by an upper bound on energy consumption of 10 kWh. This maximum-energy constraint imposes several challenges on the design, energy usage, and sustainability of the data center. The work first contributes to designing a low-power-budget data center while respecting the overall energy constraint. We tried to save energy through the right choice of hardware while keeping the performance of the data center intact. The second contribution of our work provides valuable protocols such as lazy repair in distributed data storage, job placement, and power management techniques to further reduce the data center's energy usage. With the combined efforts of the right choice of hardware, protocols, and techniques, we significantly reduced the overall energy consumption of the data center.
Marchadier, Elodie Sylviane Germaine. "Etude fonctionnelle d'un centre d'interactions protéiques chez Bacillus subtilis par une approche intégrée". Paris 11, 2009. http://www.theses.fr/2009PA112047.
The entire complement of proteins expressed by a genome forms the proteome. The proteome is organized in structured networks of protein interactions: the interactome. In these networks, most proteins have few interactions whereas a few proteins have many connections: these proteins are called centres of interactions, or hubs. This thesis focused on an important biological question: understanding the biological function of a cluster of hubs (CoH) discovered in Bacillus subtilis, which is located at the interface of several essential cellular processes: DNA replication, cell division, chromosome segregation, stress response and biogenesis of the bacterial cell wall. The partners of the proteins of the cluster of hubs were first identified by the yeast two-hybrid technique, which helped us to define it rigorously as a network composed of 287 proteins connected by 787 interactions. This network shows many proteins in a new context, thereby facilitating the functional analysis of individual proteins and of the links between major cellular processes. After a study of the genomic context of the genes of the CoH, an integrative biology approach was initiated by analyzing heterogeneous transcriptome data available in public databases. Statistical analysis of these data identified groups of genes co-regulated with the genes of the cluster of hubs. At first, the analysis of correlations between the expression of genes across various conditions was performed using classical statistics such as unsupervised classification. This first analysis allowed us to associate genes of the CoH with functional groups, and to validate and identify regulons. It also enabled us to highlight the limitations of this approach and the need to resort to methods allowing identification of the conditions in which genes are co-regulated. To this end, we (i) generated transcriptome data to promote the differential expression of genes coding for CoH proteins and (ii) used bi-clustering methods to identify groups of genes co-expressed in a wide range of conditions. This led us to identify expression associations in specific conditions among the genes of the CoH. It was therefore possible to combine two approaches, the study of the transcriptome and of the interactome, both conducted in a systematic manner on the whole genome. The integration of these two kinds of data allowed us to clarify the functional context of the genes of interest and to make assumptions about the nature of the interactions between the proteins of the cluster of hubs. It finally appears to be composed of a few groups of co-expressed proteins (party hubs) which can interact together, and of other proteins expressed in an uncorrelated manner (date hubs). The CoH could form a large group of date hubs whose function could be to ensure the connection between basic cellular processes, whatever environmental conditions B. subtilis is exposed to. The generation and processing of such a data set is a major scientific challenge; it requires the mobilization of skills, knowledge and tools to gain a better understanding of living organisms. The constituted data set may be used to implement other statistical methods. All of this will provide methods to ultimately extract information from the large data sets that are currently produced. This is the major issue of integrative biology.
Jagueneau, Liliane. "Structuration de l'espace linguistique entre Loire et Gironde : analyse dialectométrique des données phonétiques de l'"Atlas linguistique et ethnographique de l'Ouest"". Toulouse 2, 1987. http://www.theses.fr/1987TOU20082.
This study deals with the geolinguistic structuration of phonetic features between the Loire and the Gironde, in the "Centre-Ouest" (Vendée, Deux-Sèvres, Vienne, Charente-Maritime, Charente, and some surrounding points), a boundary area between the northern and southern languages of France ("oïl" and "oc"). It first presents the phonetic description of this area, derived from the maps of the Atlas linguistique et ethnographique de l'Ouest (B. Horiot, G. Massignon). Then, the spatial distribution of these phonetic features is analysed: in fact they are neither scattered at random nor ordered according to strict dialect limits. After the automatic analysis of these data, a new structuration of linguistic space is put forward. On the one hand, the space structuration which results from the cluster analysis of the languages (eighty-two points) is quite similar to the geological one, and partly corresponds to historical, cultural or economic areas, but it always differs from administrative divisions. On the other hand, the cluster analysis of phonetic features reveals a new geolinguistic scheme: these phonetic features are distributed in a "nucleus" and then diffuse into an "area of influence" (theory of geolinguistic nuclei). Finally, through multivariate analysis, attention is drawn to the relations between the points themselves, and to the relations between the points and the phonetic modalities.
Dumas, Stéphane. "Développement d'un système de veille stratégique dans un centre technique". Aix-Marseille 3, 1994. http://www.theses.fr/1994AIX30063.
Duranthon, Sophie. "Intoxications par les produits agricoles : bilan sur 5 années de données recueillies au centre anti-poison de Bordeaux (1990-1994)". Bordeaux 2, 1996. http://www.theses.fr/1996BOR2P007.
Paradis, René. "Implantation d'un système de cueillette et d'analyse de données générées par la plate-forme de protéomique du centre de recherche du CHUL". Thesis, Université Laval, 2004. http://www.theses.ulaval.ca/2004/21917/21917.pdf.
Parent, Daniel. "Données épidémiologiques, pronostiques et thérapeutiques : à propos de 167 cas de pneumopathies dans le service de réanimation du Centre hospitalier de Tourcoing". Lille 2, 1989. http://www.theses.fr/1989LIL2M181.
Ondo, Assoumou Emmanuel. "Dynamique des paysages végétaux du littoral centre-ouest du Gabon autour de Port-Gentil : approche spatiale et analyse des données de terrain". Montpellier 3, 2006. http://www.theses.fr/2006MON30044.
This work, centered on the spatio-temporal variations of the plant landscapes and on the morphodynamics of the coastline in the region of Port-Gentil, is a contribution to the reflection on the problems facing coastal zones in view of future fluctuations (climate change, sea-level rise). The object of this study is, starting from conceptual tools (the concepts of landscape and dynamics) and methodological ones (field surveys: transects and quadrats; topographic maps; and remote sensing: aerial photographs and satellite images), on the one hand to inventory and characterize the plant landscapes. On the other hand, we aim to understand and follow the evolution of the coastal plant formations and the speed of evolution of the coastline, by locating the sectors of accretion and the sectors of erosion in the region of Port-Gentil. The study of the vegetation structure also made it possible to highlight the various strategies of spatial conquest and the models of occupation of space used by the mangroves and by Melaleuca leucadendron.
Jung, François. "L'hospitalisation des vieillards déments en Centre hospitalier spécialisé : données comparatives avec une population de référence de malades déments hospitalisés en long séjour". Université Louis Pasteur (Strasbourg) (1971-2008), 1986. http://www.theses.fr/1986STR1M060.
Toillon, Michel. "L'hospitalisation des vieillards déments en long séjour : données comparatives avec une population de référence de malades déments hospitalisés en centre hospitalier spécialisé". Université Louis Pasteur (Strasbourg) (1971-2008), 1986. http://www.theses.fr/1986STR1M061.
Blanc Beyne, Thibault. "Estimation de posture 3D à partir de données imprécises et incomplètes : application à l'analyse d'activité d'opérateurs humains dans un centre de tri". Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0106.
In a context of studying stress and ergonomics at work for the prevention of musculoskeletal disorders, the company Ebhys wants to develop a tool for analyzing the activity of human operators in a waste sorting center by measuring ergonomic indicators. To cope with the uncontrolled environment of the sorting center, these indicators are measured from depth images. An ergonomic study allows us to define the indicators to be measured. These indicators are zones of movement of the operator's hands and zones of angulation of certain joints of the upper body. They are therefore indicators that can be obtained from an analysis of the operator's 3D pose. The software for calculating the indicators is thus composed of three steps: a first part segments the operator from the rest of the scene to ease the 3D pose estimation, a second part estimates the operator's 3D pose, and a third part uses the operator's 3D pose to compute the ergonomic indicators. First of all, we propose an algorithm that extracts the operator from the rest of the depth image. To do this, we use a first automatic segmentation based on static background removal and selection of a moving element given its position and size. This first segmentation allows us to train a neural network that improves the results. This neural network is trained using the segmentations obtained from the first automatic segmentation, from which the best-quality samples are automatically selected during training. Next, we build a neural network model to estimate the operator's 3D pose. We propose a study that allows us to find a light and optimal model for 3D pose estimation on synthetic depth images, which we generate numerically. However, while this network gives outstanding performance on synthetic depth images, it is not directly applicable to the real depth images that we acquired in an industrial context. To overcome this issue, we finally build a module that allows us to transform the synthetic depth images into more realistic depth images. This image-to-image translation model modifies the style of the depth image without changing its content, keeping the 3D pose of the operator from the synthetic source image unchanged in the translated realistic depth frames. These more realistic depth images are then used to re-train the 3D pose estimation neural network, to finally obtain a convincing 3D pose estimation on depth images acquired in real conditions, from which the ergonomic indicators are computed.
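As a toy illustration of the last step (turning an estimated 3D pose into one ergonomic indicator), the snippet below computes the flexion angle at a joint from three keypoints; the coordinates and keypoint names are made up, and the thesis' indicator definitions are richer than this.

```python
# Joint angle from three 3D keypoints of an estimated pose.
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by the segments b->a and b->c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

shoulder, elbow, wrist = [0.0, 1.40, 0.30], [0.25, 1.15, 0.35], [0.45, 1.00, 0.55]
print(f"elbow flexion ~ {joint_angle(shoulder, elbow, wrist):.0f} degrees")
```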
Deshayes, Perrine. "Tomographie en vitesse et en atténuation de la zone de subduction au Chili central - ouest de l'Argentine (29°S-34°S) à partir de données sismologiques locales: apport à l'étude de la composition minéralogique". Phd thesis, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00360063.
Texto completosous la plaque continentale sud-Américaine. Cette région est une zone de transition entre une
subduction plate et une subduction pentue. Au nord de 33°S, où le slab devient plat vers 100 km de
profondeur, la ride de Juan Fernandez subducte le long de la plaque océanique Nazca. Alors que dans
cette région, le volcanisme quaternaire s'arrête vers 5-7 Ma, au sud de 33°S, où la plaque océanique
plonge avec un angle de 30°,la majorité des édifices volcaniques sont actifs. A partir de l'enregistrement
de séismes locaux au travers de deux campagnes sismologiques, nous avons réalisé une tomographie des
écarts des temps d'arrivée et du paramètre d'atténuation t* = t/Q, afin de déterminer des modèles
tridimensionnels d'une part de vitesse et d'autre part d'atténuation des ondes P et S. La plaque subduite,
plus froide que le manteau dans lequel elle plonge, est un milieu où les ondes P et S se propagent
rapidement et sont faiblement atténuées. L'un des blocs tectoniques constituant la croûte continental
(bloc Cuyania) se caractérise par des vitesses rapides des ondes sismiques et une forte atténuation de
l'onde S. Sous les édifices volcaniques actifs, la vitesse de ses ondes est plus faible due probablement à la
présence de fusion partielle. Les modèles de vitesse des ondes P et S, combinés à un modèle thermique
bidimensionnel à 31.5°S déterminé dans cette étude, ont permis d'obtenir un modèle minéralogique de
la lithosphère continentale et de la croûte océanique de la plaque Nazca. Cette croûte est composée
de Blueschists jusqu'à 80 km de profondeur et d'Eclogite plus profond. De la serpentine est observée
dans le coin mantellique considéré comme "froid". Le manteau continental est constitué par un mélange
d'Harzburgites et de Lherzolites plus ou moins hydratées. Un faciès éclogite est observé à la base de
la croûte continentale. Les modèles d'atténuation ont une résolution spatiale trop faible pour pouvoir
améliorer les modèles thermiques et par conséquent les modèles minéralogiques de la zone de subduction
plate.
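For reference, the attenuation parameter used above has a standard definition, written here with assumed notation (travel time t along the ray path, quality factor Q):

```latex
t^{*} \;=\; \int_{\text{ray}} \frac{\mathrm{d}t}{Q}
\qquad\text{which reduces to } \frac{t}{Q} \text{ when } Q \text{ is constant along the path.}
```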
Avoine, Marie-Pierre. "La signification de pratiques déshumanisantes telles que vécues par des patients hospitalisés ou ayant été hospitalisés en centre de réadaptation". Mémoire, Université de Sherbrooke, 2012. http://hdl.handle.net/11143/6264.
Peoples, Bruce E. "Méthodologie d'analyse du centre de gravité de normes internationales publiées : une démarche innovante de recommandation". Thesis, Paris 8, 2016. http://www.theses.fr/2016PA080023.
“Standards make a positive contribution to the world we live in. They facilitate trade, spread knowledge, disseminate innovative advances in technology, and share good management and conformity assessment practices”. There are a multitude of standard and standard consortia organizations producing market-relevant standards, specifications, and technical reports in the domain of Information Communication Technology (ICT). With the number of ICT-related standards and specifications numbering in the thousands, it is not readily apparent to users how these standards inter-relate to form the basis of technical interoperability. There is a need to develop and document a process to identify how standards inter-relate to form a basis of interoperability in multiple contexts: at a general horizontal technology level that covers all domains, and within specific vertical technology domains and sub-domains. By analyzing which standards inter-relate through normative referencing, key standards can be identified as technical centers of gravity, allowing identification of the specific standards that are required for the successful implementation of the standards that normatively reference them, and forming a basis for interoperability across horizontal and vertical technology domains. This thesis focuses on defining a methodology to analyze ICT standards in order to identify normatively referenced standards that form technical centers of gravity, utilizing Data Mining (DM) and Social Network Analysis (SNA) graph technologies as a basis of analysis. As a proof of concept, the methodology focuses on the published International Standards (IS) of the International Organization for Standardization/International Electrotechnical Commission, Joint Technical Committee 1, Sub-committee 36, Learning, Education, and Training (ISO/IEC JTC1 SC36). The process is designed to be scalable to larger document sets within ISO/IEC JTC1, covering all JTC1 Sub-Committees, and possibly other Standards Development Organizations (SDOs). Chapter 1 provides a review of the literature on previous standard analysis projects and an analysis of the components used in this thesis, such as data mining and graph theory. Identification of a dataset for testing the developed methodology, containing published International Standards needed for analysis and forming specific technology domains and sub-domains, is the focus of Chapter 2. Chapter 3 describes the specific methodology developed to analyze published International Standards documents, and to create and analyze the graphs to identify technical centers of gravity. Chapter 4 presents the analysis of data which identifies technical center of gravity standards for ICT learning, education, and training standards produced in ISO/IEC JTC1 SC36. Conclusions of the analysis are contained in Chapter 5. Recommendations for further research using the output of the developed methodology are contained in Chapter 6.
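The core graph step of such a methodology can be sketched in a few lines; the normative-reference edges below are hypothetical, and the library and centrality measure are our choices for the example, not necessarily those of the thesis.

```python
# Each edge means "standard A normatively references standard B"; heavily referenced
# standards emerge as candidate technical centers of gravity.
import networkx as nx

references = [
    ("ISO/IEC 19788-2", "ISO/IEC 19788-1"),
    ("ISO/IEC 19788-3", "ISO/IEC 19788-1"),
    ("ISO/IEC 19796-3", "ISO/IEC 19796-1"),
    ("ISO/IEC 24751-2", "ISO/IEC 24751-1"),
    ("ISO/IEC 24751-3", "ISO/IEC 24751-1"),
]
g = nx.DiGraph(references)
ranking = sorted(nx.in_degree_centrality(g).items(), key=lambda kv: -kv[1])
for standard, score in ranking[:3]:
    print(f"{standard}: in-degree centrality {score:.2f}")
```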
Tudoran, Radu-Marius. "High-Performance Big Data Management Across Cloud Data Centers". Electronic Thesis or Diss., Rennes, École normale supérieure, 2014. http://www.theses.fr/2014ENSR0004.
The easily accessible computing power offered by cloud infrastructures, coupled with the "Big Data" revolution, is increasing the scale and speed at which data analysis is performed. Cloud computing resources for compute and storage are spread across multiple data centers around the world. Enabling fast data transfers becomes especially important in scientific applications where moving the processing close to the data is expensive or even impossible. The main objectives of this thesis are to analyze how clouds can become "Big Data-friendly", and what the best options are to provide data management services able to meet the needs of applications. In this thesis, we present our contributions to improve the performance of data management for applications running on several geographically distributed data centers. We start with aspects concerning the scale of data processing on a single site, and continue with the development of MapReduce-type solutions allowing the distribution of computations between several centers. Then, we present a transfer service architecture that optimizes the cost-performance ratio of transfers. This service is operated in the context of real-time data streaming between cloud data centers. Finally, we study the viability, for a cloud provider, of the solution consisting in integrating this architecture as a service based on a flexible pricing paradigm, termed "Transfer-as-a-Service".
Vegeto, Nathalie. "Le prélèvement micro-chirurgical de spermatozoïdes épididymaires et la fécondation in vitro : Expérience du Centre hospitalier universitaire de Montpellier-Nîmes et données de la littérature". Montpellier 1, 1991. http://www.theses.fr/1991MON11181.
Moualla, Ghada. "Virtualisation résiliente des fonctions réseau pour les centres de données et les environnements décentralisés". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4061.
Traditional networks are based on an ever-growing variety of network functions that run on proprietary hardware devices called middleboxes. Designing these vendor-specific appliances and deploying them is very complex, costly and time-consuming. Moreover, with the ever-increasing and heterogeneous short-term service requirements, service providers have to scale up their physical infrastructure periodically, which results in high CAPEX and OPEX. This traditional paradigm leads to network ossification and high complexity in network management and service provisioning to address emerging use cases. Network Function Virtualization (NFV) has attracted notable attention as a promising paradigm to tackle such challenges by decoupling network functions from the underlying proprietary hardware and implementing them as software, named Virtual Network Functions (VNFs), able to run on inexpensive commodity hardware. These VNFs can be arranged and chained together in a predefined order, the so-called Service Function Chaining (SFC), to provide end-to-end services. Despite all the benefits associated with the new paradigm, NFV comes with the challenge of how to place the functions of the users' requested services within the physical network while providing the same resiliency as if a dedicated infrastructure were used, given that commodity hardware is less reliable than dedicated hardware. This problem becomes particularly challenging when service requests have to be fulfilled as soon as they arise (i.e., in an online manner). In light of these new challenges, we propose new solutions to tackle the problem of online SFC placement while ensuring the robustness of the placed services against physical failures in data-center (DC) topologies. Although recovery solutions exist, they still require time, during which the impacted services remain unavailable, while smart placement decisions can help avoid the need to react to simple network failures. First, we provide a comprehensive study on how placement choices can affect the overall robustness of the placed services. Based on this study, we propose a deterministic solution applicable when the service provider has full knowledge of and control over the infrastructure. Thereafter, we move from this deterministic solution to a stochastic approach for the case where SFCs are requested by tenants oblivious to the physical DC network, where users only have to provide the SFC they want to place and the required availability level (e.g., 5 nines). We simulated several solutions and the evaluation results show the effectiveness of our algorithms and the feasibility of our propositions in very-large-scale data center topologies, which makes it possible to use them in a production environment. All these solutions work well in trusted environments with a central authority that controls the infrastructure. However, in some cases, many enterprises need to collaborate in order to run tenants' applications, e.g., MapReduce applications. In such a scenario, we move to a completely untrusted decentralized environment with no trust guarantees, in the presence of not only byzantine nodes but also rational nodes. We considered the case of MapReduce applications in such an environment and present an adapted MapReduce framework called MARS, which is able to work correctly in such a context without the need for any trusted third party. Our simulations show that MARS guarantees execution integrity in MapReduce, scaling linearly with the number of byzantine nodes in the system.
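A quick back-of-the-envelope calculation shows why the availability target matters for placement on commodity hardware; the independence-of-failures assumption here is ours, not a claim from the thesis.

```python
# How many VNF replicas on commodity servers are needed to reach a target availability?
import math

def replicas_needed(server_availability: float, target: float) -> int:
    """Smallest k such that 1 - (1 - a)^k >= target, assuming independent failures."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - server_availability))

print(replicas_needed(0.99, 0.99999))   # "five nines" from 99%-available servers -> 3
```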
Minsart, Anne-Frédérique. "Impact de la mise en place d'un Centre d'Epidémiologie Périnatale en Wallonie et à Bruxelles sur les données en santé périnatale et analyse des nouvelles données sur la santé périnatale des immigrants et sur l'impact de l'indice de masse corporelle maternel". Doctoral thesis, Universite Libre de Bruxelles, 2013. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209481.
A problem often encountered in the analysis of birth certificates is the presence of missing data. Information was missing on 64.0% of Brussels certificates in January 2008 (baseline situation). The reinforcement of registration by the CEpiP during 2008 is associated with a decrease of missing information on the initial certificates (at discharge from maternity units and civil registry offices) after the first and second years of registration: 20.8% and 19.5% of births in December 2008 and 2009 respectively. The residual rate of missing data after correction, thanks to the lists sent to maternity units and civil registry offices, is low. In particular, the parents' nationality of origin was often missing, up to 35% in Brussels (unpublished data); this rate fell to 2.6% in 2008 and 0.1% in 2009. Some missing data are not distributed equally according to the mother's nationality, even after correction. Mothers of sub-Saharan origin have the lowest completion rates. Finally, the stillbirth rate increased compared with the 2007 data, mostly for stillbirths before 28 weeks, suggesting an improvement in registration following the reinforcement of information.
Data on patients' body mass index have therefore been recorded since 2009 for all mothers giving birth in Belgium. Maternal obesity and immigration are increasing in Belgium and have rarely been studied through population-based studies on birth certificates. Studies have nevertheless shown that these mothers are at risk of perinatal complications, such as cesarean section or perinatal mortality. Obesity and immigration have in common that they cover medical, social and relational realities with respect to caregivers that put these mothers at risk of perinatal complications.
Differences in obstetric and neonatal complications between immigrant and native populations have been observed in Belgium and in other countries, but they are still poorly understood.
In a first analysis, we assessed perinatal mortality rates among immigrant mothers according to whether or not they were naturalized.
The perinatal mortality rate is overall higher among immigrant mothers (8.6‰) than among non-immigrant mothers (6.4‰).
The perinatal mortality rate is overall higher among non-naturalized mothers (10.3‰) than among naturalized mothers (6.1‰).
The perinatal mortality rate varies according to the mothers' origin, but in each subgroup studied, non-naturalized mothers have a higher perinatal mortality rate.
Studies have variously shown more, or fewer, cesarean sections among immigrant mothers, and generally took few confounding factors into account. In a second analysis, we compared cesarean section rates across several nationality subgroups.
Cesarean section rates vary across nationality subgroups. Mothers from sub-Saharan Africa have an adjusted odds ratio for cesarean section of 2.06 (1.62-2.63) compared with Belgian mothers. The adjusted odds ratio is no longer statistically significant after the introduction of anthropometric variables into the multivariable model for Eastern European mothers, and after the introduction of medical interventions for North African mothers.
Few studies have analyzed the relationship between maternal obesity and neonatal complications, and most of these studies did not adjust their results for several confounding variables. In a third analysis, our aim was to study the relationship between maternal obesity and neonatal parameters, taking into account the type of labor (induced or spontaneous) and the type of delivery (cesarean or vaginal). Infants of obese mothers have a 38% excess of admission to a neonatal care unit after adjustment for all the characteristics of the multivariable model (95% confidence interval: 1.22-1.56); infants of obese mothers in spontaneous and induced labor also have an excess risk of 45% (1.21-1.73) and 34% (1.10-1.63) respectively, whereas after a planned cesarean section the excess risk is 18% (0.86-1.63) and not statistically significant.
Infants of obese mothers have a 31% excess of 1-minute Apgar scores below 7, after adjustment for all the characteristics of the multivariable model (1.15-1.49); infants of obese mothers in spontaneous and induced labor also have an excess risk of 26% (1.04-1.52) and 38% (1.12-1.69) respectively, whereas after a planned cesarean section the excess risk is 50% (0.96-2.36) and not statistically significant.
In 2008, a Centre for Perinatal Epidemiology was created inter alia to assist the Health Departments of Brussels-Capital City Region and the French Community to check birth certificates. A problem repeatedly reported in birth certificate data is the presence of missing data. The purpose of this study is to assess the changes brought by the Centre in terms of completeness of data registration for the entire population and according to immigration status. Reinforcement of data collection was associated with a decrease of missing information. The residual missing data rate was very low. Education level and employment status were missing more often in immigrant mothers compared to Belgian natives both in 2008 and 2009. Mothers from Sub-Saharan Africa had the highest missing rate of socio-economic data. The stillbirth rate increased from 4.6‰ in 2007 to 8.2‰ in 2009. All twin pairs were identified, but early loss of a co-twin before 22 weeks was rarely reported.
Differences in neonatal mortality among immigrants have been documented in Belgium and elsewhere, and these disparities are poorly understood. Our objective was to compare perinatal mortality rates in immigrant mothers according to citizenship status. The perinatal mortality rate varied according to the origin of the mother and her naturalization status: among immigrants, non-naturalized immigrants had a higher incidence of perinatal mortality (10.3‰) than their naturalized counterparts (6.1‰). In a country with a high frequency of naturalization and universal access to health care, naturalized immigrant mothers experience less perinatal mortality than their non-naturalized counterparts.
Our second objective was to provide insight into the differential effect of immigration on cesarean section rates, using Robson classification. Cesarean section rates currently vary between Robson categories in immigrant subgroups. Immigrant mothers from Sub-Saharan Africa with a term, singleton infant in cephalic position, without previous cesarean section, appear to carry the highest burden.
While it is well known that obesity increases morbidity for both mother and fetus and is associated with a variety of adverse reproductive outcomes, few studies have assessed the relation between obesity and neonatal outcomes. This is the aim of the last study, after taking into account type of labor and delivery, as well as social, medical and hospital characteristics, in a population-based analysis. Neonatal admission to intensive care and low Apgar scores were more likely to occur in infants from obese mothers, both after spontaneous and induced labor.
Haubois, Xavier. "Imagerie interférométrique infrarouge et perspectives pour l'observation interférométrique du Centre Galactique : le projet GRAVITY". Phd thesis, Observatoire de Paris, 2009. http://tel.archives-ouvertes.fr/tel-00424467.
The precision of the interferometric observables determines the quality of the image reconstruction. In a second part, I carried out a study of the simulated interferometric performance of GRAVITY in order to estimate the precision of the phases and visibilities it will deliver. In order to optimize future GRAVITY observations, it is essential to have an idea of the spatial and temporal characteristics of its major scientific target, Sgr A*. To this end, I finally took part in a multi-wavelength observation campaign of the environment of this black hole. On this occasion, I used the BURST mode of the VISIR spectro-imager to obtain high angular resolution and high sensitivity to the radiation of Sgr A*. This led me to obtain the lowest upper limit ever recorded at 8.6 microns. Another highlight: these observations revealed a flare of luminous intensity in the near infrared. While the radiation process is not yet perfectly modeled, these observations tend to confirm that the flares originate from an orbital motion of matter at a few Schwarzschild radii from Sgr A*.
Thanks to its astrometric precision of 10 microarcseconds, corresponding to one Schwarzschild radius at the distance of the Galactic Centre, GRAVITY will be able to resolve the orbital motion of these spots of matter and to understand the nature of such radiation. Moreover, it will allow the direct measurement of the space-time metric and the study of general relativity in the strong-field regime.
Carrère, Christophe. "Profil épidémiologique et protection vaccinale des voyageurs en zone tropicale : analyse et comparaison des données statistiques de 1987 et 1988 du centre de vaccinations internationales de Bordeaux". Bordeaux 2, 1990. http://www.theses.fr/1990BOR2M122.
Hernandez, Fabrice. "Comparaison et combinaison de données altimétriques et lagrangiennes. Application à la campagne SEMAPHORE-93". Toulouse 3, 1995. http://www.theses.fr/1995TOU30295.
Dessureault, Danny. "Évaluation de critères prédicteurs de l'adaptation sociale ultérieure au séjour d'un groupe d'ex-détenus du Centre résidentiel Radisson à partir des données disponibles à la fin du séjour". Thèse, Université du Québec à Trois-Rivières, 1986. http://depot-e.uqtr.ca/5814/1/000560764.pdf.
Bruandet, Amelie. "Facteurs pronostiques de patients atteints de démence suivis en centre mémoire de ressource et de recherche : exemple d'utilisation de bases de données médicales à des fins de recherche clinique". Phd thesis, Université du Droit et de la Santé - Lille II, 2008. http://tel.archives-ouvertes.fr/tel-00336252.
Texto completoL'objectif de mon travail est l'étude des facteurs pronostiques de patients, pris en charge au centre de mémoire de ressource et de recherche (CMRR) du Centre Hospitalier Régional et Universitaire (CHRU) de Lille et du centre médical des monts de Flandres de Bailleul. Pour cela, nous avons utilisé la base de données médicales informatisées des patients consultant au CMRR de Lille-Bailleul. Ce travail s'est en particulier intéressé aux avantages et aux limites de l'utilisation de bases de données médicales à des fins de recherche clinique dans l'étude des facteurs pronostiques des démences.
Dans une population de 670 patients ayant une maladie d'Alzheimer, nous avons confirmé que le déclin des fonctions cognitives (évaluées par le MMSE) était significativement plus élevé chez les sujets ayant un niveau d'éducation intermédiaire ou élevé par rapport aux sujets ayant un bas niveau d'éducation. Cependant, la mortalité ne différaient pas de façon significative entre ces trois groupes. Nous avons décrit une mortalité similaire entre patients ayant une maladie d'Alzheimer, une démence mixte ou une démence vasculaire. Les patients ayant une démence mixte avaient un déclin du MMSE plus important que les patients ayant une démence vasculaire mais moins important que les patients ayant une maladie d'Alzheimer. Enfin, nous avons montré que le risque de développer une démence vasculaire ou mixte augmentait de manière significative avec le nombre d'hypersignaux sous corticaux chez des patients ayant un mild cognitive impairment.
Ces travaux soulignent la difficulté de l'établissement du diagnostic des démences mixtes, la complexité de l'analyse du déclin des fonctions cognitives (prise en compte du stade de progression des démences, absence d'instrument de suivi des fonctions cognitives à la fois simple d'utilisation et sensible aux faibles variations au cours du temps ou non linéarité du déclin des fonctions cognitives), les avantages en terme de coût et de temps de l'utilisation de bases de données médicales, et les problème de sélection de la population issue d'une structure de soins.
Malgré les problèmes de représentativité des populations, ce travail montre l'intérêt de l'utilisation à des fins de recherche clinique de données médicales concernant des patients pris en charge en structure de soins.
Bruandet, Amélie. "Facteurs pronostiques de patients atteints de démence suivis en Centre mémoire de ressources et de recherche : exemple d'utilisation de bases de données médicales à des fins de recherche clinique". Lille 2, 2008. http://tel.archives-ouvertes.fr/tel-00336252/fr/.
Palermiti, Rosalba. "Les anaphores dans le dialogue homme-machine en langue naturelle écrite : données recueillies par le Magicien d'Oz dans une situation de recherche documentaire". Université Pierre Mendès France (Grenoble ; 1990-2015), 1992. http://www.theses.fr/1992GRE21032.
The anaphora traditionally handled in man-machine dialogues are pronominal anaphora. In this thesis, however, the aim is to study the entire set of anaphoric phenomena that can be found in man-machine dialogues. For this, we use a corpus collected with the so-called "Wizard of Oz" simulation technique. We explain the experimental protocol used and detail the application context: an information retrieval system for the general public in a library. We also survey the different existing retrieval modes for bibliographic databases. Next, having exploited the corpus on the retrieval level (query typology, search strategies, ...) and on the linguistic level (surface structures, anaphora, reference and coherence, ...), we express our doubts about the usability of natural language in certain situations and we also state the hypothesis of the need for a new "phraseology" for man-machine communication. Concerning natural language processing (NLP), we explain the different types of knowledge necessary for dialogue management and anaphora resolution (dialogue modeling, task modeling, dialogue dynamics, ...), on the basis of different linguistic approaches to anaphoric phenomena and existing work in NLP. Within the framework of the NLP work done at CRISS (Centre for Research in Informatics Applied to the Social Sciences), we then derive some rules for identifying anaphoric units.
Rostirolla, Gustavo. "Ordonnancement dans un centre de calculs alimenté par des sources d'énergie renouvelables sans connexion au réseau avec une charge de travail mixte basée sur des phases". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30160.
Texto completoDue to the increase of cloud, web-service and high-performance computing demands all over the world, datacenters are now known to be one of the biggest actors when talking about energy consumption. In 2006 alone, datacenters were responsible for consuming 61.4 billion kWh in the United States. At the global scale, datacenters currently consume more energy than the entire United Kingdom, representing about 1.3% of the world's electricity consumption, and are even called the factories of the digital age. Supplying datacenters with clean-to-use renewable energy is therefore essential to help mitigate climate change. The vast majority of cloud providers that claim to use a green energy supply for their datacenters rely on the classical grid, deploying the solar panels/wind turbines somewhere else and selling the energy to electricity companies, which incurs energy losses as the electricity travels through the grid. Even though several efforts have been conducted at the computing level in datacenters partially powered by renewable energy sources, scheduling that considers on-site renewable energy sources and their variations, without connection to the grid, remains largely unexplored. Since energy efficiency in datacenters is directly related to the resource consumption of the computing nodes, performance optimization and efficient load scheduling are essential for energy saving. Today, we observe the use of cloud computing as the basis of datacenters, either in a public or private fashion. The main particularity of our approach is that we consider a power envelope composed only of renewable energy as a constraint, hence with a variable amount of power available at each moment. Scheduling under this kind of constraint becomes more complex: without further checks, we are not ensured that a running task will run until completion. We start by addressing the IT load scheduling of batch tasks, which are characterized by their release time, due date and resource demand, in a cloud datacenter while respecting the aforementioned power envelope. The data utilized for the batch tasks come from datacenter traces containing CPU, memory and network values. The power envelopes considered represent an estimate that would be provided by a power-decision module, i.e. the expected power production based on weather forecasts. The aim is to maximize the Quality of Service under a variable constraint on electrical power. Furthermore, we explore a workload composed of batch tasks and services, where resource consumption varies over time. The traces utilized for the service tasks originate from a business-critical datacenter. In this case we rely on the concept of phases, where each significant change in the resource consumption of a given task constitutes a new phase. In this task model, phases may also receive fewer resources than requested. The reduction of resources can impact the QoS and consequently the datacenter profit. In this approach we also include the concept of cross-correlation to evaluate where to place a task under a power curve, and which node is best for placing tasks together (i.e. sharing resources). Finally, considering the previous workload of batch tasks and services, we present an approach towards handling unexpected events in the datacenter. More specifically, we focus on IT-related events such as tasks arriving at any given time, demanding more or fewer resources than expected, or having a different finish time than initially expected. We adapt the proposed algorithms to take actions depending on which event occurs, e.g. task degradation to reduce the impact on the datacenter profit
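The phase-based placement idea lends itself to a small illustration. The Python sketch below uses cross-correlation to pick the start slot at which a task's power profile best fits under a forecast renewable power envelope; the envelope values, task profile and scoring rule are invented for the example and only hint at the approach, they are not the thesis's actual scheduler.

```python
import numpy as np

# Hypothetical data: forecast renewable power per hour (kW) and the power
# profile of a task over its 3-hour execution (kW).
envelope = np.array([3.0, 5.0, 8.0, 9.0, 7.0, 4.0, 2.0])
task_profile = np.array([4.0, 6.0, 5.0])

def best_start_slot(envelope, task_profile):
    """Pick the start slot where the task profile fits best under the envelope.

    The score combines the unmet power (demand above the envelope, minimized
    first) and the cross-correlation between the profile and the available
    power window (to favour well-matched shapes among feasible slots).
    """
    n, m = len(envelope), len(task_profile)
    candidates = []
    for start in range(n - m + 1):
        window = envelope[start:start + m]
        unmet = float(np.maximum(task_profile - window, 0.0).sum())
        match = float(np.dot(window, task_profile))  # cross-correlation at this lag
        candidates.append((unmet, -match, start))
    unmet, _, start = min(candidates)
    return start, unmet

start, unmet = best_start_slot(envelope, task_profile)
print(f"start slot: {start}, unmet power: {unmet} kW")
```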
Vazquez, Llana Jordan Diego. "Environnement big data et prise de décision intuitive : le cas du Centre d'Information et de Commandement (CIC) de la Police nationale des Bouches du Rhône (DDSP 13)". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE3063.
Texto completoGodé and Vazquez have previously demonstrated that French Police teams operate in extreme contexts (Godé & Vazquez, 2017), simultaneously marked by high levels of change, uncertainty and risks that are mainly vital, material and legal (Godé, 2016), but also technological. In this context, a big data environment can affect the police decision-making process. The research question of this thesis is: "What is the status of intuition in the decision-making process in a big data environment?". We explain how the growth in the volume of available information, the great diversity of its sources (social networks, websites, connected objects), its speed of diffusion (in real time or near real time) and its unstructured nature (Davenport & Soulard, 2014) introduce new decision-making challenges for National Police forces
Tarrieu, Leïla. "Nouvelles données minéralogiques, géochimiques et géochronologiques sur le gisement polymétallique de Tighza (Maroc-Central). : Contribution à la métallogénie des gisements de métaux de base filoniens en contexte post-collisionnel". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENA017/document.
Texto completoIn Morocco, many Pb-Zn-Ag vein-type deposits are hosted in Paleozoic series crosscut by Variscan granitoids. The spatial association of these veins with the magmatic intrusions has often been interpreted as evidence of a genetic link between the emplacement of the granitoids and the mineralizing hydrothermal event. However, recent data point to a late-Variscan emplacement of these mineralizations. Genetic models must therefore be revised, as must the exploration strategy for these base metals. The polymetallic district of Tighza-Jbel Aouam (Central Morocco), where Pb-Zn-Ag vein-type deposits are located around small granitic stocks, was selected to study the spatial and temporal relationships between granitoids and associated mineralizations. The aims of this work were: (i) to refine the mineralogy of the mineralizations, (ii) to use appropriate geochronological methods to date the mineralizations, (iii) to study the magma-fluid-metal transfers from the deeper zones (mantle, deep crust) to the upper lithosphere, using geochemical tracers. The granitic stocks and dykes of Tighza were analyzed and dated using the U/Pb method on zircons. These rocks belong to a high-K calc-alkaline association, and the dating reveals three magmatic and associated hydrothermal events: (i) 320 to 300 Ma, emplacement of stocks with a few dykes; (ii) 300 to 280 Ma, pluton emplacement with an associated hydrothermal event responsible for a hydrothermal metamorphic halo and for the W-Au mineralization; (iii) 280 to 240 Ma, emplacement of a NE-SW dyke network associated with a Pb-Zn-Ag mineralizing hydrothermal event, characterized by chlorite-muscovite-calcite alteration of the host metasediments. The detailed study of the Pb-Zn-Ag vein paragenesis showed the pulsed character of mineralization precipitation. Four successive parageneses were identified: (P1) siderite ± quartz; (P2) calcite ± ankerite + sphalerite + galena ± siderite ± quartz ± chalcedony; (P3) siderite + sphalerite + galena ± quartz; (P4) quartz + calcite + pyrite. These synchronous parageneses are associated with REE minerals (monazite, xenotime, synchysite) responsible for the high REE content of the gangue carbonates (≃ 700 ppm). Monazites from a Pb-Zn-Ag vein were dated at 255 ± 15 Ma using the Th/Pb method. The study of metal sources (lead isotopes) shows that the Pb of the Pb-Zn-Ag mineralizations comes from leaching of the upper crust. The study of fluid sources for the Pb-Zn-Ag veins indicates a crustal origin (He, Ar isotopes) strongly buffered by the host shales (C, O isotopes). The W-Au mineralizing fluids stem from a mixture of meteoric and mantle-derived fluids (He, Ar isotopes). W-Au and Pb-Zn-Ag are thus distinct mineralizations. The polymetallic district of Tighza-Jbel Aouam is therefore characterized by the superposition of several magmatic and hydrothermal events controlled by lithospheric-scale structures, leading to the formation of: (i) a mesothermal W-Au deposit which can be considered a porphyry-type mineralization; (ii) epithermal Pb-Zn-Ag mineralizations. These magmatic and metallogenic events extend over 80 Ma after the end of the Variscan orogeny and characterize the post-collisional context of this orogen, in particular the effects of the thermal re-equilibration of the crust during the Permo-Triassic period and the Atlantic pre-rifting
Ibrahim, Mahmoud. "Conception et optimisation d'Alimentations Sans Interruption". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT053/document.
Texto completoThe design of Uninterruptible Power Supplies (UPS) has been steadily improved in recent years to achieve efficiency levels of around 95% while minimizing their footprint. The massive use of power electronics in these systems has led design efforts to focus on increasing both efficiency and power density. Constant developments in power electronics provide the designer with many options, among them multi-level and/or interleaved power topologies to reduce passive component size, new semiconductor technologies with the introduction of wide-bandgap devices, and advanced passive component materials. The choice between these options is a compromise made to achieve the predefined objectives, particularly when other constraints limit the space of possible solutions, including thermal aspects, technological limitations or EMI constraints. This work proposes a multi-objective optimization methodology for the design of power converters with all their constraints. It offers a quick tool to compare the different design possibilities and to quantify the improvement brought to the converter. To do this, different topological and technological choices were studied through the development of multi-physics models. These models can take discrete variables as input, so that optimized converters can meet industrial requirements covered by real components and their datasheets. We first establish the different constraints imposed on the UPS within its environment. Candidate design solutions are identified through a state-of-the-art survey of the field of power electronics. Generic models of power structures and discrete multi-physics models of the components are then developed based on analytical approaches, ensuring a good compromise between accuracy and computation speed. Finally, the multi-objective, multi-constraint optimization methodology is applied to the set of design choices to quantify the performance achieved by each of them. Experimental work was essential to validate the models and the optimal solutions. Based on the optimization results, a 4.2 kW/L PFC converter was built and its performance was validated
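As a toy illustration of the multi-objective trade-off described above, the short Python sketch below filters a set of hypothetical converter designs (efficiency versus power density) down to their Pareto-optimal subset; the candidate names and figures are invented for illustration and do not come from the thesis.

```python
# Hypothetical candidate converter designs: (name, efficiency in %, power density in kW/L).
designs = [
    ("2-level Si",      94.5, 2.1),
    ("3-level Si",      95.8, 2.9),
    ("3-level GaN",     96.6, 4.0),
    ("interleaved GaN", 96.2, 4.3),
    ("2-level SiC",     96.9, 3.2),
]

def pareto_front(candidates):
    """Keep designs not dominated on both objectives (higher is better)."""
    front = []
    for name, eff, dens in candidates:
        dominated = any(
            (e >= eff and d >= dens) and (e > eff or d > dens)
            for _, e, d in candidates
        )
        if not dominated:
            front.append((name, eff, dens))
    return front

for name, eff, dens in pareto_front(designs):
    print(f"{name}: {eff:.1f} %, {dens:.1f} kW/L")
```

A real methodology like the one described would evaluate each candidate with multi-physics models and add thermal and EMI constraints; the dominance test itself stays the same.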
Grange, Léo. "Datacenter management for on-site intermittent and uncertain renewable energy sources". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30176.
Texto completoIn recent years, information and communication technologies (ICT) have become a major energy consumer, with the associated harmful ecological consequences. Indeed, the emergence of Cloud computing and massive Internet companies has increased the importance and number of datacenters around the world. In order to mitigate economic and ecological costs, powering datacenters with renewable energy sources (RES) began to appear as a sustainable solution. Some of the commonly used RES, such as solar and wind energy, directly depend on weather conditions. Hence they are both intermittent and partly uncertain. Batteries or other energy storage devices (ESD) are often considered to relieve these issues, but they result in additional energy losses and are too costly to be used alone without deeper integration. The power consumption of a datacenter is closely tied to computing resource usage, which in turn depends on its workload and on the algorithms that schedule it. To use RES as efficiently as possible while preserving the quality of service of a datacenter, a coordinated management of computing resources, electrical sources and storage is required. A wide variety of datacenters exists, each with different hardware, workload and purpose. Similarly, each electrical infrastructure is modeled and managed uniquely, depending on the kind of RES used, ESD technologies and operating objectives (cost or environmental impact). Some existing works successfully address this problem by considering a specific pair of electrical and computing models. However, because of this combined diversity, the existing approaches cannot be extrapolated to other infrastructures. This thesis explores novel ways to deal with this coordination problem. A first contribution revisits the batch task scheduling problem by introducing an abstraction of the power sources. A scheduling algorithm is proposed, taking the preferences of electrical sources into account, yet designed to be independent of the type of sources and of the goal of the electrical infrastructure (cost, environmental impact, or a mix of both). A second contribution addresses the joint power planning coordination problem in a totally infrastructure-agnostic way. The datacenter computing resources and workload management is considered as a black box implementing a scheduling-under-variable-power-constraint algorithm. The same goes for the electrical sources and storage management system, which acts as a source commitment optimization algorithm. A cooperative multi-objective power planning optimization, based on a multi-objective evolutionary algorithm (MOEA), dialogues with the two black boxes to find the best trade-offs between electrical and computing internal objectives. Finally, a third contribution focuses on RES production uncertainties in a more specific infrastructure. Based on a Markov Decision Process (MDP) formulation, the structure of the underlying decision problem is studied. For several variants of the problem, tractable methods are proposed to find optimal policies or bounded approximations
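To make the MDP angle concrete, here is a minimal value-iteration sketch in Python for a toy storage-management decision problem (conserve stored energy versus spend it); the two-state model, transition probabilities and rewards are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Toy MDP: states = battery level {0: low, 1: high}; actions = {0: conserve, 1: spend}.
# P[a][s, s'] = transition probability, R[a][s] = expected immediate reward.
P = {
    0: np.array([[0.7, 0.3],    # conserving from "low": likely stays low
                 [0.2, 0.8]]),  # conserving from "high": likely stays high
    1: np.array([[0.9, 0.1],    # spending from "low": almost surely low afterwards
                 [0.6, 0.4]]),
}
R = {
    0: np.array([0.0, 1.0]),    # small reward for keeping the battery charged
    1: np.array([0.5, 3.0]),    # spending pays off, especially from "high"
}
gamma = 0.95

def value_iteration(P, R, gamma, tol=1e-8):
    """Standard value iteration: V <- max_a [R_a + gamma * P_a @ V]."""
    V = np.zeros(2)
    while True:
        Q = np.stack([R[a] + gamma * P[a] @ V for a in P])  # action values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

V, policy = value_iteration(P, R, gamma)
print("optimal values:", V, "policy (0=conserve, 1=spend):", policy)
```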
Cordova, David. "Étude et réalisation d’une chaîne de conversion de données pour liaisons numériques à très haut débit". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0269.
Texto completoThe increasing demand for higher data rates in datacenters has led to new emerging standards (100-400G Ethernet and others) in wireline communications. These standards will favor more sophisticated encodings that use less frequency bandwidth. As speed requirements become more stringent, pure analog architectures cannot meet them, so a natural shift towards mixed-signal architectures is expected. This thesis proposes the design of an ADC-based receiver architecture. It uses a design methodology to define and validate the requirements and specifications for silicon-based wireline receivers that comply with >100 Gb/s operation over transmission channels with high losses (>20 dB). A prototype in 22 nm CMOS FDSOI technology is proposed as a proof of concept
Dab, Boutheina. "Optimization of routing and wireless resource allocation in hybrid data center networks". Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1068/document.
Texto completoThe high proliferation of smart devices and online services allows billions of users to connect to the network while deploying a vast range of applications. In particular, with the advent of the future 5G technology, a tremendous amount of mobile and data traffic is expected to cross the Internet. In this regard, Cloud service providers are urged to rethink their data center architectures in order to cope with this unprecedented traffic explosion. Unfortunately, conventional wired infrastructures struggle to sustain such traffic growth and become prone to serious congestion problems. Therefore, new innovative techniques are required. In this thesis, we investigate a recent promising approach that augments the wired Data Center Network (DCN) with wireless communications. Indeed, motivated by the feasibility of the emerging 60 GHz technology, offering an impressive data rate (≈ 7 Gbps), we envision a Hybrid (wireless/wired) DCN (HDCN) architecture. Our HDCN is based on i) Cisco's Massively Scalable Data Center (MSDC) model and ii) the IEEE 802.11ad standard. Servers in the HDCN are grouped into racks, where each rack is equipped with i) an Ethernet top-of-rack (ToR) switch and ii) a set of wireless antennas. Our research aims to optimize the routing and the allocation of wireless resources for inter-rack communications in the HDCN while enhancing network performance and minimizing congestion. The problem of routing and resource allocation in the HDCN is NP-hard. To deal with this difficulty, we tackle the problem in three stages. In the first stage, we consider only one-hop inter-rack communications, where all communicating racks are within the same transmission range. We propose a new wireless channel allocation approach in the HDCN to harness both wireless and wired interfaces for incoming flows while enhancing network throughput. In the second stage, we deal with multi-hop communications in the HDCN, where communicating racks cannot communicate via a single-hop wireless path. We propose a new approach to jointly route and allocate channels for each communication flow in an online way. Finally, in the third stage, we address the batched arrival of inter-rack communications in the HDCN so as to further optimize the usage of wireless and wired resources. To that end, we propose i) a heuristic-based and ii) an approximate solution to the joint batch routing and channel assignment problem. Based on extensive simulations conducted in the QualNet simulator with the full protocol stack, the results obtained for both real-workload and uniform traces show that our proposals outperform prominent related strategies
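As a rough illustration of the one-hop stage, the Python sketch below greedily assigns one of a few 60 GHz channels to inter-rack flows so that flows sharing a rack do not reuse the same channel, falling back to the wired fabric when no channel is free; the flow list, channel count and conflict rule are simplifying assumptions, not the thesis's actual algorithm.

```python
# Hypothetical inter-rack flows: (source rack, destination rack, demand in Gbps).
flows = [("r1", "r2", 3.0), ("r1", "r3", 2.0), ("r2", "r3", 4.0), ("r4", "r1", 1.5)]
NUM_CHANNELS = 3  # simplifying assumption for the 60 GHz band

def allocate_channels(flows, num_channels):
    """Greedy one-hop allocation: a rack cannot use the same channel twice."""
    busy = {}          # rack -> set of channels already used at that rack
    assignment = {}    # flow -> channel index, or "wired" fallback
    # Serve the largest demands first so they get the wireless shortcut.
    for flow in sorted(flows, key=lambda f: -f[2]):
        src, dst, _ = flow
        used = busy.setdefault(src, set()) | busy.setdefault(dst, set())
        free = [c for c in range(num_channels) if c not in used]
        if free:
            channel = free[0]
            assignment[flow] = channel
            busy[src].add(channel)
            busy[dst].add(channel)
        else:
            assignment[flow] = "wired"  # fall back to the ToR/wired fabric
    return assignment

for flow, channel in allocate_channels(flows, NUM_CHANNELS).items():
    print(flow, "->", channel)
```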
Chkirbene, Zina. "Network topologies for cost reduction and QoS improvement in massive data centers". Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCK002/document.
Texto completoData centers (DCs) are being built around the world to provide various cloud computing services. One of the fundamental challenges of existing DCs is to design a network that interconnects a massive number of nodes (servers) while reducing DC cost and energy consumption. Several solutions have been proposed (e.g. FatTree, DCell and BCube), but they either scale too fast (i.e., doubly exponentially) or too slowly. Efficient DC topologies should incorporate high scalability, low latency, low Average Path Length (APL), high Aggregated Bottleneck Throughput (ABT) and low cost and energy consumption. Therefore, in this dissertation, different solutions are proposed to overcome these problems. First, we propose a novel DC topology called LCT (Linked Cluster Topology) as a new solution for building scalable and cost-effective DC networking infrastructures. The proposed topology reduces the number of redundant connections between clusters of nodes, while increasing the number of nodes without affecting the network bisection bandwidth. Furthermore, in order to reduce DC cost and energy consumption, we first propose a new static energy-saving topology called VacoNet (Variable Connection Network) that connects the required number of servers while reducing unused hardware (cables, switches). We also propose a new approach that exploits the temporal correlation of inter-node communication and some topological features to maximize energy savings without significantly impacting the average path length
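The APL metric mentioned above is easy to illustrate; the short Python/networkx sketch below compares the average shortest path length of a small random regular graph against a simple ring, as a stand-in for comparing candidate DC topologies (the graphs here are toy examples, not LCT or VacoNet).

```python
import networkx as nx

# Toy stand-ins for candidate data center topologies.
ring = nx.cycle_graph(32)                              # servers chained in a ring
regular = nx.random_regular_graph(d=4, n=32, seed=7)   # each node gets 4 random links

for name, g in [("ring", ring), ("4-regular random", regular)]:
    if nx.is_connected(g):
        apl = nx.average_shortest_path_length(g)
        print(f"{name}: APL = {apl:.2f} over {g.number_of_edges()} links")
    else:
        print(f"{name}: graph not connected, APL undefined")
```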
Petitdemange, Eva. "SAMUFLUX : une démarche outillée de diagnostic et d'amélioration à base de doubles numériques : application aux centres d'appels d'urgence de trois SAMU". Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2020. http://www.theses.fr/2020EMAC0012.
Texto completoThe demand for emergency medical services has been significant and increasing over the last decade. In a constrained medico-economic context, maintaining operational capacity is a strategic stake given the risk of congestion and insufficient accessibility for the population. Recent events such as the COVID-19 pandemic show the limits of the current system in facing crisis situations. Reinforcing human resources cannot be the only answer to this observation, and it becomes unavoidable to build new organizational models while targeting a quality of service that answers 99% of incoming calls in less than 60 seconds (90% in 15 s and 99% in 30 s, MARCUS report and HAS recommendation, October 2020). However, these models must take into account the great heterogeneity of EMS and their operation. In the light of these findings, the research work presented in this manuscript aims to evaluate the organizational effectiveness and resilience of EMS in managing the flow of emergency telephone calls, both in everyday conditions and in crisis situations. This evaluation allows us to propose and test new organizational schemes in order to make recommendations adapted to the particularities of emergency call centers. In a first part, we propose a tool-supported methodology for the diagnosis and improvement of emergency call centers. It can be broken down into two main parts: the study of data from emergency call centers, and then the design and use of a digital duplicate. For each step of this methodology, we propose an associated tool. In a second part, we apply the first part of the methodology to data from our partner EMS. The aim is to extract information and knowledge from the telephony data as well as from the business processes for handling emergency calls. The knowledge thus extracted makes it possible to design a digital duplicate that is close to the real behavior of the EMS. Finally, in a third part, we use the material produced previously to model and parameterize a digital duplicate deployed on a discrete-event simulation engine. It allows us to test several scenarios by varying the call management organization. On this basis, we make recommendations on the types of organization to adopt in order to improve the performance of call centers
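To illustrate the discrete-event-simulation side of such a digital duplicate, here is a minimal Python/SimPy sketch of an emergency call center with a pool of call takers; the arrival rate, service time and number of operators are invented parameters, and the model is far simpler than the duplicates built in the thesis.

```python
import random
import simpy

RNG = random.Random(42)
N_OPERATORS = 3          # hypothetical pool of call takers
ARRIVAL_MEAN = 40.0      # mean time between incoming calls (seconds)
SERVICE_MEAN = 100.0     # mean call handling time (seconds)
SIM_TIME = 8 * 3600      # simulate an 8-hour shift
waits = []

def call(env, operators):
    arrived = env.now
    with operators.request() as req:
        yield req                      # wait for a free call taker
        waits.append(env.now - arrived)
        yield env.timeout(RNG.expovariate(1.0 / SERVICE_MEAN))

def arrivals(env, operators):
    while True:
        yield env.timeout(RNG.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(call(env, operators))

env = simpy.Environment()
operators = simpy.Resource(env, capacity=N_OPERATORS)
env.process(arrivals(env, operators))
env.run(until=SIM_TIME)

answered_60s = sum(w <= 60 for w in waits) / len(waits)
print(f"{len(waits)} calls, {100 * answered_60s:.1f}% answered within 60 s")
```

Varying N_OPERATORS or the arrival rate in such a model is the kind of what-if experiment the quality-of-service targets above call for.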
Salazar, Javier. "Resource allocation optimization algorithms for infrastructure as a service in cloud computing". Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB074.
Texto completoThe cloud architecture offers on-demand computing, storage and applications. Within this structure, Cloud Providers (CPs) not only administer infrastructure resources but also directly benefit from leasing them. In this thesis, we propose three optimization models to help CPs reduce the costs incurred in the resource allocation process when serving users' demands. Implementing the proposed models will not only increase the CP's revenue but will also enhance the quality of the services offered, benefiting all parties. We focus on Infrastructure as a Service (IaaS) resources, which constitute the physical infrastructure of the cloud and are contained in datacenters (DCs). Following existing research in DC design and cloud computing applications, we propose the deployment of smaller DCs (Edge DCs) located close to end users as an alternative to large centralized DCs. Lastly, we use the Column Generation optimization technique to handle large-scale optimization models efficiently. The proposed formulation optimizes both the communication and information technology resources in a single phase to serve IaaS requests. Based on this formulation, we also propose a second model that includes QoS guarantees under the same IaaS resource allocation perspective, providing different solutions to diverse aspects of the resource allocation problem, such as cost and delay reduction, while offering different levels of service. Additionally, we consider the multimedia cloud computing scenario. When the Edge DC architecture is applied to this scenario, it results in the Multimedia Edge Cloud (MEC) architecture. In this context we propose a resource allocation approach to help with the placement of these DCs so as to reduce communication-related problems such as jitter and latency. We also propose the use of optical fiber network technologies to enhance communication between DCs. Several studies propose new methods to improve data transmission and performance. For this study, we decided to implement Wavelength Division Multiplexing (WDM) to strengthen link and light-path usage and, in doing so, group different signals over the same wavelength. Using a realistic simulation environment, we evaluate the efficiency of the approaches proposed in this thesis with a scenario specifically designed for the DCs, comparing them with different benchmarks and also simulating the effect of the optical formulation on network performance. The numerical results obtained show that, by using the proposed models, a CP can efficiently reduce allocation costs while maintaining satisfactory request acceptance and QoS ratios
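A toy version of the allocation problem can be written as a small integer program; the Python/PuLP sketch below assigns hypothetical IaaS requests to DCs at minimum cost under capacity constraints. It only hints at the structure of the thesis's models, which additionally rely on column generation and QoS constraints; all figures are invented.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

# Hypothetical data: per-DC capacity (vCPUs), per-request demand, and unit cost
# of serving each request from each DC (e.g. reflecting distance or delay).
dcs = {"edge1": 16, "edge2": 16, "central": 64}
requests = {"r1": 8, "r2": 12, "r3": 20, "r4": 6}
cost = {("r1", "edge1"): 1, ("r1", "edge2"): 2, ("r1", "central"): 4,
        ("r2", "edge1"): 2, ("r2", "edge2"): 1, ("r2", "central"): 4,
        ("r3", "edge1"): 9, ("r3", "edge2"): 9, ("r3", "central"): 3,
        ("r4", "edge1"): 1, ("r4", "edge2"): 3, ("r4", "central"): 5}

prob = LpProblem("iaas_allocation", LpMinimize)
x = {(r, d): LpVariable(f"x_{r}_{d}", cat=LpBinary) for r in requests for d in dcs}

# Objective: total allocation cost.
prob += lpSum(cost[r, d] * x[r, d] for r in requests for d in dcs)
# Each request is served by exactly one DC.
for r in requests:
    prob += lpSum(x[r, d] for d in dcs) == 1
# DC capacity is not exceeded.
for d, cap in dcs.items():
    prob += lpSum(requests[r] * x[r, d] for r in requests) <= cap

prob.solve()
for (r, d), var in x.items():
    if value(var) > 0.5:
        print(f"{r} -> {d}")
```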
Ait, el mahjoub Youssef. "Performance evaluation of green IT networks". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG011.
Texto completoEnergy saving in telecommunication networks is a major objective for reducing overall consumption. The IT sector already contributes strongly to this growth in consumption. Indeed, many techniques that reduce consumption in other industries or services result in more IT and telecommunications (the "Green by IT" approach) and therefore in increased consumption in the IT domain. It is therefore important, from an economic point of view, to reduce the energy consumption per transmitted or computed bit (the "Green IT" concept). In the network domain, energy optimization is mainly based on adapting the architecture and the resources employed to the traffic flows to be transported and to the promised quality of service. We therefore seek to adapt resources to demand, which results in a dynamic dimensioning adapted to the load. This is by nature different from the worst-case dimensioning commonly used. In terms of technology, this requires network equipment to have "sleep", "deep sleep" or "hibernate" modes (terminology varies among suppliers), but all of these modes are associated with the same concept: putting the equipment to sleep to reduce its energy consumption. For the performance/energy trade-off to be relevant, it seems important to use energy consumption formulas obtained from network resource utilization. The approaches we propose are based on the theory of queueing networks, Markov chain analysis (analytically, by proposing new product forms, and numerically, by suggesting new resolution algorithms) and the theory of stochastic comparison. At the application level we have addressed various issues: DVFS mechanisms that change processor speed, task migration between physical servers in a data center (load balancing, consolidation), optical networks with efficient filling of optical containers, and intermittent power distribution in a sensor network (and LoRa network), including a new model of Energy Packet Networks (EPNs)
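As a small numerical illustration of the Markov-chain approach, the Python sketch below builds the generator of a toy M/M/1/K queue whose server is assumed to draw low power when the system is empty, then solves for the stationary distribution to read off a performance/energy trade-off; the state space, rates and power figures are assumptions for the example, not models from the thesis.

```python
import numpy as np

# Toy M/M/1/K queue: the server sleeps (low power) when the system is empty.
lam, mu, K = 0.6, 1.0, 20          # arrival rate, service rate, buffer size
P_SLEEP, P_ACTIVE = 5.0, 60.0      # assumed power draw in watts

# Build the CTMC generator Q for states 0..K (birth-death structure).
Q = np.zeros((K + 1, K + 1))
for n in range(K + 1):
    if n < K:
        Q[n, n + 1] = lam          # arrival
    if n > 0:
        Q[n, n - 1] = mu           # service completion
    Q[n, n] = -Q[n].sum()

# Solve pi @ Q = 0 together with the normalization sum(pi) = 1.
A = np.vstack([Q.T, np.ones(K + 1)])
b = np.zeros(K + 2)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

mean_jobs = float(np.arange(K + 1) @ pi)
mean_power = float(P_SLEEP * pi[0] + P_ACTIVE * (1.0 - pi[0]))
print(f"mean jobs in system: {mean_jobs:.2f}, mean power: {mean_power:.1f} W")
```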
Segalini, Andrea. "Alternatives à la migration de machines virtuelles pour l'optimisation des ressources dans les centres informatiques hautement consolidés". Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4085.
Texto completoServer virtualization is a technology of prime importance in contemporary data centers. Virtualization provides two key mechanisms, virtual instances and migration, that enable maximizing resource utilization to decrease capital expenses in a data center. In this thesis, we identified and studied two contexts where traditional virtual instance migration falls short of providing the optimal tools to make the best use of the resources available in a cluster: idle virtual machines and large-scale hypervisor upgrades. Idle virtual machines permanently lock the resources they are assigned only to await incoming user requests. Indeed, although they are idle most of the time, they cannot be shut down, which would release resources for more demanding services. To address this issue, we propose SEaMLESS, a solution that leverages a novel VM-to-container migration that transforms idle Linux virtual machines into resource-less proxies. SEaMLESS intercepts new user requests while virtual machines are disabled, transparently resuming their execution upon new signs of activity. Furthermore, we propose an easy-to-adopt technique to disable virtual machines based on traditional hypervisor memory swapping. With our novel suspend-to-swap, we are able to release the majority of the memory and CPU seized by the idle instances, while still providing a fast resume. In the second part of the thesis, we tackle the problem of large-scale upgrades of the hypervisor software. Hypervisor upgrades often require a machine reboot, forcing data center administrators to evacuate the hosts, relocating the virtual machines elsewhere to protect their execution. As this evacuation is costly, both in terms of network transfers and of spare resources needed in the data center, hypervisor upgrades hardly scale. We propose Hy-FiX and Multi-FiX, two in-place upgrade mechanisms that do not consume resources external to the host. Both solutions leverage a zero-copy migration of virtual machines within the host, preserving their execution state across the hypervisor upgrade. Hy-FiX and Multi-FiX achieve scalable upgrades, with only limited impact on the running instances
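As a side illustration of the idle-VM problem (not of SEaMLESS itself, whose VM-to-container migration is far more involved), a monitoring system might flag idleness with a simple threshold rule over recent resource samples, as in this Python sketch with invented metrics and thresholds.

```python
# Hypothetical per-VM samples: average CPU load (%) and network traffic (kB/s)
# over the last monitoring windows.
samples = {
    "vm-web-01":  {"cpu": [0.4, 0.6, 0.3, 0.5], "net": [1.2, 0.8, 0.9, 1.0]},
    "vm-batch-7": {"cpu": [55.0, 62.0, 48.0, 71.0], "net": [300.0, 250.0, 280.0, 310.0]},
    "vm-legacy":  {"cpu": [0.1, 0.1, 0.2, 0.1], "net": [0.0, 0.1, 0.0, 0.0]},
}
CPU_IDLE_THRESHOLD = 2.0    # percent; assumed, would be tuned per deployment
NET_IDLE_THRESHOLD = 5.0    # kB/s; assumed as well

def idle_vms(samples):
    """Flag VMs whose recent CPU and network activity stay below thresholds."""
    idle = []
    for vm, metrics in samples.items():
        avg_cpu = sum(metrics["cpu"]) / len(metrics["cpu"])
        avg_net = sum(metrics["net"]) / len(metrics["net"])
        if avg_cpu < CPU_IDLE_THRESHOLD and avg_net < NET_IDLE_THRESHOLD:
            idle.append(vm)
    return idle

print("candidates for suspend-to-swap:", idle_vms(samples))
```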
Grundeler, Guillaume. "L'investissement : étude juridique". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM1059.
Texto completoInvestment is a relatively new legal concept. Some years ago, the term was only used within the foreign investment regulations. Back then, investment was mostly apprehended through other legal concepts, such as capital contribution or capital movement. Since then, however, the concept of investment has largely entered the legal vocabulary. For instance, in the French legal order, the existence of an investment makes the conclusion of a long-duration contract possible. Besides, it may also be noted that, in the international order, the jurisdiction of an arbitral tribunal established under the aegis of the ICSID is limited to disputes that arise out of an investment. Such a phenomenon has unfortunately brought on various inconsistencies. Thus, the term appears to be used in ways that sometimes make its meaning overly wide, as in securities law, in which investment refers to all kinds of operations related to financial instruments. Similarly, in matrimonial property regimes, what the French Cour de cassation calls "investment spending" includes all real estate spending. Sometimes, on the contrary, investment is still apprehended through other concepts that are a simple reflection of that very concept. Therefore, the purpose of this dissertation is to establish some consistency in the use of the term investment by proposing a legal definition of the concept and outlining some elements of its regime
Gendron, Marie-Pierre. "Utilisation de médicaments durant la grossesse et l’allaitement : données d’un centre d’information sur les tératogènes". Thèse, 2010. http://hdl.handle.net/1866/5278.
Texto completoTeratogen Information Services (TIS) are giving information on the risks and benefits associated with medication use during pregnancy and lactation, to the health care providers and the public. IMAGe Center at the CHU Sainte-Justine in Quebec is a TIS which providing since 1997 a free telephone information service to the health care providers. Two studies were conducted using the calls received at IMAGe Center. The first study included all the calls received between January 2004, and April 2007, concerning women who used or expected to use medication during pregnancy or lactation. The objectives of this study aimed to identify the most frequent medication classes, the indications of use, and the predictors of a call concerning them (associated maternal characteristics). Antidepressants, anti-inflammatory drugs, antibiotics, benzodiazepines, and anti-psychotics represented the medication classes with the greater amount of calls. These results rise to the possibility that more information about the risks and benefits associated with the use of these medication classes during pregnancy and lactation is needed by the health care providers. Depression was in the top three of the most prevalent indications of use for the antidepressants, benzodiazepines, and anti-psychotics. Smoking was associated with the use of antidepressants and anti-psychotics during pregnancy, and with a call concerning the anti-inflammatory drugs during lactation. The second study included all the calls received between January 2003, and March 2008. This study aimed to identify the impact of the Health Canada (HC) warnings, concerning the risks of antidepressant use during pregnancy, and related to the rofecoxib market withdrawal, on the number of calls received to IMAGe. Time series of the weekly number of calls received demonstrated that the Health Canada warning on the risk of cardiac malformations associated with paroxetine use during the first trimester of pregnancy generated a statistically significant abrupt and permanent increase of the calls received at IMAGe about the antidepressants. These studies ensure to better understand the information need of the health care providers concerning the risks and benefits of medication use during pregnancy and lactation.