Doctoral dissertations on the topic "Infrastructure informatique distribuée"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the 30 best doctoral dissertations for your research on the topic "Infrastructure informatique distribuée".
An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever these details are provided in the work's metadata.
Browse dissertations on a wide variety of disciplines and compile an accurate bibliography.
Deneault, Sébastien. "Infrastructure distribuée permettant la détection d'attaques logicielles". Mémoire, Université de Sherbrooke, 2013. http://hdl.handle.net/11143/6170.
Gallard, Jérôme. "Flexibilité dans la gestion des infrastructures informatiques distribuées". Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00625278.
Rojas, Balderrama Javier. "Gestion du cycle de vie de services déployés sur une infrastructure de calcul distribuée en neuroinformatique". Phd thesis, Université de Nice Sophia-Antipolis, 2012. http://tel.archives-ouvertes.fr/tel-00804893.
Garonne, Vincent. "Etude, définition et modélisation d'un système distribué à grande échelle : DIRAC - Distributed infrastructure with remote agent control". Aix-Marseille 2, 2005. http://theses.univ-amu.fr.lama.univ-amu.fr/2005AIX22057.pdf.
Chuchuk, Olga. "Optimisation de l'accès aux données au CERN et dans la Grille de calcul mondiale pour le LHC (WLCG)". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4005.
The Worldwide LHC Computing Grid (WLCG) offers an extensive distributed computing infrastructure dedicated to the scientific community involved with CERN's Large Hadron Collider (LHC). With storage totalling roughly an exabyte, the WLCG addresses the data processing and storage requirements of thousands of international scientists. As the High-Luminosity LHC phase approaches, the volume of data to be analysed will increase steeply, outpacing the expected gains from advances in storage technology. New approaches to effective data access and management, such as caches, therefore become essential. This thesis presents a comprehensive exploration of storage access within the WLCG, aiming to enhance the aggregate science throughput while limiting cost. Central to this research is the analysis of real file access logs sourced from the WLCG monitoring system, highlighting genuine usage patterns. In a scientific setting, caching has profound implications. Unlike more commercial applications such as video streaming, scientific data caches deal with widely varying file sizes, from a few bytes to multiple terabytes, and the inherent logical associations between files considerably influence user access patterns. Traditional caching research has predominantly revolved around uniform file sizes and independent reference models, assumptions that scientific workloads violate. My investigations show how the LHC's hierarchical data organization, particularly its compartmentalization into datasets, impacts request patterns. Building on this observation, I introduce innovative caching policies that exploit dataset-specific knowledge, and compare their effectiveness with traditional file-centric strategies.
Furthermore, my findings underscore the "delayed hits" phenomenon triggered by limited connectivity between computing and storage locations, shedding light on its repercussions for caching efficiency. Acknowledging the long-standing challenge of predicting data popularity in the High Energy Physics (HEP) community, especially given the storage constraints of the upcoming HL-LHC era, my research integrates Machine Learning (ML) tools. Specifically, I employ the Random Forest algorithm, known for its suitability for Big Data. By harnessing ML to predict future file reuse patterns, I present a two-stage method to inform cache eviction policies. This strategy combines predictive analytics with established cache eviction algorithms, thereby devising a more resilient caching system for the WLCG. In conclusion, this research underscores the significance of robust storage services, suggesting a move towards stateless caches for smaller sites to alleviate complex storage management requirements and open the path to an additional level in the storage hierarchy. Through this thesis, I aim to navigate the challenges and complexities of data storage and retrieval, crafting more efficient methods that meet the evolving needs of the WLCG and its global community.
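The dataset-level eviction idea described above can be sketched in a few lines. This is a toy model with hypothetical names, not code from the thesis: the cache tracks recency per dataset rather than per file, so when space runs out the least-recently-used dataset is evicted as a whole, reflecting the observation that LHC files are accessed in dataset bursts.

```python
from collections import OrderedDict

class DatasetAwareCache:
    """Toy cache that tracks recency per dataset rather than per file."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.datasets = OrderedDict()  # dataset -> {file: size}

    def access(self, dataset, filename, size):
        files = self.datasets.pop(dataset, {})
        if filename not in files:
            self.used += size
            files[filename] = size
        self.datasets[dataset] = files  # move dataset to the MRU position
        # On overflow, evict whole datasets starting from the LRU one.
        while self.used > self.capacity and len(self.datasets) > 1:
            _victim, victim_files = self.datasets.popitem(last=False)
            self.used -= sum(victim_files.values())

cache = DatasetAwareCache(capacity_bytes=100)
cache.access("dsA", "f1", 40)
cache.access("dsB", "f2", 40)
cache.access("dsA", "f3", 30)  # dsA becomes MRU; dsB is evicted wholesale
```

A file-centric LRU would instead have evicted `f1`, splitting dataset `dsA` across cache and origin storage.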
Tato, Genc. "Lazy and locality-aware building blocks for fog middleware : a service discovery use case". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S079.
In the last decade, cloud computing has grown to become the standard deployment environment for most distributed applications. While cloud providers have continuously extended their coverage to new locations worldwide, the distance between their datacenters and end users still often translates into significant latency and network utilization. With the advent of new families of applications such as virtual/augmented reality and self-driving vehicles, which require very low latency, or the IoT, which generates enormous amounts of data, the current centralized cloud infrastructure has proven unable to support their stringent requirements. This has shifted the focus to more distributed alternatives such as fog computing. Although the premises of such infrastructures seem auspicious, a standard fog management platform has yet to emerge; consequently, significant attention is dedicated to capturing the right design requirements for delivering on those premises. In this dissertation, we aim at designing building blocks that provide basic functionalities for fog management tasks. Starting from the basic fog principle of preserving locality, we design a lazy and locality-aware overlay network called Koala, which provides efficient decentralized management without introducing additional traffic overhead. In order to capture additional requirements originating from the application layer, we port a well-known microservice-based application, namely Sharelatex, to a fog environment. We examine how its performance is affected and which functionalities the management layer can provide to facilitate its fog deployment and improve its performance. Based on our overlay building block and the requirements derived from the fog deployment of the application, we design a service discovery mechanism that satisfies those requirements, and integrate these components into a single prototype.
This full-stack prototype enables a complete end-to-end evaluation of these components based on real use-case scenarios.
Villebonnet, Violaine. "Scheduling and Dynamic Provisioning for Energy Proportional Heterogeneous Infrastructures". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEN057/document.
The increasing number of data centers raises serious concerns regarding their energy consumption. These infrastructures are often over-provisioned and contain servers that are not fully utilized; the problem is that inactive servers can consume as much as 50% of their peak power. This thesis proposes a novel approach for building data centers so that their energy consumption is proportional to the actual load. We propose an original infrastructure named BML, for "Big, Medium, Little", composed of heterogeneous computing resources, from low-power processors to classical servers. The idea is to take advantage of their different characteristics in terms of energy consumption, performance, and switch-on reactivity to adjust the composition of the infrastructure as the load evolves. We define a generic methodology to compute the most energy-proportional combinations of machines based on hardware profiling data. We focus on web applications whose load varies over time and design a scheduler that dynamically reconfigures the infrastructure, through application migrations and machine switch-ons and switch-offs, to minimize the infrastructure's energy consumption under the current application requirements. We have developed two dynamic provisioning algorithms that take into account the time and energy overheads of the different reconfiguration actions in the decision process. We demonstrate, through simulations based on experimentally acquired hardware profiles, that we achieve substantial energy savings compared to classical data center infrastructures and management.
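The "most energy-proportional combination" computation can be illustrated with a brute-force sketch over hardware profiles. The capacities and wattages below are invented for illustration, not taken from the thesis's profiling data:

```python
from itertools import product

# Hypothetical profiles: machine type -> (capacity in requests/s, power in W).
PROFILES = {"little": (50, 5.0), "medium": (200, 25.0), "big": (1000, 90.0)}
MAX_PER_TYPE = 8

def best_combination(load):
    """Exhaustively pick the machine mix covering `load` at minimum power."""
    best, best_power = None, float("inf")
    for counts in product(range(MAX_PER_TYPE + 1), repeat=len(PROFILES)):
        cap = sum(n * PROFILES[t][0] for n, t in zip(counts, PROFILES))
        power = sum(n * PROFILES[t][1] for n, t in zip(counts, PROFILES))
        if cap >= load and power < best_power:
            best, best_power = dict(zip(PROFILES, counts)), power
    return best, best_power
```

With these numbers, a light load is served most efficiently by "little" machines alone, while a heavy load justifies switching on a "big" server, which is the energy-proportionality effect the BML infrastructure targets.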
Ariyattu, Resmi. "Towards federated social infrastructures for plug-based decentralized social networks". Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S031/document.
In this thesis, we address two issues in the area of decentralized distributed systems: network-aware overlays and collaborative editing. Even though network overlays have been extensively studied, most solutions either ignore the underlying physical network topology or use mechanisms specific to a given platform or application. This is problematic, as the performance of an overlay network strongly depends on how its logical topology exploits the underlying physical network. To address this problem, we propose Fluidify, a decentralized mechanism for deploying an overlay network on top of a physical infrastructure while maximizing network locality. Fluidify uses a dual strategy that exploits both the logical links of an overlay and the physical topology of its underlying network to progressively align one with the other. The resulting protocol is generic, efficient, and scalable, and can substantially reduce network overhead and latency in overlay-based systems. The second issue we address concerns collaborative editing platforms. Distributed collaborative editors allow several remote users to contribute concurrently to the same document, but currently deployed editors can support only a limited number of concurrent users. A number of peer-to-peer solutions have therefore been proposed to remove this limitation and allow a large number of users to work collaboratively. These decentralized solutions assume, however, that all users are editing the same set of documents, which is unlikely to be the case. To open the path towards more flexible decentralized collaborative editors, we present Filament, a decentralized cohort-construction protocol adapted to the needs of large-scale collaborative editors. Filament eliminates the need for any intermediate DHT and allows nodes editing the same document to find each other in a rapid, efficient and robust manner by generating an adaptive routing field around themselves.
Filament's architecture hinges on a set of collaborating self-organizing overlays that exploit the semantic relations between peers. The resulting protocol is efficient and scalable, and provides beneficial load-balancing properties over the involved peers.
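Fluidify's alignment of logical and physical topologies can be caricatured by a greedy swap procedure. This is a centralized toy invented for this summary (the real protocol is decentralized): nodes sit at physical coordinates, and two nodes exchange their overlay positions whenever the swap shortens the total physical distance covered by the ring's logical links.

```python
def ring_cost(ring, coord):
    """Sum of physical distances between overlay ring neighbours."""
    return sum(abs(coord[ring[i]] - coord[ring[(i + 1) % len(ring)]])
               for i in range(len(ring)))

def align_ring(ring, coord, rounds=100):
    """Greedily swap overlay positions until no swap improves locality."""
    ring = list(ring)
    for _ in range(rounds):
        improved = False
        for i in range(len(ring)):
            for j in range(i + 1, len(ring)):
                trial = list(ring)
                trial[i], trial[j] = trial[j], trial[i]
                if ring_cost(trial, coord) < ring_cost(ring, coord):
                    ring, improved = trial, True
        if not improved:
            break
    return ring

# Nodes "a" and "c" are physically close, as are "b" and "d", yet the
# initial logical ring zigzags between the two groups.
coord = {"a": 0, "b": 10, "c": 1, "d": 11}
aligned = align_ring(["a", "b", "c", "d"], coord)
```

The initial ring costs 40 units of physical distance; the aligned ring visits the close pairs consecutively and costs 22.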
Croubois, Hadrien. "Toward an autonomic engine for scientific workflows and elastic Cloud infrastructure". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN061/document.
The constant development of scientific and industrial computation infrastructures requires the concurrent development of scheduling and deployment mechanisms to manage such infrastructures. Throughout the last decade, the emergence of the Cloud paradigm raised many hopes, but achieving full platform autonomicity is still an ongoing challenge. The work undertaken during this PhD aimed at building a workflow engine that integrates the logic needed to manage workflow execution and Cloud deployment on its own. More precisely, we focus on Cloud solutions with a dedicated Data as a Service (DaaS) data management component. Our objective was to automate the execution of workflows submitted by many users on elastic Cloud resources. This contribution proposes a modular middleware infrastructure and details the implementation of the underlying modules:
• A workflow clustering algorithm that optimises data locality in the context of DaaS-centered communications;
• A dynamic scheduler that executes clustered workflows on Cloud resources;
• A deployment manager that handles the allocation and deallocation of Cloud resources according to the workload characteristics and users' requirements.
All these modules have been implemented in a simulator to analyse their behaviour and measure their effectiveness when running both synthetic and real scientific workflows. We also implemented these modules in the Diet middleware to give it new features and prove the versatility of this approach. Simulations running the WASABI workflow (waves analysis based inference, a framework for the reconstruction of gene regulatory networks) showed that our approach can decrease the deployment cost by up to 44% while meeting the required deadlines.
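The data-locality intuition behind workflow clustering can be sketched as follows. This is a deliberately crude stand-in, not the thesis's algorithm: tasks that read a common file are grouped into one cluster, so their exchanges stay local instead of transiting through the DaaS.

```python
def cluster_by_shared_data(tasks):
    """Greedily group workflow tasks that share at least one input file.

    tasks: dict mapping task name -> set of input file names.
    Returns a list of task-name sets (the clusters).
    """
    clusters = []  # list of (member task set, union of their files)
    for task, files in tasks.items():
        for members, shared in clusters:
            if shared & files:  # shares a file with this cluster
                members.add(task)
                shared |= files
                break
        else:
            clusters.append(({task}, set(files)))
    return [members for members, _ in clusters]
```

For example, two tasks reading file `a` end up co-scheduled, while an unrelated task forms its own cluster; a scheduler can then place each cluster on one set of Cloud resources.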
Dugenie, Pascal. "Espaces Collaboratifs Ubiquitaires sur une infrastructure à ressources distribuées". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2007. http://tel.archives-ouvertes.fr/tel-00203542.
Zhu, Xiaoyang. "Building a secure infrastructure for IoT systems in distributed environments". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI038/document.
The premise of the Internet of Things (IoT) is to interconnect not only sensors, mobile devices, and computers but also individuals, homes, smart buildings, and cities, as well as electrical grids, automobiles, and airplanes, to mention a few. However, realizing the extensive connectivity of IoT while ensuring user security and privacy remains a challenge. IoT systems have many unconventional characteristics, such as scalability, heterogeneity, mobility, and limited resources, which render existing Internet security solutions inadequate for IoT-based systems. Moreover, the IoT calls for peer-to-peer networks in which users, as owners, intend to set security policies to control their devices or services instead of relying on centralized third parties. Focusing on the scientific challenges raised by the IoT's unconventional characteristics and by user-centric security, we propose an IoT security infrastructure enabled by blockchain technology and driven by trustless peer-to-peer networks. Our infrastructure allows not only the identification of individuals and collectives but also the trusted identification of IoT things through their owners, by referring to the blockchain in trustless peer-to-peer networks. The blockchain provides our infrastructure with a trustless, immutable and public ledger that records the identities of individuals and collectives, which facilitates the design of a simplified authentication protocol for IoT without relying on third-party identity providers. Our infrastructure also adopts a socialized IoT paradigm that allows all IoT entities (namely individuals, collectives, and things) to establish relationships, making the IoT an extensible and ubiquitous network in which owners can take advantage of relationships to set access policies for their devices or services.
Furthermore, in order to protect the operations of our infrastructure against security threats, we also introduce an autonomic threat detection mechanism, complementary to our access control framework, which continuously monitors the anomalous behavior of device or service operations.
Mechtri, Marouen. "Virtual networked infrastructure provisioning in distributed cloud environments". Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0028.
Cloud computing emerged as a new paradigm for on-demand provisioning of IT resources and for infrastructure externalization, and it is rapidly and fundamentally revolutionizing the way IT is delivered and managed. The resulting incremental Cloud adoption is fostering, to some extent, cooperation among cloud providers, and it is increasing both tenants' needs and the complexity of their demands. Tenants need to network their distributed and geographically spread cloud resources and services. They also want to accomplish their deployments and instantiations easily across heterogeneous cloud platforms. Traditional cloud providers focus on provisioning compute resources and offer mostly virtual machines to tenants and cloud service consumers, who actually expect full-fledged (complete) networking of their virtual and dedicated resources. They not only want to control and manage their applications but also to control connectivity, so as to deploy complex network functions and services easily in their dedicated virtual infrastructures. Users' needs are thus growing beyond the simple provisioning of virtual machines towards the acquisition of complex, flexible, elastic and intelligent virtual resources and services. The goal of this thesis is to enable the provisioning and instantiation of this type of more complex resource while empowering tenants with control and management capabilities, and to enable the convergence of cloud and network services. To reach these goals, the thesis proposes mapping algorithms for optimized in-datacenter and in-network resource hosting according to the tenants' virtual infrastructure requests. In parallel with the emergence of cloud services, traditional networks are being extended and enhanced with software networks relying on the virtualization of network resources and functions.
Software Defined Networks are especially relevant as they decouple network control from data forwarding and provide the needed network programmability along with system and network management capabilities. In this context, the first part of the thesis proposes optimal (exact) and heuristic placement algorithms to find the best mapping between the tenants' requests and the hosting infrastructures while respecting the objectives expressed in the demands. This includes localization constraints that place some of the virtual resources and services in the same host and distribute other resources across distinct hosts. The proposed algorithms achieve simultaneous node (host) and link (connection) mappings. A heuristic algorithm is proposed to address the poor scalability and high complexity of the exact solutions. The heuristic scales much better and converges several orders of magnitude faster towards near-optimal and optimal solutions. This is achieved by reducing the complexity of the mapping process, using topological patterns to map virtual graphs representing the tenants' requests onto physical graphs representing the providers' infrastructures. The proposed approach relies on decomposing graphs into topology patterns and on bipartite graph matching techniques. The third part proposes an open-source Cloud Networking framework for cloud and network resource provisioning and instantiation, in order to respectively host and activate the tenants' virtual resources and services. This framework enables and facilitates dynamic networking of distributed cloud services and applications. The solution relies on a Cloud Network Gateway Manager (CNG-Manager) and gateways to establish dynamic connectivity between cloud and network resources.
The CNG-Manager provides application networking control and supports the deployment of the needed underlying network functions in the tenant's desired infrastructure (or slice, since the physical infrastructure is shared by multiple tenants, each receiving a dedicated and isolated share of the physical resources).
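The exact node-and-link mapping problem can be illustrated by an exhaustive toy formulation, invented for this summary (the thesis's exact algorithms and pattern-based heuristic are considerably more sophisticated): virtual nodes with CPU demands are assigned to hosts of limited capacity so that the total network distance spanned by the virtual links is minimal.

```python
from itertools import product

def exact_mapping(vnodes, vlinks, hosts, dist):
    """Brute-force simultaneous node and link mapping.

    vnodes: list of CPU demands, indexed by virtual node id.
    vlinks: list of (i, j) virtual links between virtual node ids.
    hosts:  dict host name -> CPU capacity.
    dist:   dict (hostA, hostB) -> network distance (0 on the same host).
    Returns (assignment tuple, total link cost) or (None, inf) if infeasible.
    """
    names = list(hosts)
    best, best_cost = None, float("inf")
    for assign in product(names, repeat=len(vnodes)):
        load = {h: 0 for h in names}
        for i, h in enumerate(assign):
            load[h] += vnodes[i]
        if any(load[h] > hosts[h] for h in names):
            continue  # capacity violated
        cost = sum(dist[assign[i], assign[j]] for i, j in vlinks)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```

Such exhaustive search is exactly what scales poorly, which is why the thesis complements it with a pattern-based heuristic.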
Ahmed, Arif. "Efficient cloud application deployment in distributed fog infrastructures". Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S004.
Fog computing architectures are composed of a large number of machines distributed across a geographical area such as a city or a region. In this context it is important to support a quick startup of applications deployed in the form of Docker containers. This thesis explores the reasons for slow deployment and identifies three improvement opportunities: (1) improving the Docker cache hit rate; (2) speeding up the image installation operation; and (3) accelerating the application boot phase after the creation of a container.
Di, Lena Giuseppe. "Emulation fiable et distribuée de réseaux virtualisés et programmables sur bancs de test et infrastructures cloud". Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4028.
In recent years, there have been multiple advances in virtualization technologies, cloud computing, and network programmability. The emergence of concepts like Software Defined Networking (SDN) and Network Function Virtualization (NFV) is changing the way Internet Service Providers manage their services. In parallel, the last decade witnessed the rise of secure public cloud platforms like Amazon AWS and Microsoft Azure. These new concepts lead to cost reductions and fast innovation, driving their adoption by the industry. All these changes also bring new challenges: networks have become huge and complex while providing many kinds of services, and testing them is increasingly complicated and resource-intensive. To tackle these issues, we propose a new tool that combines emulation technologies and optimization techniques to distribute SDN/NFV experiments across private test-beds and public cloud platforms. Cloud providers generally give users specific metrics for the CPU and memory resources of the services they propose, but tend to give only a high-level overview of the network delay, without any specific values. This is a problem when deploying a delay-sensitive application in the cloud, since users have no precise data about the delay. We propose a testing framework to monitor the network delay between multiple datacenters in cloud infrastructures. Finally, in the context of SDN/NFV networks, we exploit the SDN's centralized logic to implement an optimal routing strategy in case of multiple link failures in the network. We also created a test-bed environment to validate our proposition on different network topologies.
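The controller-side rerouting logic can be sketched as a hop-count BFS over the topology minus the failed links. This simplification is ours (the thesis computes an optimal routing strategy, not merely a shortest path), but it shows how centralized SDN knowledge makes recovery from multiple link failures a single recomputation:

```python
from collections import deque

def reroute(topology, src, dst, failed):
    """Recompute a shortest path (in hops) avoiding failed links.

    topology: dict node -> set of neighbour nodes (undirected).
    failed:   set of (u, v) links, direction-insensitive.
    Returns a node list from src to dst, or None if unreachable.
    """
    dead = {frozenset(link) for link in failed}
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topology[path[-1]]:
            if nxt not in seen and frozenset((path[-1], nxt)) not in dead:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable after the failures
```

On a square topology a-b-c-d, failing link (a, b) forces traffic from a to c through d; failing (d, c) as well disconnects the pair.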
Moise, Diana Maria. "Optimizing data management for MapReduce applications on large-scale distributed infrastructures". Thesis, Cachan, Ecole normale supérieure, 2011. http://www.theses.fr/2011DENS0067/document.
Data-intensive applications are nowadays widely used in various domains to extract and process information, to design complex systems, to perform simulations of real models, etc. These applications exhibit challenging requirements in terms of both storage and computation. Specialized abstractions like Google's MapReduce were developed to efficiently manage the workloads of data-intensive applications. The MapReduce abstraction has revolutionized the data-intensive community and has rapidly spread to various research and production areas. An open-source implementation of Google's abstraction was provided by Yahoo! through the Hadoop project; this framework is considered the reference MapReduce implementation and is currently heavily used for various purposes and on several infrastructures. To achieve high-performance MapReduce processing, we propose a concurrency-optimized file system for MapReduce frameworks. As a starting point, we rely on BlobSeer, a framework designed to efficiently store data generated by data-intensive applications running at large scale. We have built the BlobSeer File System (BSFS) with the goal of providing high throughput under heavy concurrency to MapReduce applications. We also study several aspects related to intermediate data management in MapReduce frameworks, investigating the requirements of MapReduce intermediate data at two levels: inside the same job, and during the execution of pipelined applications. Finally, we show how BSFS can enable extensions to the de facto MapReduce implementation, Hadoop, such as support for the append operation. This work also presents the evaluations and results obtained in grid and cloud environments.
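The MapReduce abstraction this work builds on fits in a dozen lines of in-memory Python. The `shuffle` dictionary below plays the role of the intermediate data whose management the thesis studies; real frameworks such as Hadoop (and BSFS underneath) persist and distribute all three stages across machines:

```python
from collections import defaultdict

def run_mapreduce(inputs, map_fn, reduce_fn):
    """Minimal in-memory MapReduce: map, shuffle by key, then reduce."""
    shuffle = defaultdict(list)
    for record in inputs:
        for key, value in map_fn(record):
            shuffle[key].append(value)      # intermediate key/value pairs
    return {k: reduce_fn(k, vs) for k, vs in shuffle.items()}

# Classic word count over two input "splits".
counts = run_mapreduce(
    ["a b", "b c b"],
    map_fn=lambda line: [(w, 1) for w in line.split()],
    reduce_fn=lambda key, values: sum(values),
)
```

The point of a concurrency-optimized backing store is precisely that, at scale, many mappers append to and many reducers read from this intermediate data at the same time.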
Vervaet, Arthur. "Automated Log-Based Anomaly Detection within Cloud Computing Infrastructures". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS548.
Cloud computing aims to optimize resource utilization while accommodating a large user base and elastic services. In this context, cloud computing platforms bear the responsibility of managing their customers' infrastructure, and the management of an ever-expanding number of IT resources poses a significant challenge. In this study, conducted in collaboration with 3DS OUTSCALE, a French public cloud provider, we investigate the potential of log data as a valuable source for automated anomaly detection within cloud computing platforms. Logs serve as a widely utilized information source for various purposes, including monitoring, diagnosis, performance evaluation, and maintenance. They are generated at runtime and provide insights into the current state of a system. However, achieving automated real-time anomaly detection based on log data remains a complex undertaking. The intricate nature of cloud computing platforms must be duly considered: extracting relevant information from a multitude of logging sources while accounting for frequent code-base evolution is challenging and error-prone, and establishing relationships between logs in such systems is often an insurmountable task. Log parsing solutions aim to extract the variables from the templates of log messages. Our first contribution is a comprehensive study of two state-of-the-art log parsing methods, investigating the impact of hyperparameter tuning and preprocessing on their accuracy. Given the laborious nature of labeling the logs of a cloud computing platform, we sought generic hyperparameter values that would enable accurate parsing across diverse scenarios. Our research reveals, however, that no such values exist, emphasizing the need for more robust parsing approaches.
Our second contribution introduces USTEP, an innovative online log parsing approach that surpasses existing methods in terms of accuracy, efficiency, and robustness. Notably, USTEP achieves a constant worst-case parsing time complexity, distinguishing it from its predecessors, whose parsing time grows with the number of already detected templates. Through a comparative analysis of five online log parsers on 13 open-source datasets and one derived from 3DS OUTSCALE systems, we demonstrate the superior performance of USTEP. Furthermore, we propose USTEP-UP, an architecture that enables the distributed execution of multiple USTEP instances. Our third contribution presents Monilog, a system architecture designed for automated log-based anomaly detection within log data streams. Monilog leverages model/metric pairs to predict log traffic patterns within a system and detects anomalies by identifying deviations in system behavior. Monilog's forecasting models build on recent advances in deep learning, and Monilog can generate comprehensive reports that highlight the relevant system components and the associated applications. We implemented an instance of Monilog at cloud scale and conducted experimental analyses to evaluate its ability to forecast anomalous events, such as server crashes resulting from virtualization issues. The results obtained strongly support our hypothesis regarding the utility of logs in detecting and predicting abnormal events: our Monilog implementation successfully identified abnormal periods and provided valuable insights into the applications involved. With Monilog, we demonstrate the value of logs for predicting anomalies in such environments and provide a flexible architecture for future studies. Our work on parsing, with the proposal of USTEP and USTEP-UP, not only provides additional information for building anomaly detection models but also has potential benefits for other log mining applications.
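Template extraction, the task USTEP addresses, can be illustrated with a toy parser. The matching rule below is invented for brevity and bears no relation to USTEP's internal structure: messages with the same token count are compared, and positions that differ are replaced by a wildcard, separating the constant template from the variables.

```python
def merge_template(template, tokens):
    """Replace positions that differ with the wildcard '<*>'."""
    return [a if a == b else "<*>" for a, b in zip(template, tokens)]

def parse_logs(lines):
    """Tiny online log parser: bucket messages by token count and merge
    a message into the first template sharing a majority of its tokens."""
    templates = {}  # token count -> list of token-list templates
    for line in lines:
        tokens = line.split()
        bucket = templates.setdefault(len(tokens), [])
        for i, tmpl in enumerate(bucket):
            same = sum(a == b for a, b in zip(tmpl, tokens))
            if same >= len(tokens) // 2 + 1:  # majority of tokens match
                bucket[i] = merge_template(tmpl, tokens)
                break
        else:
            bucket.append(tokens)  # no match: start a new template
    return [" ".join(t) for b in templates.values() for t in b]
```

Two connection messages differing only in the IP address collapse into one template, while an unrelated message yields a template of its own; the fragility of such fixed thresholds is exactly what motivates more robust parsers.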
Alvares, De Oliveira Junior Frederico. "Gestion multi autonome pour l'optimisation de la consommation énergétique sur les infrastructures en nuage". Phd thesis, Université de Nantes, 2013. http://tel.archives-ouvertes.fr/tel-00853575.
Trabé, Patrick. "Infrastructure réseau coopérative et flexible de défense contre les attaques de déni de service distribué". Toulouse 3, 2006. http://www.theses.fr/2006TOU30288.
The goal of Distributed Denial of Service (DDoS) attacks is to prevent legitimate users from using a service. The availability of the service is attacked by sending altered packets to the victim. These packets either consume a large part of the network's bandwidth or create artificial consumption of the victim's key resources, such as memory or CPU. Filtering DDoS traffic remains an important problem for network operators, since illegitimate traffic looks like legitimate traffic, and discriminating between the two classes is a hard task. Moreover, DDoS victims are not limited to end systems (e.g., web servers); the network itself is likely to be attacked. The approach presented in this thesis is pragmatic. Firstly, it seeks to control the dynamic and distributed aspects of DDoS. Secondly, it aims to protect legitimate traffic and the network against collateral damage. We thus propose a distributed defense infrastructure based on nodes dedicated to the analysis and filtering of illegitimate traffic. Each node is associated with one POP router or interconnection router in order to facilitate its integration into the network. These nodes provide the required programmability through open interfaces. This programmability offers application-level packet processing, and thus treatment without collateral damage. A prototype has been developed that validates our concepts.
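One classic building block for the kind of per-source filtering such nodes can perform is a token bucket, which caps each sender's rate while letting conforming traffic pass. This is a generic sketch of that technique, not the mechanism proposed in the thesis; the rate and capacity values are illustrative assumptions:

```python
class TokenBucket:
    """Per-source rate limiter: refills `rate` tokens per second,
    allows bursts of up to `capacity` packets."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start with a full burst allowance
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens in proportion to elapsed time, then spend one
        # token per admitted packet; drop when the bucket is empty.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=10.0, capacity=5.0)
# A flood of 20 packets arriving at the same instant: only the
# burst allowance of 5 packets is admitted, the rest are dropped.
verdicts = [bucket.allow(now=0.0) for _ in range(20)]
print(sum(verdicts))  # 5
```

In a deployment, one bucket per source (or per aggregate) would run on the filtering node; the hard part, as the abstract notes, is deciding which traffic is illegitimate in the first place.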
Tsafack, Chetsa Ghislain Landry. "Profilage système et leviers verts pour les infrastructures distribuées à grande échelle". Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2013. http://tel.archives-ouvertes.fr/tel-00925320.
Ameziane El Hassani Abdeljebar. "Le contrôle d'accès des réseaux et grandes infrastructures critiques distribuées". Phd thesis, Toulouse, INPT, 2016. http://oatao.univ-toulouse.fr/15962/1/ameziane.pdf.
Dibo, Mariam. "UDeploy : une infrastructure de déploiement pour les applications à base de composants logiciels distribués". Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00685853.
Tsafack, Chetsa Ghislain Landry. "System Profiling and Green Capabilities for Large Scale and Distributed Infrastructures". Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2013. http://tel.archives-ouvertes.fr/tel-00946583.
Quesnel, Flavien. "Vers une gestion coopérative des infrastructures virtualisées à large échelle : le cas de l'ordonnancement". Phd thesis, Ecole des Mines de Nantes, 2013. http://tel.archives-ouvertes.fr/tel-00821103.
Moise, Diana. "Optimisation de la gestion des données pour les applications MapReduce sur des infrastructures distribuées à grande échelle". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00653622.
Moise, Diana Maria. "Optimisation de la gestion des données pour les applications MapReduce sur des infrastructures distribuées à grande échelle". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00696062.
Decotigny, David. "Une infrastructure de simulation modulaire pour l'évaluation de performances de systèmes temps-réel". Phd thesis, Université Rennes 1, 2003. http://tel.archives-ouvertes.fr/tel-00003582.
Fontan, Benjamin. "Méthodologie de conception de systèmes temps réel et distribués en contexte UML/SysML". Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00258430.
Daouda, Ahmat mahamat. "Définition d'une infrastructure de sécurité et de mobilité pour les réseaux pair-à-pair recouvrants". Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0186/document.
Securing communications in distributed dynamic environments, which lack a central coordination point and whose topology changes constantly, is a major challenge. We tackle this challenge for today's P2P systems. In this thesis, we propose a security infrastructure suited to the constraints and issues of P2P systems. The first part of this document presents the design of SEMOS, our middleware solution for managing and securing mobile sessions. SEMOS ensures that communication sessions are secure and remain active despite the disconnections that can occur when network configurations change or a malfunction arises. This roaming capability is implemented via the definition of a new addressing space that decouples the addresses of network entities from their names; the naming space is then based on distributed hash tables (DHT). The second part of the document presents a generic and distributed key exchange mechanism befitting P2P architectures. Building on disjoint paths and end-to-end exchange, the proposed key management protocol combines the Diffie-Hellman algorithm with Shamir's (k, n) threshold scheme. On the one hand, the use of disjoint paths to route subkeys compensates for the absence of a trusted certification third party inherent in plain Diffie-Hellman and reduces, at the same time, its vulnerability to interception attacks. On the other hand, extending the Diffie-Hellman algorithm with the (k, n) threshold scheme substantially increases its robustness, in particular during key splitting and/or in the case of accidental or intentional subkey routing failures. Finally, we rely on a virtual mobile network to assess the setup of secure mobile sessions. The key management mechanism is then evaluated in an environment with randomly generated P2P topologies.
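The key-splitting step described in this abstract can be illustrated with a toy Shamir (k, n) sharing over a small prime field. This is a sketch of the standard threshold scheme, not the thesis's full protocol (which combines it with Diffie-Hellman over disjoint paths); the prime and secret are illustrative, and real deployments use much larger field sizes:

```python
import random

P = 2**13 - 1  # small Mersenne prime for illustration only

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=1234, k=3, n=5)  # one share per disjoint path
recovered = reconstruct(shares[:3])    # any 3 of the 5 shares suffice
print(recovered)  # 1234
```

The robustness claimed in the abstract follows directly: an attacker observing fewer than k of the n disjoint paths learns nothing about the subkey, and up to n - k shares can be lost in transit without preventing reconstruction.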
Rachedi, Abderrezak. "Contributions à la sécurité dans les réseaux mobiles ad Hoc". Phd thesis, Université d'Avignon, 2008. http://tel.archives-ouvertes.fr/tel-00683602.