Theses on the topic « Informatique nuage » (cloud computing)
Browse the 50 best theses for your research on the topic « Informatique nuage ».
Degoutin, Stéphane. « Société-nuage ». Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC1009.
This book unfolds like a Chinese landscape painting through which the viewer's gaze wanders slowly. I describe a panorama. It is not made of mountains in the mist or bushes swept by the wind, but of data centers, automated warehouses, social network feeds... I explore the hypothesis that the Internet is part of a general process that reduces society and materials to small-scale components, which allow its mechanisms to become more fluid. A chemist's idea – the decomposition of matter into powder to facilitate its recomposition – is also applied to social relations, memory and humans in general. Just as the reduction of matter accelerates chemical reactions, the reduction of society to powder allows for an accelerated decomposition and recomposition of everything humans are made of. It multiplies the reactions within society and accelerates the productions of humanity and its social chemistry: the combination of human passions (Charles Fourier), the hyperfragmentation of work (Mechanical Turk), the decomposition of knowledge (Paul Otlet), the Internet of neurons (Michael Chorost), the aggregation of micro-affects (Facebook). This is what I call the « society as cloud ».
Etchevers, Xavier. « Déploiement d'applications patrimoniales en environnements de type informatique dans le nuage ». Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00875559.
Etchevers, Xavier. « Déploiement d’applications patrimoniales en environnements de type informatique dans le nuage ». Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM100/document.
Cloud computing aims to cut down on the outlay and operational expenses involved in setting up and running applications. To do this, an application is split into a set of virtualized hardware and software resources. This virtualized application can be autonomously managed, making it responsive to the dynamic changes affecting its running environment. This is referred to as Application Life-cycle Management (ALM). In cloud computing, ALM is a growing but immature market, with many offers claiming to significantly improve productivity. However, all these solutions are faced with a major restriction: the duality between the level of autonomy they offer and the type of applications they can handle. To address this, this thesis focuses on managing the initial deployment of an application to demonstrate that the duality is artificial. The main contributions of this work are presented in a platform named VAMP (Virtual Applications Management Platform). VAMP can deploy any legacy application distributed in the cloud, in an autonomous, generic and reliable way. It consists of:
• a component-based model to describe the elements making up an application and their projection on the running infrastructure, as well as the dependencies binding them in the applicative architecture;
• an asynchronous, distributed and reliable protocol for self-configuration and self-activation of the application;
• mechanisms ensuring the reliability of the VAMP system itself.
Beyond implementing the solution, the most critical aspects of running VAMP have been formally verified using model checking tools. A validation step was also used to demonstrate the genericity of the proposal through various real-life implementations.
Gadhgadhi, Ridha. « Openicra : vers un modèle générique de déploiement automatisé des applications dans le nuage informatique ». Mémoire, École de technologie supérieure, 2013. http://espace.etsmtl.ca/1222/1/GADHGADHI_Ridha.pdf.
Lejemble, Thibault. « Analyse multi-échelle de nuage de points ». Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30184.
3D acquisition techniques like photogrammetry and laser scanning are commonly used in numerous fields such as reverse engineering, archeology, robotics and urban planning. The main objective is to get virtual versions of real objects in order to visualize, analyze and process them easily. Acquisition techniques become more and more powerful and affordable, which creates a pressing need to process the resulting varied and massive 3D data efficiently. Data are usually obtained in the form of an unstructured 3D point cloud sampling the scanned surface. Traditional signal processing methods cannot be applied directly due to the lack of spatial parametrization: points are only represented by their 3D coordinates, without any particular order. This thesis focuses on the notion of scale of analysis, defined by the size of the neighborhood used to locally characterize the point-sampled surface. Analyzing at different scales makes it possible to consider various shapes, which increases the pertinence of the analysis and the robustness to acquisition imperfections. We first present theoretical and practical results on curvature estimation adapted to a multi-scale and multi-resolution representation of point clouds. They are used to develop multi-scale algorithms for the recognition of planar and anisotropic shapes such as cylinders and feature curves. Finally, we propose to compute a global 2D parametrization of the underlying surface directly from the 3D unstructured point cloud.
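The multi-scale idea above can be illustrated compactly. Below is a minimal sketch (not the thesis' code; Python with numpy/scipy assumed) that estimates a PCA-based surface-variation measure at several neighborhood radii, the kind of per-scale geometric signal such analyses build on:

```python
# Surface variation at several scales: eigen-decompose the local covariance
# of each point's neighborhood and take lambda_0 / (lambda_0+lambda_1+lambda_2).
# Values near 0 indicate planar regions; larger values indicate edges/corners.
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, radii):
    tree = cKDTree(points)
    out = np.zeros((len(points), len(radii)))
    for j, r in enumerate(radii):
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, r)
            if len(idx) < 3:
                continue  # too few neighbors at this scale
            nbrs = points[idx] - points[idx].mean(axis=0)
            evals = np.linalg.eigvalsh(nbrs.T @ nbrs)  # ascending eigenvalues
            out[i, j] = evals[0] / evals.sum()
    return out

pts = np.random.rand(1000, 3)          # stand-in for a scanned point cloud
sv = surface_variation(pts, radii=[0.05, 0.1, 0.2])
```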
Malvault, Willy. « Vers une architecture pair-à-pair pour l'informatique dans le nuage ». Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00633787.
Guesmi, Asma. « Spécification et analyse formelles des politiques de sécurité dans un processus de courtage de l'informatique en nuage ». Thesis, Orléans, 2016. http://www.theses.fr/2016ORLE2010/document.
The number of cloud offerings is increasing rapidly, making it difficult for clients to select the cloud providers that fit their needs. In this thesis, we introduce a cloud service brokerage mechanism that takes the client's security requirements into account. We consider two types of client requirements: the amount of resources is represented by functional requirements, while non-functional requirements consist of security properties and placement constraints. The requirements and the offers are specified using the Alloy language. To eliminate inner conflicts within customer requirements, and to match the cloud providers' offers with these requirements, we use the Alloy formal analysis tool. The broker uses a matching algorithm to place the required resources with the adequate cloud providers, in a way that fulfills all customer requirements, including security properties. The broker checks that the placement configuration ensures all the security requirements. All these steps are done before the resources are deployed in the cloud. This makes it possible to detect conflicts and errors in the clients' requirements, so that resource vulnerabilities can be avoided after deployment.
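As a rough illustration of the broker's two checks (inner-conflict detection and requirement/offer matching), here is a toy Python analogue; the thesis itself expresses these models in Alloy, and all names and numbers below are invented:

```python
# Toy broker check: reject self-contradictory requirements, then list the
# providers whose offer satisfies both functional and security requirements.
offers = {
    "cloudA": {"cpu": 64, "security": {"encryption", "isolation"}},
    "cloudB": {"cpu": 32, "security": {"encryption"}},
}

def admissible(req):
    if req["security"] & req["forbidden"]:
        return []  # inner conflict: a property is both demanded and forbidden
    return [name for name, o in offers.items()
            if o["cpu"] >= req["cpu"] and req["security"] <= o["security"]]

req = {"cpu": 48, "security": {"encryption", "isolation"}, "forbidden": set()}
print(admissible(req))  # -> ['cloudA']
```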
Alvares, De Oliveira Junior Frederico. « Gestion multi autonome pour l'optimisation de la consommation énergétique sur les infrastructures en nuage ». Phd thesis, Université de Nantes, 2013. http://tel.archives-ouvertes.fr/tel-00853575.
Costache, Stefania. « Gestion autonome des ressources et des applications dans un nuage informatique selon une approche fondée sur un marché ». Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00925352.
Demir, Levent. « Module de confiance pour externalisation de données dans le Cloud ». Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM083/document.
Data outsourcing to the Cloud has led to new security threats. The main concern of this thesis is to protect user data and privacy. In particular, it follows two principles: decrease the necessary amount of trust towards the Cloud, and design an architecture based on a trusted module placed between the Cloud and the clients. Both principles derive from a new design approach: "Trust The Module, Not The Cloud" (TTM). Gathering all the cryptographic operations in a dedicated module brings several advantages: protection from internal and external attacks on the client side; limiting the software to the essential needs offers better control of the system; and using co-processors for cryptographic operations leads to higher performance. The thesis work is structured into three main sections. In the first section, we confront the challenges of a personal Cloud, designed to protect the users' data and based on a common and cheap single-board computer. The architecture relies on two main foundations: a transparent encryption scheme based on Full Disk Encryption (FDE), initially used for local encryption (e.g., hard disks), and a transparent distribution method that works through the iSCSI network protocol in order to outsource containers to the Cloud. In the second section, we deal with the performance issue related to FDE. By analysing the XTS-AES mode of encryption, the Linux kernel module dm-crypt and the cryptographic co-processors, we introduce a new approach called extReq, which extends the cryptographic requests sent to the co-processors. This optimisation doubled the encryption and decryption throughput. In the third and final section, we establish a Cloud for enterprises based on a more powerful and certified Hardware Security Module (HSM) dedicated to data encryption and key protection. Based on the TTM architecture, we added off-the-shelf features to provide a solution for enterprises.
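To make the FDE building block concrete, here is a minimal sector-level XTS-AES sketch using the pyca/cryptography package (an assumption of this example; it is not the thesis' dm-crypt or HSM code):

```python
# XTS-AES as used by full-disk encryption: each disk sector is encrypted
# independently, with the sector number acting as the tweak.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)  # AES-256-XTS takes a double-length (512-bit) key

def encrypt_sector(sector_no: int, plaintext: bytes) -> bytes:
    tweak = sector_no.to_bytes(16, "little")  # per-sector tweak value
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(plaintext) + enc.finalize()

ciphertext = encrypt_sector(42, b"\x00" * 512)  # one 512-byte sector
```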
Bousquet, Aline. « Application et assurance autonomes de propriétés de sécurité dans un environnement d’informatique en nuage ». Thesis, Orléans, 2015. http://www.theses.fr/2015ORLE2012/document.
Cloud environments are heterogeneous and dynamic, which makes them difficult to protect. In this thesis, we introduce a language and an architecture that can be used to express and enforce security properties in a Cloud. The language allows a Cloud user to express his security requirements without specifying how they will be enforced. It is based on contexts (to abstract the resources) and properties (to express the security requirements). The properties are then enforced through an autonomic architecture using existing and available security mechanisms (such as SELinux, PAM, iptables, or firewalld). This architecture abstracts and reuses the security capabilities of existing mechanisms: a security property is defined by a combination of capabilities and can be enforced through the collaboration of several mechanisms. The mechanisms are then automatically configured according to the user-defined properties. Moreover, the architecture offers an assurance system to detect the failure of a mechanism or an enforcement error, and can then address the problem, for instance by re-applying a property using different mechanisms. Lastly, the assurance system provides an evaluation of the properties' enforcement. This thesis hence offers an autonomic architecture to enforce and assure security in Cloud environments.
Belabed, Dallal. « Design and Evaluation of Cloud Network Optimization Algorithms ». Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066149/document.
This dissertation aims to give a deep understanding of the impact of the new Cloud paradigms on the Traffic Engineering goal, the Energy Efficiency goal, and the fairness in the throughput offered to endpoints, as well as of the new opportunities given by virtualized network functions. In the first part of the dissertation we investigate the impact of these novel features in Data Center Network (DCN) optimization, providing a comprehensive formal mathematical formulation of virtual machine placement and a metaheuristic for its resolution. We show in particular how virtual bridging and multipath forwarding impact common DCN optimization goals, Traffic Engineering and Energy Efficiency, and assess their utility in four different DCN topologies. In the second part of the dissertation our interest moves to better understanding the impact of novel flattened and modular DCN architectures on congestion control protocols, and vice versa. In fact, one of the major concerns in congestion control being the fairness in the offered throughput, the impact of the additional path diversity brought by the novel DCN architectures and protocols on the throughput of individual endpoints and aggregation points is unclear. Finally, in the third part we present preliminary work on the new Network Function Virtualization (NFV) paradigm. We provide a linear programming formulation of the virtual network function chain routing problem in a carrier network. The goal of our formulation is to find the best route in a carrier network where customer demands have to pass through a number of NFV nodes, taking into consideration the unique constraints set by NFV.
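For a flavor of what such a placement formulation looks like, here is a deliberately tiny integer program (PuLP assumed; demands, capacities and the energy proxy are invented, and the dissertation's actual model with virtual bridging and multipath is far richer):

```python
# Place VMs on servers so every VM is hosted, no server is overloaded,
# and the number of powered-on servers (a crude energy proxy) is minimal.
import pulp

vms = {"vm1": 4, "vm2": 8, "vm3": 2}   # CPU demands
servers = {"s1": 8, "s2": 16}          # CPU capacities

x = pulp.LpVariable.dicts("x", (vms, servers), cat="Binary")  # vm -> server
y = pulp.LpVariable.dicts("y", servers, cat="Binary")         # server on?

prob = pulp.LpProblem("vm_placement", pulp.LpMinimize)
prob += pulp.lpSum(y[s] for s in servers)
for v in vms:
    prob += pulp.lpSum(x[v][s] for s in servers) == 1         # host each VM once
for s in servers:
    prob += pulp.lpSum(vms[v] * x[v][s] for v in vms) <= servers[s] * y[s]
prob.solve()
print({v: next(s for s in servers if x[v][s].value() == 1) for v in vms})
```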
Kouki, Yousri. « Approche dirigée par les contrats de niveaux de service pour la gestion de l'élasticité du "nuage" ». Phd thesis, Ecole des Mines de Nantes, 2013. http://tel.archives-ouvertes.fr/tel-00919900.
Lescouet, Alexis. « Memory management for operating systems and runtimes ». Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAS008.
During the last decade, the need for computational power has increased due to the emergence and fast evolution of fields such as data analysis and artificial intelligence. This tendency is also reinforced by the growing number of services and end-user devices. Due to physical constraints, the trend for new hardware has shifted from an increase in processor frequency to an increase in the number of cores per machine. This new paradigm requires software to adapt, making the ability to manage such parallelism the cornerstone of many parts of the software stack. Directly concerned by this change, operating systems have evolved to include complex rules, each pertaining to different hardware configurations. However, more often than not, resource management units are responsible for one specific resource and make decisions in isolation. Moreover, because of the complexity and fast evolution rate of hardware, operating systems, not designed with a generic approach in mind, have trouble keeping up. Given the advance of virtualization technology, we propose a new approach to resource management in complex topologies, using virtualization to add a small software layer dedicated to resource placement between the hardware and a standard operating system. Similarly, in user-space applications, parallelism is an important lever for attaining high performance, which is why high performance computing runtimes, such as MPI, are built to increase parallelism in applications. The recent changes in modern architectures combined with fast networks have made overlapping CPU-bound computation and network communication a key part of parallel applications. While some degree of overlap might be attained manually, this is often a complex and error-prone procedure. Our proposal automatically transforms blocking communications into non-blocking ones to increase the overlapping potential. To this end, we use a separate communication thread responsible for handling communications and a memory protection mechanism to track memory accesses in communication buffers. This guarantees both progress for these communications and the largest window during which communication and computation can be processed in parallel.
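The blocking-to-non-blocking transformation at the heart of the overlap proposal can be shown in a few lines; this is a hand-written mpi4py sketch of the pattern (run with two MPI processes), not the thesis' automatic mechanism:

```python
# Post the communication first, compute while it progresses, then wait:
# the communication/computation overlap the thesis automates.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank                      # two-process example (mpirun -n 2)
buf = np.full(1_000_000, rank, dtype="d")

if rank == 0:
    req = comm.Isend([buf, MPI.DOUBLE], dest=peer)    # non-blocking send
else:
    req = comm.Irecv([buf, MPI.DOUBLE], source=peer)  # non-blocking receive

local = np.sqrt(np.arange(1, 5_000_000)).sum()        # overlapped computation
req.Wait()                                            # completion point
```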
Cherrueau, Ronan-Alexandre. « Un langage de composition des techniques de sécurité pour préserver la vie privée dans le nuage ». Thesis, Nantes, Ecole des Mines, 2016. http://www.theses.fr/2016EMNA0233/document.
A cloud service can use security techniques to ensure information privacy. These techniques protect privacy by converting the client's personal data into unintelligible text. But they can also cause the loss of some functionalities of the service. For instance, a symmetric-key cipher protects privacy by converting readable personal data into unreadable data, at the cost of losing computational functionalities over this data. This thesis claims that a cloud service has to compose security techniques to ensure information privacy without loss of functionality. This claim is based on the study of the composition of three techniques: symmetric encryption, vertical data fragmentation and client-side computation. This study shows that the composition makes the service privacy-preserving, but makes its formulation overwhelming. In response, the thesis offers a new language for writing cloud services that enforces information privacy using the composition of security techniques. This language comes with a set of algebraic laws to systematically transform a local service without protection into its cloud equivalent protected by composition. An Idris implementation harnesses the expressive Idris type system to ensure the correct composition of security techniques. Furthermore, an encoding translates the language into ProVerif, a model checker for automated reasoning about the security properties found in cryptographic protocols. This translation checks that the service preserves the privacy of its client.
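The composition being studied can be previewed with a toy example: vertical fragmentation plus symmetric encryption, with recomposition on the client side. The sketch below uses Fernet from pyca/cryptography as a stand-in cipher (an assumption; the thesis' implementation is in Idris):

```python
# Split a record vertically: the sensitive column is encrypted in one
# fragment, the computable columns stay cleartext in another fragment.
from cryptography.fernet import Fernet

f = Fernet(Fernet.generate_key())

row = {"name": "alice", "zip": "44000", "salary": "52000"}
frag_a = {"name": f.encrypt(row["name"].encode())}      # protected fragment
frag_b = {"zip": row["zip"], "salary": row["salary"]}   # queryable fragment

# Client-side computation: decrypt and re-join the two fragments locally.
rejoined = {"name": f.decrypt(frag_a["name"]).decode(), **frag_b}
assert rejoined == row
```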
Tran, Van-Hoang. « Range query processing over untrustworthy clouds ». Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S072.
Cloud computing has increasingly become a standard for saving costs and enabling elasticity. While cloud providers expand their services, concerns about the security of outsourced data hinder the widespread adoption of cloud technologies. To address this, encryption is usually used to protect confidential data stored and processed on untrustworthy clouds. Encrypting outsourced data, however, limits the functionalities of applications, since support for some fundamental functions on encrypted data is still limited. This thesis focuses on the problem of supporting range queries over encrypted data stored on clouds. Many studies have been introduced in this line of work. Nevertheless, none of the prior schemes exhibits satisfactory performance for modern systems, which require not only low-latency responses but also high scalability. In particular, most existing solutions suffer from either inefficient range query processing or privacy leaks. Even those that achieve both strong privacy protection and fast processing do not satisfy scalability requirements, namely high ingestion throughput, practical storage overhead, and lightweight updates. To overcome this limitation, we propose scalable solutions for secure range query processing that preserve efficiency and strong security. Our contributions are: (1) We adapt one of the state-of-the-art solutions to the context of high rates of incoming data, which often create bottlenecks. In other words, we introduce and integrate the notion of an index template into one of the state-of-the-art solutions so that it can cope with this context. (2) We develop an intensive-ingestion framework dedicated to secure range query processing on encrypted data. In particular, we re-design the architecture of the first contribution to make it fully distributed. A data representation and an asynchronous method are then introduced; together, they significantly increase the intake ability of the system. Besides, we adapt the framework to a stronger type of adversary (e.g., online attackers) and enhance its practicality. (3) We propose a scalable scheme for private range query processing on outsourced datasets. This scheme addresses the need for a solution that is scalable in terms of efficiency, high security, practical storage overhead, and numerous updates, which existing protocols cannot support. To this purpose, we develop our solution relying on equal-size chunks (buckets) of data and secure indexes. The former helps protect the privacy of the underlying data from the adversary, while the latter enables efficiency. To support lightweight updates, we propose to decouple the secure indexes from their buckets by using equal-size bitmaps.
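The equal-size-bucket idea of contribution (3) can be pictured with a toy index (plain Python, invented numbers; the real scheme encrypts the buckets and protects the index):

```python
# Group sorted values into fixed-cardinality buckets and keep only the
# per-bucket [min, max] bounds as the index; a range query fetches every
# overlapping bucket, and the client filters after decryption.
BUCKET = 4  # items per bucket (toy size)

def build_buckets(values):
    vals = sorted(values)
    buckets = [vals[i:i + BUCKET] for i in range(0, len(vals), BUCKET)]
    index = [(b[0], b[-1]) for b in buckets]
    return buckets, index

def query(index, a, b):
    return [i for i, (lo, hi) in enumerate(index) if hi >= a and lo <= b]

buckets, index = build_buckets([3, 9, 14, 2, 27, 41, 8, 30])
print(query(index, 10, 30))  # -> [1]: only the second bucket is fetched
```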
Obame, Meye Pierre. « Sûreté de fonctionnement dans le nuage de stockage ». Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S091/document.
The quantity of data in the world is steadily increasing, challenging storage system providers to find ways to handle data efficiently, both in terms of dependability and in a cost-effective manner. We have been interested in cloud storage, a growing trend in data storage solutions. For instance, the International Data Corporation (IDC) predicts that by 2020, nearly 40% of the data in the world will be stored or processed in a cloud. This thesis addresses challenges around data access latency and dependability in cloud storage. We propose Mistore, a distributed storage system designed to ensure data availability, durability and low access latency by leveraging the Digital Subscriber Line (xDSL) infrastructure of an Internet Service Provider (ISP). Mistore uses the available storage resources of a large number of home gateways and Points of Presence for content storage and caching facilities. Mistore also targets data consistency by providing multiple types of consistency criteria on content and a versioning system. We also consider data security and confidentiality in the context of storage systems applying data deduplication, which is becoming one of the most popular techniques to reduce storage costs, and we design a two-phase data deduplication scheme that is secure against malicious clients while remaining efficient in terms of network bandwidth and storage space savings.
Oesau, Sven. « Modélisation géométrique de scènes intérieures à partir de nuage de points ». Thesis, Nice, 2015. http://www.theses.fr/2015NICE4034/document.
Geometric modeling and semantization of indoor scenes from sampled point data is an emerging research topic. Recent advances in acquisition technologies provide highly accurate laser scanners and low-cost handheld RGB-D cameras for real-time acquisition. However, the processing of large data sets is hampered by high amounts of clutter and various defects such as missing data, outliers and anisotropic sampling. This thesis investigates three novel methods for efficient geometric modeling and semantization from unstructured point data: shape detection, classification and geometric modeling. Chapter 2 introduces two methods for abstracting the input point data with primitive shapes. First, we propose a line extraction method to detect wall segments from a horizontal cross-section of the input point cloud. Second, we introduce a region growing method that progressively detects and reinforces regularities of planar shapes. This method utilizes regularities common to man-made architecture, i.e. coplanarity, parallelism and orthogonality, to reduce complexity and improve data fitting in defect-laden data. Chapter 3 introduces a method based on statistical analysis for separating clutter from structure. We also contribute a supervised machine learning method for object classification based on sets of planar shapes. Chapter 4 introduces a method for 3D geometric modeling of indoor scenes. We first partition the space using primitive shapes detected from permanent structures. An energy formulation is then used to solve an inside/outside labeling of the space partitioning, the latter providing robustness to missing data and outliers.
Gallard, Jérôme. « Flexibilité dans la gestion des infrastructures informatiques distribuées ». Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00625278.
Nguyen, Thuy Linh. « Fast delivery of virtual machines and containers : understanding and optimizing the boot operation ». Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0147/document.
The provisioning process of a Virtual Machine (VM) or a container is a succession of three complex stages: (i) scheduling the VM / container on an appropriate compute node; (ii) transferring the VM / container image to that compute node from a repository; (iii) and finally performing the VM / container boot process. Depending on the properties of the client's request and the status of the platform, each of these three phases can impact the total duration of the provisioning operation. While many works focused on optimizing the first two stages, only a few investigated the impact of the boot duration. This came as a surprise to us, as a preliminary study we conducted showed that the boot time of a VM / container can last up to a few minutes in highly consolidated scenarios. To understand the major reasons for such overheads, we performed up to 15k experiments on top of Grid'5000, booting VMs / containers under different environmental conditions. The results showed that the most influential factor is the I/O operations. To accelerate the boot process, we defend in this thesis the design of a dedicated mechanism to mitigate the number of generated I/O operations. We demonstrate the relevance of this proposal by discussing a first prototype entitled YOLO (You Only Load Once). Thanks to YOLO, the boot duration can be 2-13 times faster for VMs and 2 times faster for containers. Finally, it is noteworthy that the way YOLO has been designed enables it to be easily applied to other virtualization (e.g., Xen) and containerization technologies.
Giraud, Matthieu. « Secure Distributed MapReduce Protocols : How to have privacy-preserving cloud applications ? » Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC033/document.
In the age of social networks and connected objects, many and diverse data are produced at every moment. The analysis of these data has led to a new science called "Big Data". To best handle this constant flow of data, new calculation methods have emerged. This thesis focuses on cryptography applied to the processing of large volumes of data, with the aim of protecting user data. In particular, we focus on securing algorithms that use the distributed computing MapReduce paradigm to perform a number of primitives essential for data processing, ranging from the calculation of graph metrics (e.g. PageRank) to SQL queries (i.e. set intersection, aggregation, natural join). In the first part of this thesis, we discuss matrix multiplication. We first describe a standard and secure matrix multiplication for the MapReduce architecture that is based on the Paillier additive encryption scheme to guarantee the confidentiality of the data. The proposed algorithms correspond to a specific security hypothesis: collusion or not of MapReduce cluster nodes, the general security model being honest-but-curious. The aim is to protect the confidentiality of both matrices, as well as the final result, and this for all participants (matrix owners, calculation nodes, user wishing to compute the result). We also adapt the Strassen-Winograd matrix multiplication algorithm, whose asymptotic complexity is O(n^log2(7)) ≈ O(n^2.81), an improvement over standard matrix multiplication, to the MapReduce paradigm. The security assumption adopted here is limited to non-collusion between the cloud and the end user; this version also uses the Paillier encryption scheme. The second part of this thesis focuses on data protection when relational algebra operations are delegated to a public cloud server using the MapReduce paradigm. In particular, we present a secure intersection solution that allows a cloud user to obtain the intersection of n > 1 relations belonging to n data owners. In this solution, all data owners share a key, and a selected data owner shares a key with each of the remaining owners. Therefore, while this specific data owner stores n keys, the other owners only store two. The encryption of the real relation tuples combines asymmetric encryption with a pseudo-random function. Once the data is stored in the cloud, each reducer is assigned a specific relation; if there are n different elements, XOR operations are performed. The proposed solution is very effective. Next, we describe variants of grouping and aggregation operations that preserve confidentiality, in terms of performance and security. The proposed solutions combine pseudo-random functions with homomorphic encryption for the COUNT, SUM and AVG operations, and order-preserving encryption for MIN and MAX. Finally, we offer secure versions of two protocols (cascade and hypercube) adapted to the MapReduce paradigm. They use pseudo-random functions to perform equality checks, thus allowing join operations when common components are detected. All the solutions described above are evaluated and their security proven.
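The additive homomorphism that makes the secure matrix products possible is easy to demonstrate; the sketch below uses the python-paillier (`phe`) package, which is an assumption of this example rather than the thesis' own code:

```python
# Paillier is additively homomorphic: sums and scalar products can be
# computed on ciphertexts, which is what the MapReduce nodes exploit.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()
a, b = public_key.encrypt(3), public_key.encrypt(4)

c = a + b   # Enc(3 + 4), computed without decrypting
d = a * 5   # Enc(3 * 5), scalar known in clear
assert private_key.decrypt(c) == 7 and private_key.decrypt(d) == 15
```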
Villebonnet, Violaine. « Scheduling and Dynamic Provisioning for Energy Proportional Heterogeneous Infrastructures ». Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEN057/document.
The increasing number of data centers raises serious concerns regarding their energy consumption. These infrastructures are often over-provisioned and contain servers that are not fully utilized. The problem is that inactive servers can consume as much as 50% of their peak power consumption. This thesis proposes a novel approach for building data centers so that their energy consumption is proportional to the actual load. We propose an original infrastructure named BML, for "Big, Medium, Little", composed of heterogeneous computing resources: from low-power processors to classical servers. The idea is to take advantage of their different characteristics in terms of energy consumption, performance, and switch-on reactivity to adjust the composition of the infrastructure according to load evolutions. We define a generic methodology to compute the most energy-proportional combinations of machines based on hardware profiling data. We focus on web applications whose load varies over time and design a scheduler that dynamically reconfigures the infrastructure, with application migrations and machine switch-ons and switch-offs, to minimize the infrastructure's energy consumption according to the current application requirements. We have developed two dynamic provisioning algorithms which take into account the time and energy overheads of the different reconfiguration actions in the decision process. We demonstrate through simulations based on experimentally acquired hardware profiles that we achieve important energy savings compared to classical data center infrastructures and management.
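A toy version of the combination search conveys the methodology (the machine profiles below are invented, not measured ones):

```python
# Enumerate mixes of heterogeneous machines and keep the cheapest one, in
# watts, whose aggregate capacity covers the current load.
from itertools import product

profiles = {"little": (1, 5), "medium": (4, 25), "big": (16, 90)}  # (capacity, W)

def best_combo(load, max_each=4):
    best = None
    for counts in product(range(max_each + 1), repeat=len(profiles)):
        cap = sum(n * c for n, (c, _) in zip(counts, profiles.values()))
        watts = sum(n * w for n, (_, w) in zip(counts, profiles.values()))
        if cap >= load and (best is None or watts < best[0]):
            best = (watts, dict(zip(profiles, counts)))
    return best

print(best_combo(10))  # -> (60, {'little': 2, 'medium': 2, 'big': 0})
```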
Riteau, Pierre. « Plates-formes d'exécution dynamiques sur des fédérations de nuages informatiques ». Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00651258.
Ribot, Stephane. « Adoption of Big Data And Cloud Computing Technologies for Large Scale Mobile Traffic Analysis ». Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE3049.
A new economic paradigm is emerging as enterprises generate and manage increasing amounts of data and look to technologies like cloud computing and Big Data to improve data-driven decision making and, ultimately, performance. Mobile service providers are an example of firms looking to monetize the collected mobile data. Our thesis explores the determinants of cloud computing adoption and Big Data adoption at the user level. We employ a quantitative research methodology, operationalized using a cross-sectional survey so that temporal consistency could be maintained for all the variables. The TTF model was supported by results analyzed using partial least squares (PLS) structural equation modeling (SEM), which reflect positive relationships between individual, technology and task factors on TTF for mobile data analysis. Our research makes two contributions: the development of a new TTF construct – a task-Big Data/cloud computing technology fit model – and the testing of that construct in a model that overcomes the rigidness of the original TTF model by addressing technology through five subconstructs related to the technology platform (Big Data) and technology infrastructure (cloud computing intention to use). These findings provide direction to mobile service providers for the implementation of cloud-based Big Data tools in order to enable data-driven decision-making and monetize the output from mobile data traffic analysis.
Le, Nhan Tam. « Ingénierie dirigée par les modèles pour le provisioning d'images de machines virtuelles pour l'informatique en nuage ». Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00926228.
Texte intégralDupont, Simon. « Gestion autonomique de l'élasticité multi-couche des applications dans le Cloud : vers une utilisation efficiente des ressources et des services du Cloud ». Thesis, Nantes, Ecole des Mines, 2016. http://www.theses.fr/2016EMNA0239/document.
Cloud computing, through its layered model and access to its on-demand services, has changed the way infrastructures are managed (IaaS) and software is produced (SaaS). With the advent of IaaS elasticity, the amount of resources can be automatically adjusted according to demand to satisfy a certain level of quality of service (QoS) for customers while minimizing underlying operating costs. The current elasticity model is based on adjusting IaaS resources through basic autoscaling services, which reach their limits in terms of responsiveness and adaptation granularity. Although it is an essential feature of Cloud computing, elasticity remains poorly equipped, which prevents the various actors of the Cloud from really enjoying its benefits. In this thesis, we propose to extend the concept of elasticity to the higher layers of the Cloud, and more precisely to the SaaS level. We present the new concept of software elasticity, defined as the ability of the software to adapt, ideally autonomously, to cope with workload changes and/or limitations of IaaS elasticity. This calls for considering Cloud elasticity in a multi-layer way, through the adaptation of all kinds of Cloud resources. To this end, we present a model for the autonomic management of multi-layer elasticity and the associated framework, ElaStuff. In order to equip and industrialize the elasticity management process, we propose the perCEPtion monitoring tool, based on complex event processing, which enables administrators to set up advanced observation of the Cloud system. In addition, we propose a domain-specific language (DSL) for multi-layer elasticity, called ElaScript, which allows reconfiguration plans orchestrating the different levels of elasticity actions to be expressed simply and effectively. Finally, our proposal to extend Cloud elasticity to higher layers, particularly to SaaS, is validated experimentally from several perspectives (QoS, energy, responsiveness and accuracy of the scaling, etc.).
Antony, Geo Johns. « Cheops reloaded, further steps in decoupling geo-distribution from application business logic : a focus on externalised sharding collaboration, consistency and dependency ». Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2024. http://www.theses.fr/2024IMTA0441.
The shift from centralized cloud computing to geo-distributed applications is critical for meeting modern demands for low-latency, highly available, and resilient services. However, existing geo-distribution solutions often require intrusive modifications to application code. My thesis extends the Cheops framework, a middleware that decouples geo-distribution from application logic, offering a non-intrusive and generic solution for deploying an application across geographically distributed instances. Building on the Cheops principles of "local-first" and "collaborative-then", my research introduces Cross, a shard collaboration mechanism for partitioning resources across sites, and a new approach to decoupling consistency from the application logic, ensuring synchronization between instances. Additionally, dependency management guarantees that operations performed on one instance are reproducible across geo-distributed instances, maintaining the illusion of a unified, single-site application. Cheops uses Scope-lang, a Domain-Specific Language (DSL), to facilitate this without altering application logic. This extension of Cheops further enhances the separation of geo-distribution from the application business logic.
Malvaut-Martiarena, Willy. « Vers une architecture pair-à-pair pour l'informatique dans le nuage ». Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENM044/document.
With the emergence of Cloud computing, a new trend is to externalize computing tasks in order to decrease costs and increase flexibility. Current Cloud infrastructures rely on large-scale centralized data centers for computing resource provisioning. In this thesis, we study the possibility of providing a peer-to-peer based Cloud infrastructure, which is totally decentralized and can be deployed on any federation of computing nodes. We focus on the node allocation problem and present Salute, a node allocation service that organizes nodes in unstructured overlay networks and relies on mechanisms to predict node availability in order to ensure, with high probability, that allocation requests will be satisfied over time, despite churn. Salute's implementation relies on the collaboration of several peer-to-peer protocols belonging to the category of epidemic protocols. To support our claims, we evaluate Salute using real traces.
Ariyattu, Resmi. « Towards federated social infrastructures for plug-based decentralized social networks ». Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S031/document.
In this thesis, we address two issues in the area of decentralized distributed systems: network-aware overlays and collaborative editing. Even though network overlays have been extensively studied, most solutions either ignore the underlying physical network topology, or use mechanisms that are specific to a given platform or application. This is problematic, as the performance of an overlay network strongly depends on the way its logical topology exploits the underlying physical network. To address this problem, we propose Fluidify, a decentralized mechanism for deploying an overlay network on top of a physical infrastructure while maximizing network locality. Fluidify uses a dual strategy that exploits both the logical links of an overlay and the physical topology of its underlying network to progressively align one with the other. The resulting protocol is generic, efficient, scalable and can substantially improve network overheads and latency in overlay-based systems. The second issue that we address focuses on collaborative editing platforms. Distributed collaborative editors allow several remote users to contribute concurrently to the same document. Only a limited number of concurrent users can be supported by the currently deployed editors. A number of peer-to-peer solutions have therefore been proposed to remove this limitation and allow a large number of users to work collaboratively. These decentralized solutions assume, however, that all users are editing the same set of documents, which is unlikely to be the case. To open the path towards more flexible decentralized collaborative editors, we present Filament, a decentralized cohort-construction protocol adapted to the needs of large-scale collaborative editors. Filament eliminates the need for any intermediate DHT, and allows nodes editing the same document to find each other in a rapid, efficient and robust manner by generating an adaptive routing field around themselves. Filament's architecture hinges on a set of collaborating self-organizing overlays that utilize the semantic relations between peers. The resulting protocol is efficient, scalable and provides beneficial load-balancing properties over the involved peers.
Mechouche, Jeremy. « Gérer et assurer la qualité de services de ressources dans un environnement multi-cloud ». Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAS018.
As cloud computing continues to grow, new client needs arise that surpass the capabilities of a single provider. Naturally, clients have become interested in consuming services from multiple providers. This paradigm brings a number of advantages. However, multi-cloud raises challenges inherent to the multiplicity of providers and the heterogeneity of their services. Moreover, managing the quality of service, and maintaining and monitoring service level objectives, is made more complex by the distributed nature of the multi-cloud context and the dependencies between different components. To overcome these difficulties, in this thesis we aim to (1) propose a model for dynamic multi-cloud SLA description, (2) propose a process for pre-implementation consistency validation of dynamic multi-cloud SLAs, and (3) propose a process for post-implementation verification of dynamic multi-cloud SLAs. To achieve the first objective, we propose a description model composed of a global-SLA representing global objectives for the multi-cloud system, sub-SLAs representing local objectives at the component level, and a state machine formalizing the reconfiguration of cloud services to address the dynamic aspect. To achieve the second objective, we propose a pre-implementation consistency verification of dynamic multi-cloud SLAs that identifies and reports inconsistent SLOs. The verification proceeds in two steps: (1) between the global-SLAs and the sub-SLAs, based on an SLO aggregation method, and (2) between the sub-SLAs and the reconfiguration strategies, based on an SLA translation technique. The third objective focuses on reporting on the dynamic multi-cloud SLA after its implementation by cloud service providers. This contribution is based on process mining techniques, including the collection and pre-processing of event logs produced during the implementation, in order to compare them with the established dynamic multi-cloud SLA and report any SLA violations to cloud architects. We evaluate this step with event logs collected from the 3 largest cloud service providers: AWS, GCP, and Azure.
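The SLO aggregation step of the consistency check can be illustrated with availability objectives (numbers invented): for serially composed components, end-to-end availability is the product of the per-component SLOs, so a global SLA promising more than that product is flagged as inconsistent.

```python
# Aggregate per-component availability SLOs and compare with the global SLO.
from math import prod

sub_slas = [0.999, 0.9995, 0.999]   # availability promised by each sub-SLA
global_slo = 0.999                  # availability promised by the global SLA

achieved = prod(sub_slas)           # ~0.9975 for the serial composition
status = "consistent" if achieved >= global_slo else "inconsistent"
print(f"{achieved:.4f} -> {status}")  # -> 0.9975 -> inconsistent
```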
Aguiari, Davide. « Exploring Computing Continuum in IoT Systems : sensing, communicating and processing at the Network Edge ». Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS131.
As the Internet of Things (IoT), originally comprising only a few simple sensing devices, reaches 34 billion units by the end of 2020, these devices can no longer be defined as mere monitoring sensors. IoT capabilities have improved in recent years, as relatively large internal computation and storage capacity are becoming a commodity. In the early days of IoT, processing and storage were typically performed in the cloud. New IoT architectures are able to perform complex tasks directly on-device, thus enabling the concept of an extended computational continuum. Real-time critical scenarios, e.g. autonomous vehicle sensing, area surveying or disaster rescue and recovery, require all the actors involved to coordinate and collaborate without human interaction towards a common goal, sharing data and resources, even in areas with intermittent network coverage. This poses new problems in distributed systems, resource management, device orchestration, as well as data processing. This work proposes a new orchestration and communication framework, namely CContinuum, designed to manage resources in heterogeneous IoT architectures across multiple application scenarios. This work focuses on two key sustainability macro-scenarios: (a) environmental sensing and awareness, and (b) electric mobility support. In the first case, a mechanism to measure air quality over a long period of time for different applications at global scale (3 continents, 4 countries) is introduced. The system has been developed in-house, from the sensor design to the mist-computing operations performed by the nodes. In the second scenario, a technique to transmit large amounts of battery data at fine time granularity from a moving vehicle to a control center is proposed, jointly with the ability to allocate tasks on demand within the computing continuum.
Costache, Stefania. « Market-based autonomous and elastic application execution on clouds ». Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00963824.
Carminati, Federico. « Conception, réalisation et exploitation du traitement de données de l’expérience ALICE pour la simulation, la reconstruction et l’analyse ». Nantes, 2013. http://archive.bu.univ-nantes.fr/pollux/show.action?id=0ed58585-b62e-40b5-8849-710d1e15c6c2.
The ALICE experiment (A Large Ion Collider Experiment) at the CERN (Conseil Européen pour la Recherche Nucléaire) LHC (Large Hadron Collider) facility uses an integrated software framework for the design of the experimental apparatus, the evaluation of its performance and the processing of the experimental data. Federico Carminati designed this framework. It includes the event generators and the algorithms for particle transport describing the details of particle-matter interactions (designed and implemented by Federico Carminati), the reconstruction of the particle trajectories and the final physics analysis.
Paraiso, Fawaz. « soCloud : une plateforme multi-nuages distribuée pour la conception, le déploiement et l'exécution d'applications distribuées à large échelle ». Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2014. http://tel.archives-ouvertes.fr/tel-01009918.
Iwaszko, Thomas. « Généralisation du diagramme de Voronoï et placement de formes géométriques complexes dans un nuage de points ». Phd thesis, Université de Haute Alsace - Mulhouse, 2012. http://tel.archives-ouvertes.fr/tel-01005212.
Texte intégralLev, Hoang Justin. « A Study of 3D Point Cloud Features for Shape Retrieval ». Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM040.
With the improvement and proliferation of 3D sensors, price cuts and the enhancement of computational power, the usage of 3D data has intensified over the last few years. The 3D point cloud is one type of 3D representation amongst others. This particular representation is the direct output of sensors, accurate and simple. As a non-regular structure made of an unordered list of points, the analysis of point clouds is challenging, hence their only recent usage. This PhD thesis focuses on the use of the 3D point cloud representation for three-dimensional shape analysis. More particularly, the geometrical shape is studied through the curvature of the object. Descriptors describing the distribution of the principal curvature are proposed: Principal Curvature Point Cloud and Multi-Scale Principal Curvature Point Cloud. Global Local Point Cloud is another descriptor using the curvature, but in combination with other features. These three descriptors are robust to typical 3D scan errors like noisy data or occlusion. They outperform state-of-the-art algorithms in the instance retrieval task with more than 90% accuracy. The thesis also studies deep learning on 3D point clouds, which emerged during the three years of this PhD. The first approach tested used a curvature-based descriptor as the input of a multi-layer perceptron network. Its accuracy does not match state-of-the-art performance. However, the experiments show that ModelNet, the standard dataset for 3D shape classification, is not a good picture of reality. Indeed, they show that the dataset does not reflect the curvature wealth of true object scans. Finally, a new neural network architecture is proposed. Inspired by state-of-the-art deep learning networks, Multiscale PointNet computes features at multiple scales and combines them all to describe an object. Still under development, its performance remains to be improved. In summary, tackling the challenging use of 3D point clouds as well as the quick evolution of the field, the thesis contributes to the state of the art in three major aspects: (i) the design of new algorithms relying on the geometrical curvature of the object for the instance retrieval task; (ii) the study and demonstration of the need to build a new standard classification dataset with more realistic objects; (iii) the proposition of a new deep neural network for 3D point cloud analysis.
Aubé, Antoine. « Comprendre la migration infonuagique : exigences et estimation des coûts d'exploitation ». Electronic Thesis or Diss., Toulouse, ISAE, 2024. http://www.theses.fr/2024ESAE0021.
Selecting a host for an information system is a major decision for any organization. Indeed, such a decision has consequences for many aspects, such as operating costs, the allocation of manpower to operations, and the quality of service provided by the information system. While hosting was traditionally carried out by the organizations themselves, on their premises, the emergence of third-party hosting providers initiated a change in practices: a migration of information systems to the infrastructure of another organization. Cloud Computing is such a model for delegating infrastructure to a third party. The latter provides a cloud, which is a set of configurable services delivering computing resources. In this context of cloud migration, new issues emerge in selecting information system hosts, named "cloud environments". In particular, problems related to the recurrence and variety of operational costs are well known in the industry. In this research work, we aim to identify the criteria for selecting a cloud environment during a migration, and how we can evaluate the compliance of a cloud environment with the requirements linked to these criteria. To this end, we first carried out a qualitative study with industry experts to identify the most recurrent requirements on cloud environments in the industry. Then, we focused on estimating the operational costs of these environments, which are frequently mentioned as a criterion to be minimized, and which are often misunderstood given the variety of pricing schemes in Cloud Computing. We therefore developed a conceptual model for estimating these costs, and then a tool that implements this conceptual model to automate the estimation.
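As a minimal flavor of such a cost model (all prices and usage figures below are invented, and real cloud pricing has many more dimensions):

```python
# Monthly operating cost as a sum of priced usage dimensions.
PRICES = {"vm_hour": 0.08, "gb_month": 0.02, "gb_egress": 0.09}

def monthly_cost(vm_count, hours_per_vm, storage_gb, egress_gb):
    return (vm_count * hours_per_vm * PRICES["vm_hour"]
            + storage_gb * PRICES["gb_month"]
            + egress_gb * PRICES["gb_egress"])

print(f"{monthly_cost(vm_count=4, hours_per_vm=730, storage_gb=500, egress_gb=200):.2f}")
# -> 261.60
```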
Ahmed-Nacer, Mehdi. « Méthodologie d'évaluation pour les types de données répliqués ». Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0039.
To provide high availability from anywhere, at any time, and with low latency, data is optimistically replicated. This model allows any replica to apply updates locally, while the operations are later sent to all the others. In this way, all replicas eventually apply all updates, possibly even in a different order. Optimistic replication algorithms are responsible for managing the concurrent modifications and ensuring the consistency of the shared object. In this thesis, we present an evaluation methodology for optimistic replication algorithms. The context of our study is collaborative editing. We designed a tool that implements our methodology. This tool integrates a mechanism to generate a corpus and a simulator to simulate sessions of collaborative editing. Through this tool, we ran several experiments on two different corpora: synchronous and asynchronous. In synchronous collaboration, we evaluate the performance of optimistic replication algorithms on several criteria such as execution time, memory occupation, message size, etc. After analysis, some improvements were proposed. In asynchronous collaboration, when replicas synchronize their modifications, more conflicts can appear in the document. In this case, the system cannot merge the modifications until a user resolves them. In order to reduce the conflicts and the user's effort, we propose an evaluation metric and we evaluate the different algorithms on this metric. Afterwards, we analyze the quality of the merge to understand the behavior of the users and the collaboration cases that create conflicts. Then, we propose algorithms for resolving the most important conflicts, therefore reducing the user's effort. Finally, we propose a new architecture for supporting cloud-based collaborative editing systems. This architecture is based on two optimistic replication algorithms. Unlike current architectures, the proposed one removes the problems of centralization and consensus between data centers, and is simple and accessible to any developer.
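A miniature example of the optimistic model: replicas accept updates locally and converge when they later exchange them. The merge rule below is simple last-writer-wins, much cruder than the algorithms evaluated in the thesis, but it shows the update/propagate/converge cycle:

```python
# Each replica applies updates immediately; on merge, the higher
# (clock, replica-id) stamp wins, so all replicas converge deterministically.
class Replica:
    def __init__(self, rid):
        self.rid, self.value, self.stamp = rid, None, (0, rid)

    def update(self, value, clock):
        self.value, self.stamp = value, (clock, self.rid)

    def merge(self, other):
        if other.stamp > self.stamp:
            self.value, self.stamp = other.value, other.stamp

r1, r2 = Replica(1), Replica(2)
r1.update("hello", clock=1)      # concurrent local updates...
r2.update("world", clock=1)
r1.merge(r2); r2.merge(r1)       # ...later exchanged between replicas
assert r1.value == r2.value == "world"   # tie broken by replica id
```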
Loudet, Julien. « Distributed and Privacy-Preserving Personal Queries on Personal Clouds ». Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLV067/document.
In a context where we produce more and more personal data and control less and less how and by whom they are used, a new way of managing them is on the rise: the "personal cloud". In partnership with the French start-up Cozy Cloud (https://cozy.io), which is developing such a technology, we propose through this work a way of collaboratively querying personal clouds while preserving the privacy of their users. We detail in this thesis three contributions to achieve this objective: (1) a set of four requirements any protocol has to respect in this particular context: imposed randomness, to prevent an attacker from influencing the execution of a query; knowledge dispersion, to prevent any node from concentrating information; task atomicity, to split the execution into as many independent tasks as necessary; and hidden communications, to protect the identity of the participants as well as the content of their communications; (2) SEP2P, a protocol leveraging a distributed hash table and CSAR, another protocol that generates a verifiable random number, in order to generate a random and verifiable list of actors in accordance with the first requirement; and (3) DISPERS, a protocol that applies the last three requirements and splits the execution of a query so as to minimize the impact of a leakage (in case an attacker was selected as an actor) by providing each actor with the minimum amount of information it needs to execute its task.
Poreba, Martyna. « Qualification et amélioration de la précision de systèmes de balayage laser mobiles par extraction d'arêtes ». Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2014. http://pastel.archives-ouvertes.fr/pastel-01068828.
Texte intégral
Li, Yanhuang. « Interoperability and Negotiation of Security Policies ». Thesis, Télécom Bretagne, 2016. http://www.theses.fr/2016TELB0414/document.
Texte intégralA security policy provides a way to define constraints on the behavior of the members of a system, organization or other entity. With the development of IT technologies such as Grid Computing and Cloud Computing, more and more applications and platforms exchange data and services in order to cooperate. Given this trend, security becomes an important issue and security policies have to be applied in order to ensure the safety of data and service interactions. In this thesis, we deal with one type of security policy: access control policies. Access control policies protect the privileges of resource utilization, and different policy models exist for various scenarios. Our goal is to ensure that the service customer expresses her security requirements well and chooses the service providers that fit these requirements. The first part of this dissertation is dedicated to service provider selection. In the case where the security policies of the service provider are accessible to the service customer, we provide a method for measuring the similarity between security policies. In the case where security policies are not accessible to the service customer or not specified explicitly, our solution is a policy-based framework which enables the derivation of concrete security policies from attribute-based security requirements. The second part of the dissertation focuses on security policy negotiation. We investigate the process of reaching agreement through a bargaining process in which negotiators exchange their offers and counter-offers step by step. A positive result of the negotiation generates a policy contract.
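The abstract does not detail the similarity measure. Below is a minimal sketch under the assumption that each access control policy can be flattened into a set of (subject, action, resource) rules, scoring similarity as the Jaccard index of the two rule sets; both the rule representation and the metric are illustrative choices, not the method of the thesis.

```python
# Hypothetical rule representation: a policy as a set of
# (subject, action, resource) triples.
def jaccard_similarity(policy_a: set, policy_b: set) -> float:
    """Jaccard index of two rule sets: |A & B| / |A | B|."""
    if not policy_a and not policy_b:
        return 1.0
    return len(policy_a & policy_b) / len(policy_a | policy_b)

provider = {("doctor", "read", "record"), ("admin", "write", "record")}
customer = {("doctor", "read", "record"), ("nurse", "read", "record")}
print(jaccard_similarity(provider, customer))  # 1 shared rule out of 3
```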
Ahmed-Nacer, Mehdi. « Méthodologie d'évaluation pour les types de données répliqués ». Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0039/document.
Texte intégralTo provide high availability from anywhere, at any time, with low latency, data is optimistically replicated. This model allows any replica to apply updates locally, while the operations are later sent to all the others. In this way, all replicas eventually apply all updates, possibly in different orders. Optimistic replication algorithms are responsible for managing the concurrent modifications and ensuring the consistency of the shared object. In this thesis, we present an evaluation methodology for optimistic replication algorithms. The context of our study is collaborative editing. We designed a tool that implements our methodology. This tool integrates a mechanism to generate a corpus and a simulator to simulate sessions of collaborative editing. Using this tool, we ran several experiments on two different corpora: synchronous and asynchronous. In synchronous collaboration, we evaluate the performance of optimistic replication algorithms following several criteria such as execution time, memory occupation, message size, etc. After analysis, some improvements were proposed. In addition, in asynchronous collaboration, when replicas synchronize their modifications, more conflicts can appear in the document. In this case, the system cannot merge the modifications until a user resolves them. In order to reduce the conflicts and the user's effort, we propose an evaluation metric and evaluate the different algorithms on this metric. Afterwards, we analyze the quality of the merge to understand the behavior of the users and the collaboration cases that create conflicts. Then, we propose algorithms for resolving the most important conflicts, thereby reducing the user's effort. Finally, we propose a new architecture for supporting cloud-based collaborative editing systems. This architecture is based on two optimistic replication algorithms. Unlike current architectures, the proposed one removes the problems of centralization and consensus between data centers, and is simple and accessible to any developer.
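As a rough illustration of the evaluation setting (not the thesis's tool), the sketch below replays the same operation log on two replicas in different orders and reports execution time and a convergence check, the kind of criteria mentioned above. It uses a grow-only set, a trivially convergent replicated type, purely to keep the harness short; the algorithms actually evaluated in the thesis are more elaborate.

```python
import random
import time

class GSetReplica:
    """Grow-only set: a trivially convergent replicated data type,
    used here only to exercise the evaluation loop."""
    def __init__(self):
        self.state = set()

    def apply(self, op):
        self.state.add(op)

def run_session(ops, seed):
    """Apply the same operations in a random order, timing the run."""
    replica = GSetReplica()
    shuffled = ops[:]
    random.Random(seed).shuffle(shuffled)
    start = time.perf_counter()
    for op in shuffled:
        replica.apply(op)
    return replica.state, time.perf_counter() - start

ops = [f"insert-{i}" for i in range(10_000)]
state_a, t_a = run_session(ops, seed=1)
state_b, t_b = run_session(ops, seed=2)
print(f"converged: {state_a == state_b}, times: {t_a:.4f}s / {t_b:.4f}s")
```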
Quach, Maurice. « Deep learning-based Point Cloud Compression ». Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG051.
Texte intégralPoint clouds are becoming essential in key applications, with advances in capture technologies leading to large volumes of data. Compression is thus essential for storage and transmission. Point cloud compression can be divided into two parts: geometry and attribute compression. In addition, point cloud quality assessment is necessary in order to evaluate point cloud compression methods. Geometry compression, attribute compression and quality assessment form the three main parts of this dissertation. The common challenge across these three problems is the sparsity and irregularity of point clouds. Indeed, while other modalities such as images lie on a regular grid, point cloud geometry can be considered as a sparse binary signal over 3D space, and attributes are defined on the geometry, which can be both sparse and irregular. First, the state of the art for geometry and attribute compression methods, with a focus on deep learning-based approaches, is reviewed. The challenges faced when compressing geometry and attributes are considered, with an analysis of the current approaches to address them, their limitations and the relations between deep learning-based and traditional approaches. We present our work on geometry compression: a convolutional lossy geometry compression approach with a study on the key performance factors of such methods, and a generative model for lossless geometry compression with a multiscale variant addressing its complexity issues. Then, we present a folding-based approach for attribute compression that learns a mapping from the point cloud to a 2D grid in order to reduce point cloud attribute compression to an image compression problem. Furthermore, we propose a differentiable deep perceptual quality metric that can be used to train lossy point cloud geometry compression networks while being well correlated with perceived visual quality, and a convolutional neural network for point cloud quality assessment based on a patch extraction approach. Finally, we conclude the dissertation and discuss open questions in point cloud compression, existing solutions and perspectives. We highlight the link between existing point cloud compression research and research problems in relevant adjacent fields, such as rendering in computer graphics, mesh compression and point cloud quality assessment.
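The remark that geometry "can be considered as a sparse binary signal over 3D space" is the starting point of most learned codecs. Below is a minimal sketch, assuming a numpy point cloud, of the voxelization that turns raw points into the binary occupancy grid a convolutional model consumes; this is generic preprocessing, not the exact pipeline of the thesis.

```python
import numpy as np

def voxelize(points: np.ndarray, resolution: int = 64) -> np.ndarray:
    """Map an (N, 3) float point cloud to a binary occupancy grid.

    The grid is the sparse binary signal over 3D space mentioned
    above; convolutional codecs typically operate on such grids.
    """
    mins = points.min(axis=0)
    extent = points.max(axis=0) - mins
    extent[extent == 0] = 1.0                      # avoid division by zero
    idx = ((points - mins) / extent * (resolution - 1)).astype(int)
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

cloud = np.random.rand(5000, 3).astype(np.float32)
grid = voxelize(cloud)
print(grid.shape, int(grid.sum()), "occupied voxels")  # sparsity is visible
```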
Iordache, Ancuta. « Performance-cost trade-offs in heterogeneous clouds ». Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S045/document.
Texte intégralCloud infrastructures provide on-demand access to a large variety of computing devices with different performance and cost. This creates many opportunities for cloud users to run applications with complex resource requirements, ranging from large numbers of servers with low-latency interconnects to specialized devices such as GPUs and FPGAs. User expectations regarding the execution of applications may vary between the fastest possible execution, the cheapest execution, or any trade-off between the two extremes. However, enabling cloud users to easily make performance-cost trade-offs is not a trivial exercise, and choosing the right amount and type of resources to run applications according to user expectations is very difficult. This thesis proposes three contributions to enable performance-cost trade-offs for application execution in heterogeneous clouds by following two directions: making good use of resources and making good choices of resources. As a first contribution, we propose a method to share FPGA-based accelerators in cloud infrastructures, with the objective of improving their utilization. As a second contribution, we propose profiling methods to automate the selection of heterogeneous resources for executing applications under user objectives. Finally, we demonstrate how these technologies can be implemented and exploited in heterogeneous cloud platforms.
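As an illustration of "making good choices of resources", the sketch below picks, from profiled measurements, the cheapest configuration meeting a user deadline and the fastest within a budget. The configuration names and figures are invented for the example; the thesis's profiling methods are more involved.

```python
# Hypothetical profiling results: (name, runtime in s, cost in $).
profiles = [
    ("4x small VMs",     3600, 0.48),
    ("1x GPU instance",   900, 0.90),
    ("2x large VMs",     1800, 0.60),
    ("1x FPGA instance", 1200, 0.75),
]

def cheapest_within_deadline(profiles, deadline_s):
    ok = [p for p in profiles if p[1] <= deadline_s]
    return min(ok, key=lambda p: p[2]) if ok else None

def fastest_within_budget(profiles, budget):
    ok = [p for p in profiles if p[2] <= budget]
    return min(ok, key=lambda p: p[1]) if ok else None

print(cheapest_within_deadline(profiles, deadline_s=2000))  # 2x large VMs
print(fastest_within_budget(profiles, budget=0.80))         # 1x FPGA instance
```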
Sangroya, Amit. « Towards dependability and performance benchmarking for cloud computing services ». Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM016/document.
Texte intégralCloud computing models are attractive because of various benefits such as scalability, cost and flexibility to develop new software applications. However, availability, reliability, performance and security challenges are still not fully addressed. Dependability is an important issue for the customers of cloud computing, who want guarantees in terms of reliability and availability. Many studies have investigated the dependability and performance of cloud services, ranging from job scheduling to data placement and replication, and from adaptive and on-demand fault-tolerance to new fault-tolerance models. However, the ad-hoc and overly simplified settings used to evaluate most cloud service fault-tolerance and performance improvement solutions pose significant challenges to the analysis and comparison of the effectiveness of these solutions. This thesis addresses precisely this problem and presents a benchmarking approach for evaluating the dependability and performance of cloud services. Designing dependability and performance benchmarks for a cloud service is a particular challenge because of the high complexity and the large amount of data processed by such a service. Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) are the three well-defined models of cloud computing. In this thesis, we focus on the PaaS model of cloud computing. The PaaS model enables operating systems and middleware services to be delivered from a managed source over a network. We introduce a generic benchmarking architecture which is further used to build dependability and performance benchmarks for the PaaS model of cloud services.
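A minimal sketch of the fault-injection idea behind such a benchmark: run a simulated workload, inject faults at a configurable rate, and report dependability metrics. The fault model (each faulted request is retried once, doubling its latency) and the metrics are illustrative assumptions, not the benchmark architecture of the thesis.

```python
import random

def run_benchmark(n_requests: int, fault_rate: float, seed: int = 42):
    """Simulated workload with fault injection; reports the fault
    ratio and the mean latency including the retry penalty."""
    rng = random.Random(seed)
    base_latency_ms = 20.0
    faults, total_latency = 0, 0.0
    for _ in range(n_requests):
        if rng.random() < fault_rate:              # injected fault
            faults += 1
            total_latency += 2 * base_latency_ms   # one retry succeeds
        else:
            total_latency += base_latency_ms
    return {"fault_ratio": faults / n_requests,
            "mean_latency_ms": total_latency / n_requests}

print(run_benchmark(n_requests=100_000, fault_rate=0.01))
```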
Bentounsi, Mohamed el Mehdi. « Les processus métiers en tant que services - BPaaS : sécurisation des données et des services ». Electronic Thesis or Diss., Sorbonne Paris Cité, 2015. https://wo.app.u-paris.fr/cgi-bin/WebObjects/TheseWeb.woa/wa/show?t=1698&f=9066.
Texte intégralCloud computing has become one of the fastest growing segments of the IT industry. In such open distributed computing environments, security is of paramount concern. This thesis aims at developing protocols and techniques for the private and reliable outsourcing of design and compute-intensive tasks on cloud computing infrastructures. The thesis enables clients with limited processing capabilities to use the dynamic, cost-effective and powerful cloud computing resources, while having guarantees that their confidential data and services, and the results of their computations, will not be compromised by untrusted cloud service providers. The thesis contributes to the general area of cloud computing security by working in three directions. First, design by selection is a new capability that permits the design of business processes by reusing fragments in the cloud. For this purpose, we propose an anonymization-based protocol to secure the design of business processes by hiding the provenance of reused fragments. Second, we study two different cases of fragment sharing: biometric authentication and complex event processing. For this purpose, we propose techniques where the client only does work which is linear in the size of its inputs, and the cloud bears all of the super-linear computational burden. Moreover, the cloud's computational burden has the same time complexity as the best known solution to the problem being outsourced. This prevents achieving secure outsourcing by placing a huge additional overhead on the cloud servers. This thesis has been carried out at Université Paris Descartes (LIPADE - diNo research group) and in collaboration with SOMONE under a Cifre contract. The convergence of the research fields of those teams led to the development of this manuscript.
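The claim that the client does only linear work while the cloud bears the super-linear burden can be made concrete with a classic textbook example (chosen for illustration, not a protocol from the thesis): outsourced sorting, where the cloud sorts in O(n log n) and the client verifies the result in O(n) by checking order and multiset equality.

```python
from collections import Counter

def cloud_sort(data):
    """Cloud side: bears the O(n log n) burden."""
    return sorted(data)

def client_verify(original, result):
    """Client side: O(n) checks only -- order and multiset equality."""
    in_order = all(a <= b for a, b in zip(result, result[1:]))
    same_items = Counter(original) == Counter(result)
    return in_order and same_items

data = [5, 3, 8, 1, 9, 3]
result = cloud_sort(data)
assert client_verify(data, result)
print("verified:", result)
```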
Nagrath, Vineet. « Software architectures for cloud robotics : the 5 view Hyperactive Transaction Meta-Model (HTM5) ». Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS005/document.
Texte intégralSoftware development for cloud-connected robotic systems is a complex software engineering endeavour. These systems are often an amalgamation of one or more robotic platforms, standalone computers, mobile devices, server banks, virtual machines, cameras, network elements and ambient intelligence. An agent-oriented approach represents robots and other auxiliary systems as agents in the system. Software development for distributed and diverse systems like cloud robotic systems requires special software modelling processes and tools. Model-driven software development for such complex systems increases flexibility, reusability, cost-effectiveness and the overall quality of the end product. The proposed 5-view meta-model has separate meta-models for specifying structure, relationships, trade, system behaviour and hyperactivity in a cloud robotic system. The thesis describes the anatomy of the 5-view Hyperactive Transaction Meta-Model (HTM5) in computation-independent, platform-independent and platform-specific layers. The thesis also describes a domain-specific language for computation-independent modelling in HTM5. The thesis presents a complete meta-model for agent-oriented cloud robotic systems and several simulated and real experimental projects justifying HTM5 as a feasible meta-model.
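Very loosely, the five views can be pictured as complementary descriptions attached to each agent. The sketch below is a guess at what a computation-independent model instance might hold, one slot per view; it is not HTM5's actual DSL or meta-model, and all field contents are invented.

```python
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    """Toy stand-in for an agent description with one slot per view
    (structure, relationships, trade, behaviour, hyperactivity)."""
    name: str
    structure: dict = field(default_factory=dict)      # components, devices
    relationships: list = field(default_factory=list)  # links to other agents
    trade: dict = field(default_factory=dict)          # offered/consumed services
    behaviour: list = field(default_factory=list)      # reactive rules
    hyperactivity: list = field(default_factory=list)  # agent-initiated transactions

robot = AgentModel(
    name="warehouse-robot-1",
    structure={"platform": "mobile-base", "sensors": ["lidar", "camera"]},
    relationships=["cloud-planner", "warehouse-robot-2"],
    trade={"offers": ["pickup"], "consumes": ["path-planning"]},
)
print(robot.name, "->", robot.trade["consumes"])
```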
Lefray, Arnaud. « Security for Virtualized Distributed Systems : from Modelization to Deployment ». Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL1032/document.
Texte intégralThis thesis deals with security for virtualized distributed environments such as Clouds. In these environments, a client can access resources or services (compute, storage, etc.) on demand without prior knowledge of the underlying infrastructure. These services are low-cost due to the mutualization of resources. As a result, the clients share a common infrastructure. However, the concentration of businesses and critical data makes Clouds more attractive for malicious users, especially when considering new attack vectors between tenants. Nowadays, Cloud providers offer default security or security by design which does not fit tenants' custom needs. This gap allows for multiple attacks (data theft, malicious usage, etc.). In this thesis, we propose a user-centric approach where a tenant models both its security needs, as high-level properties, and its virtualized application. These security objectives are based on a new logic dedicated to expressing system-based information flow properties. Then, we propose a security-aware algorithm to automatically deploy the application and enforce the security properties. The enforcement can be realized by taking shared resources into account during placement decisions and/or through the configuration of existing security mechanisms.
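A minimal sketch of the placement idea, under the simplifying assumption that the security needs boil down to a pairwise no-co-residency constraint between labeled VMs; the thesis enforces richer information-flow properties via a dedicated logic, which this toy greedy placer does not capture.

```python
def place(vms, hosts, conflicts):
    """Greedy security-aware placement: never co-locate two VMs whose
    labels form a pair in the conflicts set (anti-affinity only)."""
    placement = {h: [] for h in hosts}
    for vm, label in vms:
        for host, placed in placement.items():
            if all((label, l2) not in conflicts and (l2, label) not in conflicts
                   for _, l2 in placed):
                placed.append((vm, label))
                break
        else:
            raise RuntimeError(f"no secure host for {vm}")
    return placement

vms = [("vm1", "bank"), ("vm2", "rival-bank"), ("vm3", "bank")]
conflicts = {("bank", "rival-bank")}
print(place(vms, ["h1", "h2"], conflicts))
# vm2 lands on h2: co-residency with the conflicting "bank" label is refused.
```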
Bentounsi, Mohamed el Mehdi. « Les processus métiers en tant que services - BPaaS : sécurisation des données et des services ». Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCB156/document.
Texte intégral