To see the other types of publications on this topic, follow the link: Cloud level.

Dissertations on the topic "Cloud level"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Cloud level".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read the online abstract of the work, provided the relevant parameters are included in its metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Ho, Hon Pong. „Level set implementations on unstructured point cloud /“. View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202004%20HO.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 65-69). Also available in electronic version. Access restricted to campus users.
2

Wood, William J. „Exploring Firm-Level Cloud Adoption and Diffusion“. ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/7776.

Abstract:
Cloud computing innovation adoption literature has primarily focused on individuals, small businesses, and nonprofit organizations. The functional linkage between cloud adoption and diffusion is instrumental toward understanding enterprise firm-level adoption. The purpose of this qualitative collective case study was to explore strategies used by information technology (IT) executives to make advantageous enterprise cloud adoption and diffusion decisions. This study was guided by an integrated diffusion of innovation and technology, organization, and environment conceptual framework to capture and model this complex, multifaceted problem. The study’s population consisted of IT executives with cloud-centric roles in 3 large (revenues greater than $5 billion) telecom-related companies with headquarters in the United States. Data collection included semistructured, individual interviews (n = 19) and the analysis of publicly available financial documents (n = 50) and organizational technical documents (n = 41). Data triangulation and interviewee member checking were used to increase the validity of the study findings. Inter- and intracase analyses, using open and axial coding as well as constant comparative methods, were leveraged to identify 5 key themes, namely top management support, information source bias, organizational change management, governance at scale, and service selection. An implication of this study for positive social change is that IT telecom executives might be able to optimize diffusion decisions to benefit downstream consumers in need of services.
3

Li, Bin. „Risk informed service level agreement for cloud brokerage“. Thesis, University of Surrey, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.580347.

Abstract:
The use of distributed computing systems, including Grids and Compute Utilities, and now Clouds, becomes a consideration for businesses hoping to manage start-up costs and times, as well as reduce the physical and environmental footprint of infrastructures. Instead of purchasing and maintaining hardware and software, organisations and individuals can take advantage of pay-per-use (utility) models that relate directly to their requirements for infrastructures, platforms and software. However, such metered services are not yet widely adopted due to the lack of assurance of Quality of Service (QoS). It is suggested that such systems will only attain greater acceptance by a larger audience of commercial consumers if binding Service Level Agreements (SLAs) are provided that encompass service descriptions, costs of provision and, importantly, assurances on availability, performance, and liability. Prediction, quantification of risk, and consideration of liability in case of underperformance are considered essential for the future provision of Computer (Cloud) Economics - in particular, for the provision of SLAs through resource brokers, and generally to be more comparable to financial markets. The principal focus of this thesis is on building brokerage and related services for supporting the growth of Cloud and contributing to future computational economics. A brokerage should provide negotiation mechanisms between consumers and providers, and perhaps manage available computer resources, to realise the goals of both parties. SLAs are key to this, where each SLA details price, risk, performance and QoS parameters, amongst others. This thesis presents a novel approach that supports the creation and management of Service Level Agreements, aimed towards improved uptake of commoditised computational infrastructures, platforms and software services. By analysing issues within current SLAs, it summarises necessary characteristics to be addressed in Cloud SLAs.
Inspired by financial portfolio analysis and in particular by credit derivatives, this work demonstrates how the proposed Cloud Collateralised SLA Obligations (CSO), analogous to synthetic collateralised debt obligations (CDO), can be used to mitigate the risk of failure or underperformance through diversification of compute resource portfolios. The CSO prices risk, integrates into service insurance, builds in penalties, and, in contrast to well-known Cloud price models, relates variable performance to variable price. This performance-price relationship would also be necessary for the appropriate use of other financial models. Through Value-at-Risk (VaR) style analysis, the probability of failure (risk of underperformance) can be related to a confidence level for each SLA offer - the confidence of meeting the SLA. The thesis further identifies how performance tranches support an autonomic aspect in attempting to ensure satisfaction of higher-value SLAs as a trade-off against higher-risk, lower-value SLAs. The approach can readily integrate with any SLA framework that supports real-time dynamic characteristics. Outcomes are broadly relevant to Cloud Computing, and more specifically to Infrastructure as a Service Clouds.
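The abstract's VaR-style link between portfolio diversification and SLA confidence can be illustrated with a minimal sketch, assuming independent, identically distributed resource failures and invented parameters (the thesis's CSO model is considerably richer):

```python
from math import comb

def sla_confidence(n_resources, p_fail, k_required):
    """Probability that at least k_required of n_resources perform,
    assuming independent resources with identical failure probability."""
    return sum(
        comb(n_resources, k) * (1 - p_fail) ** k * p_fail ** (n_resources - k)
        for k in range(k_required, n_resources + 1)
    )

# Diversification effect: the same relative capacity requirement met
# from a larger pool of smaller resources yields higher SLA confidence.
small_pool = sla_confidence(5, 0.10, 4)    # need 4 of 5 resources
large_pool = sla_confidence(50, 0.10, 40)  # need 40 of 50 resources
```

With these invented numbers `small_pool` is about 0.919 while `large_pool` is noticeably higher, which is the intuition behind tranching higher-value SLAs against higher-risk, lower-value ones.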
4

Catela, Miguel Ferreira. „Service level agreement em cloud computing : um estudo de caso“. Master's thesis, Instituto Superior de Economia e Gestão, 2012. http://hdl.handle.net/10400.5/10442.

Abstract:
Master's degree in Business Sciences
Cloud computing is a new business model which assumes that technological resources are used on a pay-as-you-go basis, allowing companies to focus on their core business and turning capital expenditures into operational expenditures. In cloud computing, a Service Level Agreement (SLA) is a document that aims to manage the service provider's and the customer's expectations regarding the quality of service, by measuring and validating the parameters previously negotiated. This case study focuses on answering the following research question: "How to negotiate a Service Level Agreement (SLA) in a cloud computing environment?". A case study was therefore performed at a mid-sized Portuguese company that provides cloud services. Quantitative and qualitative data were collected through a survey of the company's customers, followed by interviews with an administrator (also the company's cloud strategy lead) and with the Service Desk manager. This study contributes a reflection on how an SLA should be structured and what its content should be; shows what companies know about SLAs and which parameters they consider most relevant to their organizations; and discusses how an SLA should be negotiated in a cloud computing environment.
5

Holoubek, Jiří. „Teorie a praxe cloud computingu - analýza výhod a nevýhod přechodu jednotlivce a firmy na cloud“. Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-162593.

Abstract:
My thesis deals with a new phenomenon of the ICT industry -- cloud computing. The thesis is based on the fact that customers' requirements on the cloud may differ; there is a significant difference between the requirements of individuals and institutions (SMBs, large companies or public institutions). In this regard, the thesis differentiates cloud services for private and institutional use. The theoretical part of the thesis covers the definition of basic terms and the history of cloud computing. Further, I define the cloud distribution model and its segmentation according to the method of deployment. Security represents another important aspect, as it is the most important factor in the decision making on the transition to the cloud. Finally, there are other factors such as legal aspects, change of business processes, audit, governance and the future development of cloud computing. The practical part provides a comprehensive analysis of the cloud market offer from cloud providers and its ongoing monitoring. As already mentioned, I differentiate the market offer for private and institutional use. The analysis of the advantages and disadvantages of an individual's and a company's transition to the cloud, and a comparison of the specific requirements of individuals and companies, are further outcomes of this analysis.
6

Deval, Niharika. „Empirical Evaluation of Cloud IAAS Platforms using System-level Benchmarks“. UNF Digital Commons, 2017. https://digitalcommons.unf.edu/etd/765.

Abstract:
Cloud Computing is an emerging paradigm in the field of computing where scalable IT enabled capabilities are delivered ‘as-a-service’ using Internet technology. The Cloud industry adopted three basic types of computing service models based on software level abstraction: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Infrastructure-as-a-Service allows customers to outsource fundamental computing resources such as servers, networking, storage, as well as services where the provider owns and manages the entire infrastructure. This allows customers to only pay for the resources they consume. In a fast-growing IaaS market with multiple cloud platforms offering IaaS services, the user's decision on the selection of the best IaaS platform is quite challenging. Therefore, it is very important for organizations to evaluate and compare the performance of different IaaS cloud platforms in order to minimize cost and maximize performance. Using a vendor-neutral approach, this research focused on four of the top IaaS cloud platforms- Amazon EC2, Microsoft Azure, Google Compute Engine, and Rackspace cloud services. This research compared the performance of IaaS cloud platforms using system-level parameters including server, file I/O, and network. System-level benchmarking provides an objective comparison of the IaaS cloud platforms from performance perspective. Unixbench, Dbench, and Iperf are the system-level benchmarks chosen to test the performance of the server, file I/O, and network respectively. In order to capture the performance variability, the benchmark tests were performed at different time periods on weekdays and weekends. Each IaaS platform's performance was also tested using various parameters. The benchmark tests conducted on different virtual machine (VM) configurations should help cloud users select the best IaaS platform for their needs. 
Also, based on their applications' requirements, cloud users should get a clearer picture of which VM configuration they should choose. In addition to the performance evaluation, the price-per-performance value of all the IaaS cloud platforms was also examined.
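The price-per-performance comparison described above can be sketched as a simple ranking of benchmark score bought per dollar-hour; all scores and prices below are invented placeholders, not measurements from the dissertation:

```python
# Hypothetical benchmark results; scores and hourly prices are invented
# placeholders, not the dissertation's measurements.
results = {
    "Amazon EC2":            {"unixbench": 1450, "usd_per_hour": 0.096},
    "Microsoft Azure":       {"unixbench": 1380, "usd_per_hour": 0.090},
    "Google Compute Engine": {"unixbench": 1520, "usd_per_hour": 0.097},
    "Rackspace":             {"unixbench": 1300, "usd_per_hour": 0.116},
}

def price_per_performance(res):
    """Benchmark score obtained per dollar-hour (higher is better)."""
    return {name: v["unixbench"] / v["usd_per_hour"] for name, v in res.items()}

# Rank platforms from best to worst price-per-performance value.
ranking = sorted(price_per_performance(results).items(),
                 key=lambda kv: kv[1], reverse=True)
```

A real comparison would, as in the study, repeat the benchmarks across VM configurations, weekdays and weekends to capture performance variability before computing such a ratio.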
7

Sporre, Moa. „Human Influence on Marine Low-Level Clouds“. Thesis, Uppsala University, Uppsala University, Department of Earth Sciences, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-105458.

Abstract:

A study of the effect of air-mass origin on marine stratus and stratocumulus clouds has been performed on clouds north of Scandinavia between 2000 and 2004. The aerosol number size distribution of the air masses was obtained from measurements in northern Finland. A trajectory model was used to calculate trajectories to and from the measurement stations. The back trajectories were calculated using the measurement site as receptor to make sure the air masses had the right origin, and forward trajectories were calculated from receptor stations to ensure adequate flow conditions. Satellite data of microphysical cloud parameters from the Moderate Resolution Imaging Spectrometer (MODIS) were downloaded where the trajectories indicated that clouds could be studied, and where the satellite images displayed low-level clouds. The 25% of days with the highest number of aerosols with a diameter over 80 nm (N80) and the 35% with the lowest N80 were used to represent polluted and clean conditions, respectively. After screening trajectories and satellite imagery, 22 cases of clouds with northerly trajectories and low N80 values (i.e. clean) and 25 southerly cases with high N80 values (i.e. polluted) were identified for further analysis.

The average cloud optical thickness (τ) for all polluted pixels was more than twice that of the clean pixels. This can most likely be related to the differences in aerosol concentrations in accordance with the indirect effect, yet some difference in τ caused by different meteorological situations cannot be ruled out. The mean cloud droplet effective radius (aef) was 11.2 µm for the polluted pixels and 15.5 µm for the clean pixels, a difference of 4.3 µm, which clearly demonstrates the effect that increased aerosol numbers have on clouds. A non-linear relationship between aef and N80 was obtained, which indicates that changes at lower aerosol numbers affect aef more than changes at larger aerosol loads. The results from this study also indicate that there is a larger difference in the microphysical cloud parameters between the polluted and clean cases in spring and autumn than in summer.

8

Maeser, Robert K. III. „A Model-Based Framework for Analyzing Cloud Service Provider Trustworthiness and Predicting Cloud Service Level Agreement Performance“. Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10785821.

Abstract:

Analytics firm Cyence estimated Amazon’s four-hour cloud computing outage on February 28, 2017 “cost S&P 500 companies at least $150 million” (Condliffe 2017) and traffic monitoring firm Apica claimed “54 of the top 100 online retailers saw site performance slump by at least 20 percent” (Condliffe 2017). 2015 data center outages cost Fortune 1000 companies between $1.25 and $2.5 billion (Ponemon 2017). Despite potential risks, the cloud computing industry continues to grow. For example, Internet of Things, which is projected to grow 266% between 2013 and 2020 (MacGillivray et al. 2017), will drive increased demand and dependency on cloud computing as data across multiple industries is collected and sent back to cloud data centers for processing. Enterprises continue to increase demand and dependency with 85% having multi-cloud strategies, up from 2016 (RightScale 2017a). This growth and dependency will influence risk exposure and potential for impact (e.g. availability, reliability, performance, security, financial). The research in this Praxis and proposed solution focuses on calculating cloud service provider (CSP) trustworthiness based on cloud service level agreement (SLA) criteria and predicting cloud SLA availability performance for cloud computing services. Evolving industry standards for cloud SLAs (EC 2014, Hunnebeck et al. 2011, ISO/IEC 2016, NIST 2015, Hogben, Giles and Dekker 2012) and existing work regarding CSP trustworthiness (Ghosh, Ghosh and Das 2015, Taha et al. 2014) will be leveraged as the predictive model (using Linear Regression Analysis) is constructed to analyze CSP cloud computing service, SLA performance and CSP trustworthiness.
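The Praxis's use of linear regression analysis to predict cloud SLA availability from trustworthiness criteria can be sketched as a one-predictor ordinary-least-squares fit; the trustworthiness scores and availability figures below are fabricated for illustration only:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Fabricated history: CSP trustworthiness score vs. achieved availability (%).
trust = [0.60, 0.70, 0.80, 0.90]
avail = [99.0, 99.3, 99.6, 99.9]
a, b = fit_line(trust, avail)

# Forecast SLA availability for a provider with trustworthiness 0.85.
predicted = a + b * 0.85
```

A real model, as in the Praxis, would regress on multiple SLA criteria and validate against observed CSP performance rather than a perfectly linear toy series.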

9

Turner, Andrew J. „Input Shaping to Achieve Service Level Objectives in Cloud Computing Environments“. Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/289.

Abstract:
In this thesis we propose a cloud Input Shaper and Dynamic Resource Controller to provide application-level quality of service guarantees in cloud computing environments. The Input Shaper splits the cloud into two areas: one for shaped traffic that achieves quality of service targets, and one for overflow traffic that may not achieve the targets. The Dynamic Resource Controller profiles customers’ applications, then calculates and allocates the resources required by the applications to achieve given quality of service targets. The Input Shaper then shapes the rate of incoming requests to ensure that the applications achieve their quality of service targets based on the amount of allocated resources. To evaluate our system we create a new benchmark application that is suitable for use in cloud computing environments. It is designed to reflect the current design of cloud based applications and can dynamically scale each application tier to handle large and varying workload levels. In addition, the client emulator that drives the benchmark also mimics realistic user behaviors such as browsing from multiple tabs and using JavaScript, and has variable thinking and typing speeds. We show that a cloud management system evaluated using previous benchmarks could violate its estimated quality of service achievement rate by over 20%. The Input Shaper and Dynamic Resource Controller system consists of an application performance modeler, a resource allocator, a decision engine, and an Apache HTTP server module to reshape the rate of incoming web requests. By dynamically allocating resources to applications, we show that their response times can be improved by as much as 30%. Also, the amount of resources required to host applications can be decreased by 20% while achieving quality of service objectives. The Input Shaper can reduce VMs’ resource utilization variances by 88%, and reduce the number of servers by 45%.
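The Input Shaper's admission decision can be reduced to a token-bucket sketch that routes each request either to the shaped (QoS-guaranteed) area or to the overflow (best-effort) area; this is a minimal illustration with invented rates, not the thesis's Apache module, which also profiles applications and allocates resources dynamically:

```python
class InputShaper:
    """Token bucket: requests within the provisioned rate go to the
    'shaped' pool; the rest spill into the 'overflow' pool."""

    def __init__(self, rate, burst):
        self.rate = rate      # sustained requests per second
        self.burst = burst    # maximum burst size
        self.tokens = burst
        self.last = 0.0

    def admit(self, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return "shaped"
        return "overflow"

shaper = InputShaper(rate=2.0, burst=2)   # 2 req/s sustained, burst of 2
arrivals = [0.0, 0.1, 0.2, 1.0, 1.1, 2.5] # request arrival times (seconds)
pools = [shaper.admit(t) for t in arrivals]
```

Only the third request, arriving inside an exhausted burst window, lands in the overflow area; the later, slower arrivals are all shaped.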
10

Lim, Jun Ming Kelvin. „Multi-level secure information sharing between smart cloud systems of systems“. Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/41410.

Abstract:
Approved for public release; distribution is unlimited.
Reissued 1 Jul 2014 with corrections to in-text Figure and Table citations.
There is a need to have secure information sharing in the industry and government sectors. For example, countries within the North Atlantic Treaty Organization (NATO) often have a common goal requiring them to communicate, but they lack a technological platform for fast information sharing, especially if the countries have different access rights to the information. Thus, the same information that an organization wants to share with multiple partners needs to be securely shared at multiple levels. In addition, the manner in which information is shared needs to be flexible enough to accommodate changes on demand, due to the nature of the information or relationship with the sharing organizations. This thesis proposes a configurable, cloud infrastructure that enables multiple layers of secure information sharing between multiple organizations. This thesis follows a systems engineering process to propose a preliminary architecture of such a system, including an analysis of alternatives of some of the attributes of the system. Secondly, the thesis instantiates part of the proposed architecture with a proof-of-concept physical system in a laboratory environment. The proof-of-concept chooses a specific scenario of information sharing that would allow NATO members to access shared data faster, and in a secure fashion, in order to make decisions more quickly with the authorized information.
11

Mourning, Chad L. „Disocclusion Mitigation for Point Cloud Image-Based Level-of-Detail Imposters“. Ohio University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1448379265.

12

Ghumman, Waheed Aslam. „Automation of The SLA Life Cycle in Cloud Computing“. Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-229535.

Abstract:
Cloud computing has become a prominent paradigm for offering on-demand services for software, infrastructure and platforms. Cloud services are contracted through a service level agreement (SLA) between a cloud service provider (CSP) and a cloud service user (CSU), which contains service definitions, quality of service (QoS) parameters, guarantees and obligations. Cloud service providers mostly offer SLAs in a descriptive format which is not directly consumable by a machine or a system. An SLA written in natural language may impede the utility of rapid elasticity in a cloud service. Manual management of SLAs with growing usage of cloud services can be a challenging, error-prone and tedious task, especially for CSUs acquiring multiple cloud services. The necessity of automating the complete SLA life cycle (which includes SLA description in machine-readable format, negotiation, monitoring and management) becomes imminent due to the complex requirements for the precise measurement of QoS parameters. Current approaches to automating the complete SLA life cycle lack standardization, completeness and applicability to cloud services. Automation of the different phases of the SLA life cycle (e.g. negotiation, monitoring and management) depends on the availability of a machine-readable SLA. In this work, a structural specification for SLAs in cloud computing (S3LACC for short) is presented, which is designed specifically for cloud services, covers the complete SLA life cycle and conforms to the available standards. A time-efficient SLA negotiation technique, based on the S3LACC, is developed for concurrently negotiating with multiple CSPs. After a successful negotiation process, the next task in the SLA life cycle is to monitor the cloud services to ensure quality of service according to the agreed SLA. A distributed monitoring approach for cloud SLAs is presented in this work, which is suitable for services being used at single or multiple locations.
The proposed approach reduces the number of SLA-violation communications to a monitoring coordinator by eliminating unnecessary communications. The presented work on complete SLA life cycle automation is evaluated and validated with the help of use cases, experiments and simulations.
13

Kelley, Nancy J. „An investigation into specifying service level agreements for provisioning cloud computing services“. Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/27852.

Abstract:
Within the U.S. Department of Defense (DoD), service level agreements are a widely used tool for acquiring enterprise-level information technology (IT) resources. In order to contain, if not reduce, the total cost of ownership of IT resources to the enterprise, the DoD has undertaken outsourcing its IT needs to Cloud service providers. In this thesis, we explore how service level agreements are specified for non-Cloud-based services, followed by determining how to tailor those practices to specifying service level agreements for Cloud-based service provision, with a focus on end-to-end management of the service-provisioning.
14

Tayarani, Najaran Mahdi. „Transport-level transactions : simple consistency for complex scalable low-latency cloud applications“. Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54520.

Abstract:
The classical move from single-server applications to scalable cloud services is to split the application state along certain dimensions into smaller partitions, each independently absorbable by a separate server in terms of size and load. Maintaining data consistency in the face of operations that cross partition boundaries imposes unwanted complexity on the application. While for most applications many ideal partitioning schemes readily exist, First-Person Shooter (FPS) games and Relational Database Management Systems (RDBMS) are instances of applications whose state can’t be trivially partitioned. For any partitioning scheme there exists an FPS/RDBMS workload that results in frequent cross-partition operations. In this thesis we propose that it is possible and effective to provide unpartitionable applications with a generic communication infrastructure that enforces strong consistency of the application’s data to simplify cross-partition communications. Using this framework the application can use a sub-optimal partitioning mechanism without having to worry about crossing boundaries. We apply our thesis to take a head-on approach at scaling our target applications. We build three scalable systems with competitive performance: a key/value datastore, a system that scales fast-paced FPS games to epic-sized battles of hundreds of players, and a scalable, full-SQL-compliant database that stores tens of millions of items.
15

Hamilton, Howard Gregory. „An Examination of Service Level Agreement Attributes that Influence Cloud Computing Adoption“. NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/53.

Abstract:
Cloud computing is perceived as the technological innovation that will transform future investments in information technology. As cloud services become more ubiquitous, public and private enterprises still grapple with concerns about cloud computing. One such concern is about service level agreements (SLAs) and their appropriateness. While the benefits of using cloud services are well defined, the debate about the challenges that may inhibit the seamless adoption of these services still continues. SLAs are seen as an instrument to help foster adoption. However, cloud computing SLAs are alleged to be ineffective, meaningless, and costly to administer. This could impact widespread acceptance of cloud computing. This research was based on the transaction cost economics theory with focus on uncertainty, asset specificity and transaction cost. SLA uncertainty and SLA asset specificity were introduced by this research and used to determine the technical and non-technical attributes for cloud computing SLAs. A conceptual model, built on the concept of transaction cost economics, was used to highlight the theoretical framework for this research. This study applied a mixed methods sequential exploratory research design to determine SLA attributes that influence the adoption of cloud computing. The research was conducted using two phases. First, interviews with 10 cloud computing experts were done to identify and confirm key SLA attributes. These attributes were then used as the main thematic areas for this study. In the second phase, the output from phase one was used as the input to the development of an instrument which was administered to 97 businesses to determine their perspectives on the cloud computing SLA attributes identified in the first phase. Partial least squares structural equation modelling was used to test for statistical significance of the hypotheses and to validate the theoretical basis of this study. 
Qualitative and quantitative analyses were done on the data to establish a set of attributes considered SLA imperatives for cloud computing adoption.
16

Alsrheed, F. „Contribution to agents-based negotiation and monitoring of cloud computing service level agreement“. Thesis, Liverpool John Moores University, 2014. http://researchonline.ljmu.ac.uk/4339/.

Abstract:
Cloud Computing environments are dynamic and open systems, where cloud providers and consumers frequently join and leave the cloud marketplaces. Due to the increasing number of cloud consumers and providers, it is becoming almost impossible to facilitate face to face meetings to negotiate and establish a Service Level Agreement (SLA); thus automated negotiation is needed to establish SLAs between service providers and consumers with no previous knowledge of each other. In this thesis, I propose, an Automated Cloud Service Level Agreement framework (ACSLA). ACSLA is made up of five stages, and the corresponding software agent components: Gathering, Filtering, Negotiation, SLA Agreement and Monitoring. In the Gathering stage all the information about the providers and what they can offer is gathered. In the Filtering stage the customer’s agent will send the request to ACSLA, which will filter all the providers in order to recommend the best matched candidates. In Negotiation stage the customer’s agent will negotiate separately with each candidate provider using different negotiation algorithms, which will be evaluated and for which recommendations and guidelines will be provided. The output of this stage is that the best outcome from the customer’s perspective will be picked up, which will be the agreed value for each parameter in the SLA. In SLA Agreement stage the provider’s agent and the customer‘s agent will be informed about the Agreement, which will be specified in measurable terms. The output of the SLA Agreement stage will be a list of metrics that can be monitored in the Monitoring stage. Customer’s agent and provider’s agent will also negotiate and agree about the penalties and actions will be taken in case the SLA has been violated and unfulfilled. There is a variety of actions that can be taken, like informing both sides, recommending solutions, self-healing and hot-swapping. 
ACSLA is evaluated using case studies which show its flexibility and effectiveness. ACSLA offers a novel approach to tackling many challenging issues in the current, and likely future, cloud computing market. It is the first complete automated framework for cloud SLAs. There are many automated negotiation algorithms and protocols which have been developed over the years in other research areas; establishing functional solutions applicable to the cloud computing environment is not an easy task. Rubinstein’s Alternating Offers Protocol, also known as the Rubinstein bargaining model, has been investigated for application in automated cloud SLA negotiation, and it offers a satisfactory technical solution for this challenging problem. The purpose of this research was also to apply the state of the art in automated negotiation algorithms/agents within the described Cloud Computing SLA framework, to develop new algorithms, and to evaluate and recommend the most appropriate negotiation approach based on many criteria.
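The alternating-offers model the thesis investigates can be illustrated in a few lines. The sketch below is a generic textbook rendering, not ACSLA's actual algorithm: the discount factors, the geometric concession tactic, and the idea of splitting a normalised surplus over one SLA parameter are all illustrative assumptions.

```python
# Sketch of Rubinstein's alternating-offers bargaining over one SLA
# parameter, modelled as splitting a normalised surplus in [0, 1].
# All parameter values and the concession tactic are illustrative.

def rubinstein_split(delta_provider, delta_customer):
    """Equilibrium share of the first mover (here: the customer),
    per the standard Rubinstein bargaining solution."""
    return (1 - delta_provider) / (1 - delta_provider * delta_customer)

def alternating_offers(delta_provider, delta_customer, rounds=10):
    """Naive concession process: each side starts by claiming the whole
    surplus and concedes geometrically; a deal is struck once the two
    claims become compatible. Returns (round, customer share)."""
    claim_c, claim_p = 1.0, 1.0
    for r in range(rounds):
        if claim_c + claim_p <= 1.0:      # compatible claims: agreement
            return r, claim_c
        if r % 2 == 0:
            claim_c *= delta_customer     # customer concedes
        else:
            claim_p *= delta_provider     # provider concedes
    return rounds, claim_c                # deadline reached, no deal
```

With patient agents (discount factors near 1) the equilibrium split approaches an even share, which is why the protocol is attractive for balancing provider and consumer interests.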
17

Omezzine, Aya. „Automated and dynamic multi-level negotiation framework applied to an efficient cloud provisioning“. Thesis, Toulouse 1, 2017. http://www.theses.fr/2017TOU10060/document.

Cloud provisioning is the process of deploying and managing applications on public cloud infrastructures. It is used increasingly because it enables business providers to focus on their business without having to manage and invest in infrastructure. Cloud provisioning includes two levels of interaction: (1) between end-users and business providers for application provisioning; and (2) between business providers and resource providers for virtual resource provisioning. The cloud market nowadays is a complex environment where business providers need to maximize their monetary profit, and where end-users look for the most efficient services at the lowest prices. With the growth of competition in the cloud, business providers must ensure efficient provisioning that maximizes customer satisfaction and optimizes the provider’s profit. So both providers and users must be satisfied in spite of their conflicting needs. Negotiation is an appealing solution to solve conflicts and bridge the gap between providers’ capabilities and users’ requirements. Intuitively, automated Service Level Agreement (SLA) negotiation helps in reaching an agreement that satisfies both parties. However, to be efficient, automated negotiation should consider the properties of cloud provisioning, mainly the two interaction levels, and the complexities related to dynamicity (e.g., dynamically changing resource availability, dynamic pricing, dynamic market factors related to offers and demands), which greatly impact the success of the negotiation. The main contributions of this thesis, tackling the challenge of multi-level negotiation in a dynamic context, are as follows: (1) We propose a generic negotiator model that considers the dynamic nature of cloud provisioning and its potential impact on the decision-making outcome. Then, we build a multi-layer negotiation framework upon that model by instantiating it among the cloud layers. The framework includes negotiator agents that communicate with the provisioning modules that have an impact on the quality and the price of the service to be provisioned (e.g., the scheduler, the monitor, the market prospector). (2) We propose a bilateral negotiation approach between end-users and business providers extending an existing provisioning approach. The proposed decision-making strategies for negotiation are based on communication with the provisioning modules (the scheduler and the VM provisioner) in order to optimize the business provider’s profit and maximize customer satisfaction. (3) In order to maximize the number of clients, we propose an adaptive and concurrent negotiation approach as an extension of the bilateral negotiation. We propose to harness the workload changes in terms of resource availability and pricing in order to renegotiate simultaneously with multiple non-accepted users (i.e., rejected during the first negotiation session) before the establishment of the SLA. (4) In order to handle any potential SLA violation, we propose a proactive renegotiation approach after SLA establishment. The renegotiation is launched upon detecting an unexpected event (e.g., a resource failure) during the provisioning process. The proposed renegotiation decision-making strategies aim to minimize the loss in profit for the provider and to ensure the continuity of the service for the consumer. The proposed approaches are implemented, and experiments prove the benefits of adding (re)negotiation to the provisioning process. The use of (re)negotiation improves the provider’s profit, the number of accepted requests, and the client’s satisfaction.
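One common way to realise such bilateral negotiation strategies is a time-dependent concession function. The sketch below is a standard tactic assumed for illustration; the parameter names are hypothetical, and the thesis's own strategies additionally consult the scheduler and VM provisioner:

```python
# Time-dependent concession for one negotiated SLA parameter (e.g. price).
# The offer starts at the agent's preferred value and concedes toward its
# reservation value as the negotiation deadline approaches. The polynomial
# shape parameter `beta` is a common modelling choice, not the thesis's own.

def concession(t, deadline, initial, reserve, beta=1.0):
    """Offer at time t: beta < 1 concedes early (conceder), beta > 1
    holds firm until late (boulware), beta == 1 concedes linearly."""
    frac = min(t / deadline, 1.0) ** beta
    return initial + (reserve - initial) * frac
```

For example, a provider asking 100 with a reservation price of 60 over a 10-step deadline offers 100 at the start, 80 halfway, and 60 at the deadline.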
18

Alkandari, Fatima A. A. A. „Model-driven engineering for analysing, modelling and comparing cloud computing service level agreements“. Thesis, University of York, 2014. http://etheses.whiterose.ac.uk/11690/.

In cloud computing, service level agreements (SLAs) are critical, and underpin a pay-per-consumption business model. Different cloud providers offer different services (of different qualities) on demand, and their pre-defined SLAs are used both to advertise services and to define contracts with consumers. However, different providers express their SLAs using their own vocabularies, typically defined in terms of their own technologies and unique selling points. This can make it difficult for consumers to compare cloud SLAs systematically and precisely. We propose a modelling framework that provides mechanisms that can be used systematically and semi-automatically to model and compare cloud SLAs and consumer requirements. Using MDE principles and tools, we propose a metamodel for cloud provider SLAs and cloud consumer requirements, and thereafter demonstrate how to use model comparison technology for automating different matching processes, thus helping consumers to choose between different providers. We also demonstrate how the matching process can be interactively configured to take into account consumer preferences, via weighting models. The resulting framework can thus be used to better automate high-level consumer interactions with disparate cloud computing technology platforms.
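The preference-weighted matching described above can be approximated by a simple weighted score. This is a minimal sketch assuming numeric "at least" requirements and a flat attribute vocabulary; the thesis's metamodel-based comparison is far richer:

```python
# Weighted matching of a provider's SLA offer against consumer
# requirements. Attribute names, values and weights are illustrative.

def match_score(requirements, offer, weights):
    """Weighted fraction of consumer requirements met by an offer.
    `requirements` and `offer` map attribute -> value; `weights`
    encodes consumer preferences (a stand-in for weighting models)."""
    total = sum(weights.values())
    met = sum(w for attr, w in weights.items()
              if offer.get(attr, 0) >= requirements[attr])
    return met / total

reqs = {"availability": 99.9, "storage_gb": 100}
weights = {"availability": 3, "storage_gb": 1}   # availability matters most
offer_a = {"availability": 99.95, "storage_gb": 50}
offer_b = {"availability": 99.5, "storage_gb": 500}
```

With these weights, offer A (high availability, little storage) outscores offer B, reflecting how re-weighting changes which provider "wins" the comparison.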
19

García, García Andrés. „SLA-Driven Cloud Computing Domain Representation and Management“. Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/36579.

The assurance of Quality of Service (QoS) to applications, although identified as a key feature long ago [1], is one of the fundamental challenges that remain unsolved. In the Cloud Computing context, Quality of Service is defined as the measure of compliance with certain user requirements in the delivery of a cloud resource, such as CPU or memory load for a virtual machine, or more abstract and higher-level concepts such as response time or availability. Several research groups, both from academia and industry, have started working on describing the QoS levels that define the conditions under which the service needs to be delivered, as well as on developing the necessary means to effectively manage and evaluate the state of these conditions. [2] propose Service Level Agreements (SLAs) as the vehicle for the definition of QoS guarantees, and for the provision and management of resources. A Service Level Agreement (SLA) is a formal contract between providers and consumers, which defines the quality of service, the obligations and the guarantees in the delivery of a specific good. In the context of Cloud computing, SLAs are considered to be machine-readable documents, which are automatically managed by the provider's platform. SLAs need to be dynamically adapted to the variable conditions of resources and applications. In a multilayer architecture, different parts of an SLA may refer to different resources. SLAs may therefore express complex relationships between entities in a changing environment, and be applied to resource selection to implement intelligent scheduling algorithms. SLAs are therefore widely regarded as a key feature for the future development of Cloud platforms. However, the application of SLAs to Grid and Cloud systems leaves many open research lines. One of these challenges, the modeling of the landscape, lies at the core of the objectives of this Ph.D. thesis.
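As a minimal illustration of an SLA as a machine-readable document, the sketch below checks measured metrics against declared guarantees. The dictionary encoding, metric names and bounds are assumptions for illustration, not the thesis's actual SLA representation:

```python
# A toy machine-readable SLA: each metric carries a min and/or max bound.
# Metric names and thresholds are illustrative assumptions.
SLA = {
    "availability": {"min": 99.9},        # percent
    "response_time_ms": {"max": 200},     # milliseconds
}

def violations(sla, measured):
    """Return the list of guarantees the measured values fail to meet,
    the kind of check a provider platform would run automatically."""
    out = []
    for metric, bounds in sla.items():
        value = measured[metric]
        if "min" in bounds and value < bounds["min"]:
            out.append(metric)
        if "max" in bounds and value > bounds["max"]:
            out.append(metric)
    return out
```

A monitoring loop would call `violations` periodically and trigger renegotiation or penalties when the returned list is non-empty.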
García García, A. (2014). SLA-Driven Cloud Computing Domain Representation and Management [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/36579
20

Francischetti, Emilio Junior. „Garanzie del servizio in ambienti di cloud computing: uno studio sperimentale“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/2323/.

21

Yassa, Sonia. „Allocation optimale multicontraintes des workflows aux ressources d’un environnement Cloud Computing“. Thesis, Cergy-Pontoise, 2014. http://www.theses.fr/2014CERG0730/document.

Cloud Computing is increasingly recognized as a new way to use on-demand computing, storage and network services in a transparent and efficient way. In this thesis, we address the problem of workflow scheduling on the distributed heterogeneous infrastructure of Cloud Computing. Existing workflow scheduling approaches in the Cloud mainly focus on the bi-objective optimization of makespan and cost. In this thesis, we propose new workflow scheduling algorithms based on metaheuristics. Our algorithms are able to handle more than two QoS (Quality of Service) metrics, namely makespan, cost, reliability, availability and, in the case of physical resources, energy. In addition, they address several constraints according to the requirements specified in the SLA (Service Level Agreement). Our algorithms have been evaluated by simulation, using (1) synthetic workflows and real-world scientific workflows with different structures as applications, and (2) the features of Amazon EC2 services as Cloud resources. The obtained results show the effectiveness of our algorithms when dealing with multiple QoS metrics. Our algorithms produce one or more solutions, some of which outperform the solution produced by the HEFT heuristic over all the QoS metrics considered, including the makespan, for which HEFT is supposed to give good results.
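Metaheuristics that handle more than two QoS metrics typically compare candidate schedules by Pareto dominance rather than a single objective. A minimal sketch, assuming every metric has been normalised so that lower is better (reliability and availability would need to be inverted first):

```python
# Pareto dominance over QoS vectors (makespan, cost, energy, ...),
# all normalised so that lower is better - an assumption made here.

def dominates(a, b):
    """True if solution a is at least as good as b on every metric
    and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep only the non-dominated solutions."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

A multi-objective metaheuristic would maintain such a front across iterations and return it to the user, who then picks a trade-off.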
22

Ambrose, William, Samuel Athley und Niclas Dagland. „Cloud Computing : Security Risks, SLA, and Trust“. Thesis, Jönköping University, JIBS, Business Informatics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-12484.


With Cloud Computing becoming a popular term on the Information Technology (IT) market, security and accountability have become important issues to highlight. In our research we review these concepts by focusing on security risks with Cloud Computing and the associated services: Software, Platform and Infrastructure (SPI), and connecting them with a social study of trust.

Our research method consisted of reviewing secondary literature, interviewing different experts regarding Cloud Computing, and relating standards already established by ENISA, NIST, and CSA to the interviews.

The result of this study shows connections between the specific SPIs, both in how they compare and in how they differ. In the end we were also able to rank the top security risks from interviews with experts, to see which SPI could be the most insecure one, and to identify what countermeasures could be applied.

This was further related to trust and Service Level Agreements (SLAs) in Cloud Computing to show how the security risks we discuss are related to these two specific areas. By highlighting this we wanted to present usable information for both clients and providers on how to create a better Cloud Computing environment.

23

Flatt, Taylor. „CrowdCloud: Combining Crowdsourcing with Cloud Computing for SLO Driven Big Data Analysis“. OpenSIUC, 2017. https://opensiuc.lib.siu.edu/theses/2234.

The evolution of structured data from simple rows and columns on a spreadsheet to more complex unstructured data such as tweets, videos, voice, and others has resulted in a need for more adaptive analytical platforms. It is estimated that upwards of 80% of data on the Internet today is unstructured. There is a drastic need for crowdsourcing platforms to perform better in the wake of this tsunami of data. We investigated the employment of a monitoring service which would allow the system to take corrective action in the event the results were trending away from meeting the accuracy, budget, and time SLOs. Initial implementation and system validation have shown that taking corrective action generally leads to a better success rate of reaching the SLOs. A system which can dynamically adjust internal parameters in order to perform better can lead to more harmonious interactions between humans and machine algorithms, and to more efficient use of resources.
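The monitoring loop described above can be caricatured as a rule that compares observed progress against the fraction of time elapsed. The threshold logic and action names below are illustrative assumptions, not CrowdCloud's actual policy:

```python
# Toy SLO monitor: decide corrective actions when accuracy, budget burn
# or completion rate drift away from target. Action names are invented
# for illustration only.

def corrective_action(slo, observed, elapsed_frac):
    """Return the list of adjustments to apply this monitoring cycle.
    `elapsed_frac` is the fraction of the time budget already spent."""
    actions = []
    if observed["accuracy"] < slo["accuracy"]:
        actions.append("raise_worker_redundancy")   # vote with more workers
    if observed["spend_frac"] > elapsed_frac:       # burning budget too fast
        actions.append("shift_tasks_to_machine")
    if observed["done_frac"] < elapsed_frac:        # behind schedule
        actions.append("raise_task_price")
    return actions
```

Run each cycle, this keeps the system nudging its internal parameters toward the accuracy, budget, and time SLOs instead of discovering a miss only at the deadline.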
24

Hamze, Mohamad. „Autonomie, sécurité et QoS de bout en bout dans un environnement de Cloud Computing“. Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS033/document.

Today, Cloud Networking is one of the recent research areas within the Cloud Computing research community. The main challenges of Cloud Networking concern Quality of Service (QoS) and security guarantees, as well as their management in conformance with a corresponding Service Level Agreement (SLA). In this thesis, we propose a framework for resource allocation according to an end-to-end SLA established between a Cloud Service User (CSU) and several Cloud Service Providers (CSPs) within a Cloud Networking environment (Inter-Cloud Broker and Federation architectures). We focus on NaaS and IaaS Cloud services. We then propose the self-establishment of several kinds of SLAs and the self-management of the corresponding Cloud resources in conformance with these SLAs, using specific autonomic cloud managers. In addition, we extend the proposed architectures and the corresponding SLAs in order to deliver a service level that takes security guarantees into account. Moreover, we allow autonomic cloud managers to expand their self-management objectives to security functions (self-protection) while studying the impact of the proposed security on the QoS guarantee. Finally, our proposed architecture is validated by different simulation scenarios. We consider, within these simulations, videoconferencing and intensive computing applications in order to provide them with QoS and security guarantees in a Cloud self-management environment. The obtained results show that our contributions enable good performance for these applications. In particular, we observe that the Broker architecture is the most economical while ensuring the QoS and security requirements. In addition, we observe that Cloud self-management enables the reduction of violations and penalties, as well as limiting the impact of security on the QoS guarantee.
25

Walther, Sebastian [Verfasser], und Torsten [Akademischer Betreuer] Eymann. „An Investigation of Organizational Level Continuance of Cloud-Based Enterprise Systems / Sebastian Walther. Betreuer: Torsten Eymann“. Bayreuth : Universität Bayreuth, 2014. http://d-nb.info/1059352281/34.

26

Sun, Y. L. „The use of high-level requirements ontologies for discovering resources in a multi-provider cloud environment“. Thesis, Queen's University Belfast, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.677705.

This thesis proposes the use of high-level requirements ontologies for discovering resources in a multi-provider cloud environment. A high-level framework for deploying cloud-oriented applications, which harnesses existing cloud technologies, is developed. The framework provides an abstract multi-layered ontological model for specifying cloud application requirements. Domain-specific ontologies are used to specify high-level application requirements. These are translated into infrastructure ontologies which are agnostic to underlying providers and low-level resources. Resource and cost ontologies are used for specifying the capabilities and cost of infrastructure resources. The proposed model provides an abstract, application-centric mechanism for specifying an application's requirements and searching for a set of suitable resources in a multi-provider cloud environment. A two-phase resource discovery approach for selecting cloud resources is developed. In the first phase, a set of possible resources which meet an application's mandatory requirements is identified. In the second phase, a suitable heuristic is used to filter the initial resource set by taking other requirements into consideration. This approach enables the selection of appropriate resources based on the needs of the application at the time it is being deployed. Furthermore, a meta-programming model is developed to facilitate a unified approach to the management of cloud resources (offered by different providers). The proposed framework allows cloud users to specify application requirements without being overly concerned about the complexity of underlying provider frameworks and resources. The framework provides an effective mechanism for searching for a set of suitable resources that satisfy the application's requirements, specified at design time, while having the capability to adapt to requirement changes at runtime.
Cloud resources can be utilised effectively in order to maximize the performance of an application and minimise its deployment cost in a multi-provider cloud environment.
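The two-phase discovery approach can be sketched directly: filter on mandatory requirements, then rank the survivors with a pluggable heuristic. The attribute names and the cost-based heuristic are illustrative assumptions, not the thesis's ontological model:

```python
# Two-phase resource discovery: phase 1 keeps resources meeting all
# mandatory "at least" requirements; phase 2 ranks them by a heuristic.
# Resource attributes and the cost heuristic are illustrative.

def discover(resources, mandatory, score):
    """Return resources meeting every mandatory requirement,
    sorted best-first by the given heuristic score."""
    phase1 = [r for r in resources
              if all(r.get(k, 0) >= v for k, v in mandatory.items())]
    return sorted(phase1, key=score)

resources = [
    {"name": "a", "cpu": 4, "ram_gb": 8,  "cost": 0.20},
    {"name": "b", "cpu": 8, "ram_gb": 16, "cost": 0.40},
    {"name": "c", "cpu": 2, "ram_gb": 4,  "cost": 0.10},
]
ranked = discover(resources, {"cpu": 4, "ram_gb": 8},
                  score=lambda r: r["cost"])   # cheapest first
```

Here resource "c" is eliminated in phase 1 despite being cheapest, and the remaining candidates are ordered by the secondary (cost) criterion.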
27

Belcher, Rachel Beverly. „Fuzzy Logic Based Module-Level Power Electronics for Mitigation of Rapid Cloud Shading in Photovoltaic Systems“. Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41201.

A module-level DC optimization proof-of-concept architecture is proposed to increase the efficiency of photovoltaic (PV) strings by minimizing the negative effects of shading caused by intermittent cloud cover, while reducing cloud-induced fast frequency fluctuations. The decentralized inverter approach combines the benefits of string and micro-inverter technology. The device can be affixed to pre-existing or new systems and complies with the IEEE 1547 and California Rule 21 standards by operating in maximum power point tracking (MPPT) or curtailment mode whenever necessary. The module-level device encapsulates three individual processes: an optimization engine to determine minimum power requirements, a fuzzy logic controller (FLC) to eliminate the effect of passing cloud cover, and a voltage regulation stage to monitor and appropriately adjust the output voltage of the device. Ramp-rate reduction was accomplished using adaptive fuzzy logic control with a heuristic rule-base inference engine. The modular design can be affixed to grid-connected or islanded systems, allowing for operation in regulated and variable load conditions. Matlab/Simulink 2019a was used to design and simulate the proof-of-concept model to verify the resiliency to partial shading, the reduction of ramp rates during passing cloud coverage, and the optimal output voltage for each panel while maintaining a constant DC link voltage of 120 V. This proof of concept has been successfully validated; therefore, further testing will be performed for various irradiance conditions.
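The ramp-rate reduction goal can be illustrated with a plain rate limiter; the thesis's adaptive fuzzy logic controller is considerably more sophisticated, so the fixed `max_step` bound below is purely an assumption for illustration:

```python
# Plain ramp-rate limiter for a PV power series (watts per time step).
# A stand-in for the fuzzy controller: it clamps step-to-step changes
# so a passing cloud cannot cause an abrupt output swing.

def limit_ramp(power_series, max_step):
    """Return the series with step-to-step changes clamped to max_step."""
    out = [power_series[0]]
    for p in power_series[1:]:
        delta = max(-max_step, min(max_step, p - out[-1]))
        out.append(out[-1] + delta)
    return out
```

When a cloud drops module output from 100 W toward 40 W and sunlight then returns, the limited series descends and recovers gradually instead of jumping, which is the fluctuation-smoothing behaviour the FLC targets.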
28

Kathirvel, Anitha, und Siddharth Madan. „Efficient Privacy Preserving Key Management for Public Cloud Networks“. Thesis, KTH, Radio Systems Laboratory (RS Lab), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-148048.

Most applications and documents are stored in a public cloud for storage and management purposes in a cloud computing environment. The major advantages of storing applications and documents in a public cloud are the lower cost, through the use of shared computing resources, and the absence of upfront infrastructure costs. However, in this case the management of data and other services is insecure. Therefore, security is a major problem in a public cloud, as the cloud and the network are open to many other users. In order to provide security, it is necessary for data owners to store their data in the public cloud in a secure way and to use an appropriate access control scheme. Designing a computation- and communication-efficient key management scheme to selectively share documents based on fine-grained attribute-based access control policies in a public cloud is a challenging task. There are many existing approaches that encrypt documents prior to storage in the public cloud: these approaches use different keys and a public-key cryptographic system to implement attribute-based encryption and/or proxy re-encryption. However, these approaches do not efficiently handle users joining and leaving the system when identity attributes and policies change. Moreover, these approaches require keeping multiple encrypted copies of the same documents, which has a high computational cost or incurs unnecessary storage costs. Therefore, this project focused on the design and development of an efficient key management scheme that allows the data owner to store data in a cloud service in a secure way. Additionally, the proposed approach enables cloud users to access the data stored in the cloud in a secure way. Many researchers have proposed key management schemes for wired and wireless networks. All of these existing key management schemes differ from the key management scheme proposed in this thesis. First, the key management scheme proposed in this thesis increases access-level security. Second, the proposed key management scheme minimizes the computational complexity of the cloud users by performing only one mathematical operation to find the new group key that was computed earlier by the data owner. In addition, the proposed key management scheme is suitable for a cloud network. Third, the proposed key distribution and key management scheme utilizes privacy-preserving methods, thus preserving the privacy of the user. Finally, a batch key updating algorithm (also called batch rekeying) has been proposed to reduce the number of rekeying operations required for performing batch leave or join operations. The key management scheme proposed in this thesis is designed to reduce the computation and communication complexity in all but a few cases, while increasing the security and privacy of the data.
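The "one mathematical operation" property can be illustrated with an XOR-based rekeying sketch. This is not the thesis's construction, and it is deliberately naive: a departing member who knows the old key could also derive the new one from the broadcast, which is precisely the problem real group key management schemes must solve with key trees or secrets not held by leavers.

```python
# Naive XOR group rekeying, for illustration of the "single operation"
# idea only: the data owner broadcasts one value, and every remaining
# member recovers the new group key with a single XOR.

def rekey_broadcast(old_group_key, new_group_key):
    """Value the data owner broadcasts on a membership change."""
    return old_group_key ^ new_group_key

def member_update(old_group_key, broadcast):
    """Each member's one-operation update to obtain the new key."""
    return old_group_key ^ broadcast
```

Batch rekeying amortizes this further: several joins and leaves are folded into one new group key, so only one broadcast and one member-side operation are needed per batch rather than per membership change.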
APA, Harvard, Vancouver, ISO and other citation styles
29

Bergström, Rasmus. „Predicting Container-Level Power Consumption in Data Centers using Machine Learning Approaches“. Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79416.

The full text of the source
Annotation:
Due to the ongoing climate crisis, reducing waste and carbon emissions has become a hot topic in many fields of study. Cloud data centers account for a large portion of the world's energy consumption. In this work, methodologies are developed using machine learning algorithms to improve prediction of the energy consumption of a container in a data center. The goal is to share this information with the user ahead of time, so that they can make informed decisions about their environmental footprint. This work differentiates itself through its sole focus on optimizing prediction, as opposed to other approaches in the field where energy modeling and prediction have been studied as a means to building advanced scheduling policies in data centers. In this thesis, a qualitative comparison between various machine learning approaches to energy modeling and prediction is put forward. These approaches include Linear, Polynomial Linear and Polynomial Random Forest Regression, as well as a Genetic Algorithm, LSTM Neural Networks and Reinforcement Learning. The best results were obtained using Polynomial Random Forest Regression, which produced a Mean Absolute Error of 26.48% when run against data center metrics gathered after the model was built. This prediction engine was then integrated into a Proof of Concept application as an educational tool to estimate which metrics of a cloud job have what impact on the container's power consumption.
APA, Harvard, Vancouver, ISO and other citation styles
30

Safieddine, Ibrahim. „Optimisation d'infrastructures de cloud computing sur des green datacenters“. Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM083/document.

The full text of the source
Annotation:
Next-generation green datacenters were designed for optimized consumption and an improved quality of service as defined in the Service Level Agreement (SLA). However, in recent years the datacenter market has been growing rapidly, and the concentration of computing power is increasingly important, thereby increasing electrical power and cooling consumption. A datacenter consists of computing resources, cooling systems, and power distribution. Many research studies have focused on reducing the consumption of datacenters to improve the PUE while guaranteeing the same level of service. Some works aim at dynamically sizing resources according to the load in order to reduce the number of started servers; others seek to optimize the cooling system, which represents an important part of total consumption. In this thesis, in order to reduce the PUE, we study the design of an autonomous system for global cooling optimization, based on external data sources such as the outside temperature and weather forecasts, coupled with an overall IT load prediction module to absorb peaks of activity and to optimize active resources at a lower cost while preserving service quality. To ensure a better SLA, we propose a distributed architecture to detect complex operation anomalies in real time by analyzing large data volumes from thousands of sensors deployed in the datacenter. Early identification of abnormal behaviors allows faster reaction to threats that may impact the quality of service, with autonomous control loops that automate administration. We evaluate the performance of our contributions on data collected from an operating datacenter hosting real applications.
APA, Harvard, Vancouver, ISO and other citation styles
31

Sinclair, J. G. „An approach to compliance conformance for cloud-based business applications leveraging service level agreements and continuous auditing“. Thesis, Queen's University Belfast, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.676738.

The full text of the source
Annotation:
Organisations increasingly use flexible, adaptable and scalable IT infrastructures, such as cloud computing resources, for hosting business applications and storing customer data. To prevent the misuse of personal data, auditors can assess businesses for legal compliance conformance. For data privacy compliance there are many applicable pieces of legislation as well as regulations and standards. Businesses operate globally and typically have systems that are dynamic and mobile; in contrast current data privacy laws often have geographical jurisdictions and so conflicts can arise between the law and the technological framework of cloud computing. Traditional auditing approaches are unsuitable for cloud-based environments because of the complexity of potentially short-lived, migratory and scalable real-time virtual systems. My research goal is to address the problem of auditing cloud-based services for data privacy compliance by devising an appropriate machine-readable Service Level Agreement (SLA) framework for specifying applicable legal conditions. This allows the development of a scalable Continuous Compliance Auditing Service (CCAS) for monitoring data privacy in cloud-based environments. The CCAS architecture utilises agreed SLA conditions to process service events for compliance conformance. The CCAS architecture has been implemented and customised for a real world Electronic Health Record (EHR) scenario in order to demonstrate geo-location compliance monitoring using data privacy restrictions. Finally, the automated audit process of CCAS has been compared and evaluated against traditional auditing approaches and found to have the potential for providing audit capabilities in complex IT environments.
APA, Harvard, Vancouver, ISO and other citation styles
32

Ewelle, Ewelle Richard. „Adapter les communications des jeux dans le cloud“. Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS145/document.

The full text of the source
Annotation:
With the arrival of cloud computing technology, game accessibility and ubiquity have a bright future. Games can be hosted in a centralized server and accessed through the Internet by a thin client on a wide variety of devices with modest capabilities: cloud gaming. The advantages of using cloud computing in a game context include device ubiquity, computing flexibility, affordable cost, lowered set-up overheads and fewer compatibility issues. However, current cloud gaming systems have very strong requirements in terms of network resources, thus reducing their widespread adoption. In fact, devices with little bandwidth and people located in areas with limited network capacity cannot take advantage of these cloud services. In this thesis we present an adaptation technique inspired by the level of detail (LoD) approach in 3D graphics. It is based on a cloud gaming paradigm, in order to maintain the user's quality of experience (QoE) by reducing the impact of poor network parameters (delay, loss, bandwidth) on game interactivity. Our first contribution consists of game models expressing game objects and their communication needs, represented by their importance in the game. We provide two different ways to manage objects' importance, using agent organizations and gameplay components. We then provide a level of detail approach for managing network resource distribution based on object importance in the game scene and on network conditions. We exploit the dynamic object importance adjustment models presented above to propose LoD systems that adapt to changes during game sessions. The experimental validation of both adaptation models showed that the suggested adaptation minimizes the effects of low and/or unstable network conditions in maintaining game responsiveness and the player's QoE.
APA, Harvard, Vancouver, ISO and other citation styles
33

Raymondi, Luis Guillermo Antezana, Fabricio Eduardo Aguirre Guzman, Jimmy Armas-Aguirre and Paola Agonzalez. „Technological solution for the identification and reduction of stress level using wearables“. IEEE Computer Society, 2020. http://hdl.handle.net/10757/656578.

The full text of the source
Annotation:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher where it has been published.
In this article, a technological solution is proposed to identify and reduce a person's level of mental stress through a wearable device. The proposal identifies a physiological variable, heart rate, through the integration of a wearable and a mobile application using text recognition via the back camera of a smartphone. As part of the process, the technological solution shows a list of guidelines depending on the stress level obtained at a given time. Once completed, the measurement can be repeated in order to confirm the evolution of the stress level. This proposal allows patients to keep their stress level under control in an effective and accessible way in real time. The proposal consists of four phases: 1. Collection of parameters through the wearable; 2. Data reception by the mobile application; 3. Data storage in a cloud environment; and 4. Data collection and processing; this last phase is divided into four sub-phases: 4.1. Stress level analysis; 4.2. Recommendations to decrease the level obtained; 4.3. Comparison between measurements; and 4.4. Measurement history per day. The proposal was validated in a workplace with people aged 20 to 35 located in Lima, Peru. Preliminary results showed that 80% of patients managed to reduce their stress level with the proposed solution.
Peer-reviewed
APA, Harvard, Vancouver, ISO and other citation styles
34

Hamraz, Hamid. „AUTOMATED TREE-LEVEL FOREST QUANTIFICATION USING AIRBORNE LIDAR“. UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/69.

The full text of the source
Annotation:
Traditional forest management relies on small field samples and interpretation of aerial photography, which not only are costly to execute but also yield inaccurate estimates of the entire forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud into an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion of higher canopy layers with respect to point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, predominantly deciduous, closed-canopy forest. 90% of overstory and 47% of understory trees were detected, with false positive rates of 14% and 2% respectively. Vertical stratification improved the detection rate of understory trees to 67% at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours using 192 processing cores, achieving a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually assemble the features.
In conclusion, the methods developed are steps toward remote, accurate quantification of large natural forests at the individual tree level.
APA, Harvard, Vancouver, ISO and other citation styles
35

Sangroya, Amit. „Towards dependability and performance benchmarking for cloud computing services“. Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM016/document.

The full text of the source
Annotation:
Cloud computing models are attractive because of various benefits such as scalability, cost and flexibility in developing new software applications. However, availability, reliability, performance and security challenges are still not fully addressed. Dependability is an important issue for customers of cloud computing who want guarantees in terms of reliability and availability. Many studies have investigated the dependability and performance of cloud services, ranging from job scheduling to data placement and replication, and from adaptive and on-demand fault tolerance to new fault-tolerance models. However, the ad-hoc and overly simplified settings used to evaluate most cloud service fault-tolerance and performance improvement solutions pose significant challenges to the analysis and comparison of the effectiveness of these solutions. This thesis precisely addresses this problem and presents a benchmarking approach for evaluating the dependability and performance of cloud services. Designing dependability and performance benchmarks for a cloud service is a particular challenge because of the high complexity and the large amount of data processed by such a service. Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) are the three well-defined models of cloud computing. In this thesis, we focus on the PaaS model of cloud computing, which enables operating systems and middleware services to be delivered from a managed source over a network. We introduce a generic benchmarking architecture which is further used to build dependability and performance benchmarks for the PaaS model of cloud services.
APA, Harvard, Vancouver, ISO and other citation styles
36

Lascano, Jorge Edison. „A Pattern Language for Designing Application-Level Communication Protocols and the Improvement of Computer Science Education through Cloud Computing“. DigitalCommons@USU, 2017. https://digitalcommons.usu.edu/etd/6547.

The full text of the source
Annotation:
Networking protocols have been developed over time following layered architectures such as the Open Systems Interconnection model and the Internet model. These protocols are grouped in the Internet protocol suite. Most developers do not deal with low-level protocols; instead they design application-level protocols on top of the low-level ones. Although each application-level protocol is different, there is commonality among them, and developers can apply lessons learned from one protocol to the design of new ones. Design patterns can help by gathering and sharing proven and reusable solutions to common, recurring design problems. The Application-level Communication Protocols Design Patterns language captures this knowledge about application-level protocol design, so developers can create better, more fitting protocols based on these common and well-proven solutions. Another aspect of contemporary development practice is the need to distribute software artifacts. Most development companies have started using cloud computing services to meet this need; both public and private clouds are widely used. Future developers need to manage this technology infrastructure, software, and platform as services. These two aspects, communication protocol design and cloud computing, represent an opportunity to contribute to the software development community and to the software engineering education curriculum. The Application-level Communication Protocols Design Patterns language aims to help solve communication software design problems. The use of cloud computing in programming assignments aims to positively influence the analysis-to-reuse skills of computer science students.
APA, Harvard, Vancouver, ISO and other citation styles
37

Nilsson, Kristian, and Hans-Eric Jönsson. „A comparison of image and object level annotation performance of image recognition cloud services and custom Convolutional Neural Network models“. Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18074.

The full text of the source
Annotation:
Recent advancements in machine learning have contributed to an explosive growth of the image recognition field. Simultaneously, multiple Information Technology (IT) service providers such as Google and Amazon have embraced cloud solutions and software as a service. These factors have helped mature many computer vision tasks from scientific curiosity to practical applications. As image recognition is now accessible to the general developer community, a need arises for a comparison of its capabilities and of what can be gained from choosing a cloud service over a custom implementation. This thesis empirically studies the performance of five general image recognition services (Google Cloud Vision, Microsoft Computer Vision, IBM Watson, Clarifai and Amazon Rekognition) and of image recognition models of the Convolutional Neural Network (CNN) architecture that we ourselves configured and trained. Image and object level annotations of images extracted from different datasets were tested, both in their original state and after being subjected to one of the following six types of distortion: brightness, color, compression, contrast, blurriness and rotation. The output labels and confidence scores were compared to the ground truth at multiple levels of concepts, such as food, soup and clam chowder. The results show that out of the services tested, there is currently no clear top performer across all categories, and they all have some variations and similarities in their output, but on average Google Cloud Vision performs best by a small margin. The services are all adept at identifying high-level concepts such as food and most mid-level ones such as soup. However, in terms of further specifics, such as clam chowder, they start to vary, some performing better than others in different categories. Amazon was found to be the most capable of identifying multiple unique objects within the same image on the chosen dataset.
Additionally, it was found that by using synonyms of the ground truth labels, performance increased as the semantic gap between our expectations and the actual output from the services was narrowed. The services all showed vulnerability to image distortions, especially compression, blurriness and rotation. The custom models all performed noticeably worse, around half as well as the cloud services, possibly due to the difference in training data standards. The best model, configured with three convolutional layers, 128 nodes and a layer density of two, reached an average performance of almost 0.2, or 20%. In conclusion, if one is limited by a lack of experience with machine learning, computational resources or time, it is recommended to make use of one of the cloud services to reach a more acceptable performance level. Which one to choose depends on the intended application, as the services perform differently in certain categories. The services are all vulnerable to multiple image distortions, potentially allowing adversarial attacks. Finally, there is definitely room for improvement with regard to the performance of these services and the computer vision field as a whole.
APA, Harvard, Vancouver, ISO and other citation styles
38

Moissl, Richard. „Morphology and dynamics of the Venus atmosphere at the cloud top level as observed by the Venus monitoring camera“. Katlenburg-Lindau: Copernicus Publ., 2008. http://d-nb.info/990118193/04.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
39

Frisk, Arfvidsson Nils, und David Östlin. „Green Cloud Transition & Environmental Impacts of Stock Exchanges : A Case Study of Nasdaq, a Global Stock Exchange Company“. Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279615.

Der volle Inhalt der Quelle
Annotation:
To address the issues of climate change and reduce the emissions released into the atmosphere, society and companies, including the financial markets, need to adjust how they act and conduct business. The financial markets are vital in the transition towards a more sustainable society, and stock exchanges are a central actor in advancing green finance by enabling green securities to be traded. For stock exchange companies to stand tall and encourage a green transition, they need to be aware of their own internal environmental impact. As society is changing to become more service-oriented, so are stock exchanges. Part of enabling servitization is the use of cloud services, which not only enables companies to focus more on their core business but also has the potential to reduce companies' environmental footprint. This study examines the environmental impact of a stock exchange company and how it can be reduced by transitioning to cloud computing. The study uses Nasdaq as a case company and examines environmental performance data from major stock exchanges worldwide. The study furthermore uses the Multi-Level Perspective (MLP) to understand what enables and disables a cloud transition for stock exchanges. This study concludes that the main environmental impact of a stock exchange comes from business travel, electricity and heat for office buildings, and data centres, although the order of these varies throughout the industry. Further, it is concluded that a stock exchange can reduce its environmental footprint by transitioning to cloud computing: in the best-case scenario, emissions are reduced by 10 percent and electricity usage by almost 30 percent of total usage. However, the impact of a transition depends on the share of renewable energy used by the data centre.
The study finds that a cloud transition involves enablers and disablers on all three levels of the MLP, and it will most likely be incremental innovations, together with a business model shift and the technical traits of cloud, that open the window of opportunity for a regime shift. It is concluded that neither the technology nor the IT security of cloud computing is hindering a cloud transition; rather, it is organizational culture, assumptions, financial lock-ins, and landscape protectionism that act as disablers. To overcome these, and reduce the environmental footprint, stock exchanges need to work together with cloud providers to create use cases that are in line with the regulatory and financial requirements of a stock exchange.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Li, Ge. „Contrôle des applications fondé sur la qualité de service pour les plate-formes logicielles dématérialisées (Cloud)“. Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAA018/document.

Der volle Inhalt der Quelle
Annotation:
Cloud computing is a new model of computing systems. Infrastructure, applications, and data are moved from local machines to dematerialized systems accessed as services over the Internet. The pay-per-use model enables cost savings by changing the configuration at runtime (elasticity). The goal of this thesis is to contribute to managing the Quality of Service (QoS) of applications running in the cloud. Cloud services claim to provide significant flexibility in allocating computing resources in response to perceived variations, such as load fluctuations. These variation capabilities must be precisely expressed in a contract (the Service Level Agreement, SLA) when the application is hosted by a Platform as a Service (PaaS) provider. In this thesis, we propose and formally describe the SLA description language PSLA. PSLA is based on WS-Agreement, itself an extensible SLA description language. Negotiations prior to signing the SLA are essential, during which the PaaS provider must evaluate the feasibility of the contract terms. This evaluation, for example of response time, maximum throughput of served requests, etc., is based on an analysis of the application's behavior when deployed on the cloud infrastructure. A behavioral analysis of the application is therefore necessary and is usually carried out through benchmarks. Benchmarks are relatively costly, and a precise feasibility study generally requires many of them. In this thesis, we propose a method for studying the feasibility of performance criteria, starting from an SLA proposal expressed in PSLA. This method is a compromise between the precision of an exhaustive feasibility study and benchmarking costs.
The results of this study form the initial model of the mapping between incoming load and resource allocation used at runtime. The runtime control of an application manages resource allocation according to needs, relying in particular on the scalability of cloud infrastructures. We propose RCSREPRO (Runtime Control method based on Schedule, REactive and PROactive methods), a runtime control method based on scheduling and on reactive and predictive controls. The need for runtime adaptation is essentially due to variations in the load submitted to the application, which are difficult to estimate before execution and only coarsely described in the SLA. Adaptation decisions must therefore be deferred to runtime, where possible load variations are assessed. Since actions that modify allocated resources can take several minutes, RCSREPRO performs predictive control based on load analysis and on the mapping between performance indicators and allocated resources, initially defined through benchmarks. This mapping is continuously improved at runtime. In summary, the contributions of this thesis are the PSLA language for describing SLAs; a method for studying the feasibility of an SLA; and a method (RCSREPRO) for controlling the application at runtime to guarantee the SLA. This work is part of the FSN OpenCloudware project (www.opencloudware.org) and was partly funded by it.
Cloud computing is a new computing model. Infrastructure, applications, and data are moved from local machines to the Internet and provided as services. Cloud users, such as application owners, can save considerably thanks to the elasticity of cloud services, i.e., their "pay as you go" and on-demand characteristics. The goal of this thesis is to manage the Quality of Service (QoS) of applications running in cloud environments. Cloud services give application owners great flexibility to assign a "suitable" amount of resources according to changing needs, for example those caused by a fluctuating request rate. What counts as "suitable" needs to be clearly documented in a Service Level Agreement (SLA) when this resource-demanding task is hosted by a third party, such as a Platform as a Service (PaaS) provider. In this thesis, we propose and formally describe PSLA, an SLA description language for PaaS. PSLA is based on WS-Agreement, which is extensible and widely accepted as an SLA description language. Before signing the SLA contract, negotiations are unavoidable. During negotiations, the PaaS provider needs to evaluate whether the SLA drafts are feasible. These evaluations are based on an analysis of the behavior of the application deployed on the cloud infrastructure, for instance the throughput of served requests, the response time, etc. Application-dependent analysis, such as benchmarking, is therefore needed. Benchmarks are relatively costly, and a precise feasibility study usually implies a large number of benchmarks. In this thesis, we propose a benchmark-based SLA feasibility study method to evaluate whether an SLA expressed in PSLA, including QoS targets, resource constraints, cost constraints, and workload constraints, can be achieved. This method makes a tradeoff between the accuracy of the SLA feasibility study and benchmarking costs.
The intermediate results of this benchmark-based feasibility study are used as the workload-resource mapping model of our runtime control method. When an application is running on a cloud infrastructure, the scalability of the infrastructure allows us to allocate and release resources according to changing needs. These resource-provisioning activities are called runtime control. We propose RCSREPRO, a Runtime Control method based on Schedule, REactive and PROactive methods. For most applications running in the cloud, changing needs are mainly caused by a fluctuating workload. Detailed workload information, for example the request arrival rates at scheduled points in time, is difficult to know before running the application. Moreover, the workload information listed in PSLA is too coarse to derive a fitted resource-provisioning schedule before runtime; runtime control decisions therefore need to be made in real time. Since resource-provisioning actions usually take several minutes, RCSREPRO performs proactive runtime control: it predicts future needs and assigns resources in advance so that they are ready when needed. Hence, workload prediction and workload-resource mapping are the two problems involved in proactive runtime control. The workload-resource mapping model, initially derived from the benchmarks of the SLA feasibility study, is continuously improved through feedback at runtime, increasing the accuracy of the control. To sum up, we contribute three aspects to the QoS management of applications running in the cloud: the creation of PSLA, a PaaS-level SLA description language; a benchmark-based SLA feasibility study method; and a runtime control method, RCSREPRO, to ensure the SLA while the application is running. The work described in this thesis was motivated and partly funded by the FSN OpenCloudware project (www.opencloudware.org).
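The proactive control loop summarized above can be sketched in a few lines. This is an illustrative toy, not the thesis's actual algorithm: the forecasting rule, the class names, and all numbers are assumptions made up for the example. It shows the two ingredients named in the abstract, a workload prediction and a workload-resource mapping that is refined by feedback at runtime.

```python
# Hypothetical sketch of proactive runtime control in the spirit of RCSREPRO:
# predict the next workload, then size resources using a workload-resource
# mapping that starts from a benchmark estimate and is corrected by feedback.

def predict_workload(history, window=3):
    """Naive forecast: average of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

class WorkloadResourceMapping:
    """Maps requests/s to a VM count; the per-VM capacity is initially a
    benchmark-derived estimate, later blended with observed throughput."""

    def __init__(self, requests_per_vm):
        self.requests_per_vm = requests_per_vm  # initial benchmark estimate

    def vms_needed(self, workload):
        # Round up: provisioning too little would risk an SLA violation.
        return max(1, -(-int(workload) // int(self.requests_per_vm)))

    def feedback(self, observed_workload, observed_vms):
        # Blend the observed per-VM throughput into the model (simple EWMA).
        observed = observed_workload / observed_vms
        self.requests_per_vm = 0.5 * self.requests_per_vm + 0.5 * observed

history = [90, 110, 130]                  # requests/s in past control periods
mapping = WorkloadResourceMapping(requests_per_vm=50)

forecast = predict_workload(history)      # mean of the last three periods
plan = mapping.vms_needed(forecast)       # VMs provisioned ahead of need
mapping.feedback(observed_workload=120, observed_vms=3)   # refine the model
```

Because provisioning actions take minutes, the plan is computed one control period ahead; the feedback step is what makes the mapping more accurate over time.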
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Östlin, David, und Arfvidsson Nils Frisk. „Green Cloud Transition & Environmental Impacts of Stock Exchanges : A Case Study of Nasdaq, a Global Stock Exchange Company“. Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278173.

Der volle Inhalt der Quelle
Annotation:
To address the issues of climate change and reduce the emissions released into the atmosphere, society and companies, including the financial markets, need to adjust how they act and conduct business. The financial markets are vital in the transition towards a more sustainable society, and stock exchanges are a central actor in enhancing green finance by enabling green securities to be traded. For stock exchange companies to stand tall and encourage a green transition, they need to be aware of their own internal environmental impact. As society is changing to become more service-oriented, so are stock exchanges. Part of enabling servitization is the usage of cloud services, which not only allow companies to focus more on their core business but also have the potential to reduce companies' environmental footprint. This study examines the environmental impact of a stock exchange company and how it can be reduced by transitioning to cloud computing. The study uses Nasdaq as a case company and examines environmental performance data from major stock exchanges worldwide. The study furthermore uses the Multi-Level Perspective (MLP) to understand what enables and hinders a cloud transition for stock exchanges. This study concludes that the main environmental impacts of a stock exchange are Business Travel, electricity and heat for Office Buildings, and Data Centres, although the order of these varies throughout the industry. Further, it is concluded that a stock exchange can reduce its environmental footprint by transitioning to cloud computing: in the best-case scenario, emissions are reduced by 10 percent and electricity usage by almost 30 percent of the total usage. However, the impact of a transition is dependent on the share of renewable energy used by the data centre.
The study finds that a cloud transition involves enablers and disablers at all three levels of the MLP, and that it will most likely be incremental innovations, together with a business-model shift and the technical traits of cloud, that enable and open the window of opportunity for a regime shift. It is concluded that it is not the technology or IT security of cloud computing that hinders a cloud transition; rather, organizational culture, assumptions, financial lock-ins, and landscape protectionism are the disablers of a transition. To overcome those, and reduce the environmental footprint, stock exchanges need to work together with cloud providers to create use cases that are in line with the regulatory and financial requirements of a stock exchange.
(Swedish abstract.) To address climate change and reduce emissions into the atmosphere, society and companies, including the financial markets, must adapt how they act and conduct business. The financial markets are vital to the transition towards a more sustainable society, and stock exchanges are a central actor in developing green finance and enabling trade in green securities. If listed companies are to stand tall and encourage a green transition, they must be aware of their own internal environmental impact. As society changes to become more service-oriented, so do the stock exchanges. One factor enabling servitization is the use of cloud services, which not only allow more focus on core business but also have the potential to reduce companies' environmental impact. This study examines the environmental impact of a stock exchange company and how it can be reduced through a transition to cloud services. The study uses Nasdaq as a case company and examines environmental-impact data from the major stock exchanges worldwide. Furthermore, to understand what enables and hinders a cloud transition for stock exchanges, the study applies the Multi-Level Perspective (MLP). The key findings are that the largest environmental impacts of a stock exchange are business travel, electricity and heat for office buildings, and data centres, although the order of these varies between exchanges. The study concludes that an exchange can reduce its environmental impact by transitioning to cloud services: in the best case, cloud services can reduce emissions by 10 percent and cut electricity use by almost 30 percent of total consumption. The effects of a transition, however, depend strongly on the share of renewable energy used by the various data centres. The study identifies several factors at all three levels of the MLP that both enable and hinder a cloud transition.
It will most likely be incremental innovation, together with business-model changes and the technical traits of cloud services, that enables and opens the window of opportunity for a regime shift. The study concludes that it is not the technology or the IT security of cloud services that hinders a cloud transition, but rather organizational culture, preconceptions, financial lock-ins, and landscape protectionism. To overcome these and reduce environmental impact, exchanges must work together with cloud providers to create use cases that comply with legislation and the financial requirements of a stock exchange.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Silva, Ticiana Linhares Coelho da. „Processamento elástico e não-intrusivo de consultas em ambientes de nuvem considerando o SLA“. reponame:Repositório Institucional da UFC, 2013. http://www.repositorio.ufc.br/handle/riufc/18653.

Der volle Inhalt der Quelle
Annotation:
SILVA, Ticiana Linhares Coelho da. Processamento elástico e não-intrusivo de consultas em ambientes de nuvem considerando o SLA. 2013. 54 f. Dissertação (Mestrado em ciência da computação)- Universidade Federal do Ceará, Fortaleza-CE, 2013.
Cloud computing is a promising paradigm of service-oriented computing. Its greatest benefit is elasticity, that is, the system's ability to add and remove resources automatically at runtime. To achieve this, it is essential to design and implement an effective and efficient technique that takes advantage of the system's flexibility. Providing elasticity thus requires continuously monitoring (or predicting) the system's demand for resources in order to decide when to add and remove them. This work presents a non-intrusive, continuous monitoring method for relational DBMSs on a cloud infrastructure, aiming to minimize the number of virtual machines provisioned for query processing and consequently to maximize the efficient use of the provider's environment. In addition, it aims to satisfy a service-level agreement (SLA) associated with each query submitted to the system; another objective of this work is therefore to minimize the penalty paid by the provider when an SLA violation occurs. Besides the monitoring method, this work also contributes a VM-provisioning method for query processing. Our monitoring strategy is applied to select-range queries and aggregation queries over a single table. The experiments were carried out on Amazon's cloud infrastructure, confirming that our technique is elastic, allowing the resources allocated to the system to be adjusted automatically and dynamically based on the agreed SLA.
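A monitoring-driven provisioning decision of the kind described above can be sketched as follows. This is a minimal sketch under assumed thresholds and prices, not the thesis's actual method: the idea is to scale out when the observed response time approaches the SLA limit and scale in when there is ample slack, so that as few VMs as possible are provisioned while SLA penalties are avoided.

```python
# Illustrative sketch: one control step of an elastic, SLA-aware monitor.
# All thresholds and the penalty rate are made-up example values.

def monitor_step(current_vms, response_time_s, sla_limit_s,
                 scale_up_ratio=0.9, scale_down_ratio=0.5, min_vms=1):
    """Return the VM count for the next control period."""
    if response_time_s > scale_up_ratio * sla_limit_s:
        return current_vms + 1                    # near violation: scale out
    if response_time_s < scale_down_ratio * sla_limit_s:
        return max(min_vms, current_vms - 1)      # ample slack: scale in
    return current_vms                            # within the target band

def penalty(response_time_s, sla_limit_s, penalty_per_second=2.0):
    """Penalty the provider pays when a query violates its SLA."""
    return max(0.0, response_time_s - sla_limit_s) * penalty_per_second

# A query with a 10 s SLA limit observed at 9.5 s triggers a scale-out.
next_vms = monitor_step(current_vms=2, response_time_s=9.5, sla_limit_s=10.0)
```

Scaling out slightly before the limit is reached (the 0.9 ratio) trades a small amount of extra capacity against the larger cost of paying the violation penalty.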
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Pham, Manh Linh. „Roboconf : une plateforme autonomique pour l'élasticité multi-niveau, multi-granularité pour les applications complexes dans le cloud“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM009/document.

Der volle Inhalt der Quelle
Annotation:
Software applications are increasingly diverse and complex. With the rapid development of cloud computing and its applications, software applications are becoming more complex than ever. Complex cloud applications may contain a large number of software components that require and consume a large amount of resources (hardware or other software components) distributed over several levels according to the granularity of these resources. Moreover, these software components may be located on different clouds. The software components of a cloud application and their required resources have complex relationships, some of which can be resolved at design time while others must be handled at runtime. Software complexity and the heterogeneity of the cloud environment are challenges for which current elasticity solutions must find appropriate answers. Elasticity is one of the benefits of cloud computing: the ability of a cloud system to adapt to workload changes by provisioning and deprovisioning resources autonomously, so that the available resources match the current demand as closely as possible at every moment. To obtain an effective elasticity solution that not only reflects the complexity of cloud applications but also deploys and manages them autonomously, we propose a novel elasticity approach, called multi-level fine-grained elasticity, which covers two aspects of application complexity: multiple software components and the granularity of resources. Multi-level fine-grained elasticity concerns both the objects affected by elasticity actions and the granularity of those actions.
In this thesis, we introduce the Roboconf platform, an autonomic cloud computing system (ACCS) for installing and reconfiguring complex applications as well as supporting multi-level fine-grained elasticity. To this end, Roboconf is also an autonomic elasticity manager. Thanks to this platform, we can abstract complex cloud applications and automate their installation and reconfiguration, which can otherwise amount to several hundred hours of labour. We also use Roboconf to implement multi-level fine-grained elasticity algorithms on these applications. The experiments conducted not only show the efficiency of multi-level fine-grained elasticity but also validate the features of the Roboconf platform that support this approach.
Software applications are becoming more diverse and complex. With the rapid development of cloud computing and its applications, software applications are becoming more complex than ever. Complex cloud applications may contain a large number of software components that require and consume a large amount of resources (hardware or other software components) distributed over multiple levels according to the granularity of these resources. Moreover, these software components may be located on different clouds. The software components of a cloud application and their required resources have complex relationships, some of which can be resolved at design time while others must be tackled at run time. Software complexity and the heterogeneity of cloud environments are challenges for which current elasticity solutions need to find appropriate answers. Elasticity is one of the benefits of cloud computing: the capability of a cloud system to adapt to workload changes by provisioning and deprovisioning resources in an autonomic manner, so that the available resources fit the current demand as closely as possible at each point in time. To obtain an efficient elasticity solution that not only reflects the complexity of cloud applications but also deploys and manages them in an autonomic manner, we propose a novel elasticity approach. It is called multi-level fine-grained elasticity and covers two aspects of an application's complexity: multiple software components and the granularity of resources. Multi-level fine-grained elasticity concerns the objects impacted by elasticity actions and the granularity of these actions. In this thesis, we introduce the Roboconf platform, an autonomic cloud computing system (ACCS), to install and reconfigure complex applications as well as support multi-level fine-grained elasticity. To this end, Roboconf is also an autonomic elasticity manager.
Thanks to this platform, we can abstract complex cloud applications and automate their installation and reconfiguration, which can otherwise represent several hundred hours of labour. We also use Roboconf to implement the multi-level fine-grained elasticity algorithms on these applications. The conducted experiments not only indicate the efficiency of multi-level fine-grained elasticity but also validate the features of the Roboconf platform that support this approach.
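The distinction between elasticity actions at different granularities can be illustrated with a toy model. This is an assumption-laden sketch, not Roboconf's API: the `Vm`/`Deployment` classes and the capacity numbers are invented. The point is that a single scale-out request may be satisfied by a fine-grained action (duplicating one software component on an existing VM) or, when no capacity remains, by a coarse-grained action (provisioning a whole new VM).

```python
# Toy model of multi-level fine-grained elasticity: prefer component-level
# actions on existing VMs; fall back to VM-level actions when full.

class Vm:
    def __init__(self, capacity=4):
        self.capacity = capacity          # how many components fit on this VM
        self.components = []

    def has_room(self):
        return len(self.components) < self.capacity

class Deployment:
    def __init__(self):
        self.vms = [Vm()]

    def scale_out(self, component):
        """Fine-grained action first; coarse-grained (new VM) as fallback."""
        for vm in self.vms:
            if vm.has_room():
                vm.components.append(component)   # component-level action
                return "component"
        vm = Vm()                                 # VM-level action
        vm.components.append(component)
        self.vms.append(vm)
        return "vm"

dep = Deployment()
actions = [dep.scale_out("apache") for _ in range(5)]  # 5 scale-out requests
```

With a capacity of four components per VM, the first four requests stay at component granularity and only the fifth escalates to VM granularity, which is the cheaper-first ordering a fine-grained elasticity manager would aim for.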
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Dib, Djawida. „Optimizing PaaS provider profit under service level agreement constraints“. Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S044/document.

Der volle Inhalt der Quelle
Annotation:
Cloud computing is an emerging paradigm that is revolutionizing the use and marketing of IT services. Nowadays, the socio-economic impact of cloud computing, and of PaaS (Platform as a Service) offerings in particular, is becoming critical, since the number of PaaS cloud users and providers is growing rapidly. The main objective of PaaS cloud providers is to generate the maximum profit from the services they provide. This requires them to face a number of challenges, such as efficiently managing the underlying resources and satisfying the SLAs (service-level agreements) of the hosted applications. In this thesis, we consider a hybrid cloud-bursting PaaS environment, where the PaaS provider owns a limited number of private resources and can rent public resources. This choice allows the PaaS provider to have full control over the services hosted on the private resources and to take advantage of public resources to handle peak periods. We also propose a profit-efficient solution for managing such a PaaS system under SLA constraints. We define a profit-optimization policy that, for each new client request, evaluates the cost of hosting the application using public and private resources and chooses the option that generates the highest profit. During peak periods, the policy considers two further options. The first is to borrow some resources from running applications, taking into account the payment of penalties if their quality of service is affected. The second is to wait for private resources to be released, taking into account the payment of penalties if the quality of service of the new application is affected.
Furthermore, we designed and implemented a PaaS cloud architecture, called Meryn, which integrates the proposed optimization policy, supports cloud bursting, and hosts batch and MapReduce applications. The results of our evaluation show the effectiveness of our approach in optimizing the provider's profit. Indeed, compared to a basic approach, our approach yields up to 11.59% and 9.02% more profit for the provider in simulations and experiments, respectively.
Cloud computing is an emerging paradigm revolutionizing the use and marketing of information technology. As the number of cloud users and providers grows, the socio-economic impact of cloud solutions, and particularly PaaS (platform as a service) solutions, is becoming increasingly critical. The main objective of PaaS providers is to generate the maximum profit from the services they provide. This requires them to face a number of challenges, such as efficiently managing the underlying resources and satisfying the SLAs of the hosted applications. This thesis considers a cloud-bursting PaaS environment where the PaaS provider owns a limited number of private resources and is able to rent public cloud resources when needed. This environment enables the PaaS provider to have full control over services hosted on the private cloud and to take advantage of public clouds for managing peak periods. In this context, we propose a profit-efficient solution for managing the cloud-bursting PaaS system under SLA constraints. We define a profit-optimization policy that, after each client request, evaluates the cost of hosting the application using public and private resources and chooses the option that generates the highest profit. During peak periods, the optimization policy considers two more options. The first is to take some resources from running applications, taking into account the payment of penalties if their promised quality of service is affected. The second is to wait until private resources become available, taking into account the payment of penalties if the promised quality of service of the new application is affected. Furthermore, we designed and implemented an open cloud-bursting PaaS system, called Meryn, which integrates the proposed optimization policy and provides support for batch and MapReduce applications. The results of our evaluation show the effectiveness of our approach in optimizing the provider's profit.
Indeed, compared to a basic approach, our approach provides up to 11.59% and 9.02% more provider profit in simulations and experiments, respectively.
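The hosting decision described in the abstract can be reduced to a small comparison of expected profits. The following is a deliberately simplified, hypothetical version of such a policy, not Meryn's actual implementation: prices, penalties, and option names are invented, and the real policy also weighs taking resources from running applications.

```python
# Simplified sketch of a profit-optimization policy for a cloud-bursting
# PaaS: for each new request, compare the profit of each hosting option
# and pick the best. All monetary values are illustrative.

def best_option(revenue, private_free, private_cost, public_cost,
                wait_penalty):
    """Return the most profitable hosting option for a new application."""
    options = {}
    if private_free:                               # private capacity exists
        options["private"] = revenue - private_cost
    options["public"] = revenue - public_cost      # burst to the public cloud
    # Waiting uses cheap private resources later but risks an SLA penalty.
    options["wait"] = revenue - private_cost - wait_penalty
    return max(options, key=options.get)

# Off-peak: private resources are available and cheapest.
off_peak = best_option(revenue=10.0, private_free=True,
                       private_cost=2.0, public_cost=5.0, wait_penalty=3.0)

# Peak: no private capacity; bursting beats paying a large waiting penalty.
peak = best_option(revenue=10.0, private_free=False,
                   private_cost=2.0, public_cost=5.0, wait_penalty=4.0)
```

The interesting regime is the peak case: whether to burst or wait depends entirely on how the public-cloud rental cost compares with the SLA penalty for delaying the new application.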
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Roozbeh, Amir. „Toward Next-generation Data Centers : Principles of Software-Defined “Hardware” Infrastructures and Resource Disaggregation“. Licentiate thesis, KTH, Kommunikationssystem, CoS, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249618.

Der volle Inhalt der Quelle
Annotation:
The cloud is evolving due to additional demands introduced by new technological advancements and the wide movement toward digitalization. Therefore, next-generation data centers (DCs) and clouds are expected (and need) to become cheaper, more efficient, and capable of offering more predictable services. Aligned with this, we examine the concept of software-defined "hardware" infrastructures (SDHI) based on hardware resource disaggregation as one possible way of realizing next-generation DCs. We start with an overview of the functional architecture of a cloud based on SDHI. Following this, we discuss a series of use cases and deployment scenarios enabled by SDHI and explore the role of each functional block of SDHI's architecture, i.e., cloud infrastructure, cloud platforms, cloud execution environments, and applications. Next, we propose a framework to evaluate the impact of SDHI on the techno-economic efficiency of DCs, specifically focusing on application profiling, hardware dimensioning, and total cost of ownership (TCO). Our study shows that combining resource disaggregation and software-defined capabilities makes DCs less expensive and easier to expand; hence they can rapidly follow the exponential demand growth. Additionally, we elaborate on the technologies behind SDHI, its challenges, and its potential future directions. Finally, to identify a suitable memory management scheme for SDHI and show its advantages, we focus on the management of the Last Level Cache (LLC) in currently available Intel processors. Aligned with this, we investigate how better management of the LLC can provide higher performance, more predictable response times, and improved isolation between threads. More specifically, we take advantage of the LLC's non-uniform cache architecture (NUCA), in which the LLC is divided into "slices," and a core's access to the slice closer to it is faster than its access to the other slices.
Based upon this, we introduce a new memory management scheme, called slice-aware memory management, which carefully maps allocated memory to LLC slices based on their access latency, rather than following the de facto scheme that maps memory uniformly across slices. Many applications can benefit from our memory management scheme with relatively small changes. As an example, we show the potential benefits that Key-Value Store (KVS) applications gain by utilizing our memory management scheme. Moreover, we discuss how this scheme could be used to provide explicit CPU slicing, which is one of the expectations of SDHI and hardware resource disaggregation.
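The slice-aware idea can be reduced to a toy model to make the intuition concrete. This is purely illustrative and not the thesis's implementation: the per-core latency table is invented, and real slice-aware management works at the level of physical addresses hashed to slices, which cannot be expressed this simply. The sketch only shows why mapping a core's hot data to its nearest slice beats the uniform default.

```python
# Toy model of slice-aware memory management on a NUCA last-level cache:
# each core sees a different access latency to each LLC slice, so placing
# a core's hot data in its nearest slice lowers the expected latency.
# Latency numbers (in cycles) are made up for illustration.

LATENCY_CYCLES = {            # LATENCY_CYCLES[core][slice]
    0: [18, 24, 30, 36],
    1: [36, 18, 24, 30],
}

def fastest_slice(core):
    """Slice a core should use for its hot data under slice-aware mapping."""
    lat = LATENCY_CYCLES[core]
    return lat.index(min(lat))

def expected_latency(core, slice_aware=True):
    lat = LATENCY_CYCLES[core]
    if slice_aware:
        return min(lat)                  # hot accesses hit the near slice
    return sum(lat) / len(lat)           # uniform mapping averages over slices

speedup = expected_latency(0, slice_aware=False) / expected_latency(0)
```

Even in this crude model the gap is visible: averaging over all slices costs 27 cycles per access for core 0, versus 18 cycles when its data sits in the nearest slice.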

QC 20190415

APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Kandi, Mohamed Mehdi. „Allocation de ressources élastique pour l'optimisation de requêtes“. Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30172.

Der volle Inhalt der Quelle
Annotation:
Cloud computing has become a widely used means of querying databases. Current cloud providers offer a variety of services implemented on parallel architectures. Performance objectives, and the penalties applicable in case of violation, are established in advance in a contract called a Service-Level Agreement (SLA). The provider's goal is to maximize its profit while meeting the tenants' needs. Before the advent of cloud systems, several works considered the resource-allocation problem for database querying on parallel architectures. The execution plan of each query is a graph of dependent tasks. In these works, "resource allocation" usually means placing tasks on the available resources and scheduling them while respecting dependency links. The goal was mainly to minimize query execution time and maximize resource utilization. In the cloud, however, this goal does not necessarily guarantee the best economic benefit for the provider. To maximize the benefit and satisfy tenants' needs, it is essential to include the economic model and the SLAs in the resource-allocation process. Indeed, tenants' performance needs differ, so it is worthwhile to allocate resources in a way that favors the most demanding tenants while still ensuring a certain quality of service for the least demanding ones.
Moreover, in the cloud the number of allocated resources can increase or decrease with demand (elasticity), and the monetary cost depends on the number of allocated resources; it is therefore worthwhile to set up a mechanism that automatically chooses the right moment to add or remove resources according to the load (auto-scaling). In this thesis, we focus on designing elastic resource-allocation methods for database query services in the cloud: (1) a static two-phase resource-allocation method that ensures a good compromise between the provider's profit and tenant satisfaction while guaranteeing a reasonable allocation cost, (2) an SLA-driven resource-reallocation method that limits the impact of estimation errors on the profit, and (3) an auto-scaling method based on reinforcement learning that addresses the specificities of database querying. To evaluate our contributions, we implemented our methods in a simulated cloud environment and compared them with state-of-the-art methods in terms of the monetary cost of query execution as well as the allocation cost.
Cloud computing has become a widely used means of querying databases. Today's cloud providers offer a variety of services implemented on parallel architectures. Performance targets, and the penalties incurred if they are violated, are established in advance in a contract called a Service-Level Agreement (SLA). The provider's goal is to maximize its profit while meeting the tenants' requirements. Before the advent of cloud systems, several studies considered the problem of resource allocation for database querying on parallel architectures. The execution plan of each query is a graph of dependent tasks. In these studies, the expression "resource allocation" usually covers both the placement of tasks on the available resources and their scheduling, which must respect the dependencies between tasks. The main goal was to minimize query execution time and maximize resource utilization. In the cloud, however, this goal does not necessarily yield the best economic outcome for the provider. To maximize profit and satisfy the tenants' requirements, the economic model and the SLAs must be included in the resource allocation process. Indeed, tenants' performance requirements differ, so it is worthwhile to allocate resources in a way that favors the most demanding tenants while still guaranteeing an acceptable quality of service for the least demanding ones. Moreover, in the cloud the number of allocated resources can grow or shrink with demand (elasticity), and the monetary cost depends on the number of allocated resources; it is therefore useful to set up a mechanism that automatically chooses the right moment to add or remove resources according to the load (auto-scaling). In this thesis, we focus on designing elastic resource allocation methods for database query services in the cloud.
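The economic objective described above — revenue from tenants, minus resource cost, minus penalties for violated SLAs — can be sketched as follows. This is a simplified illustration: the `SLA` fields, the linear penalty, and all rates are assumptions for the example, not the thesis's actual model.

```python
# Hypothetical provider profit model: revenue minus resource cost minus
# SLA violation penalties. All names and rates are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SLA:
    price: float         # fee the tenant pays for the query
    deadline: float      # agreed maximum response time (seconds)
    penalty_rate: float  # penalty per second of deadline violation

def provider_profit(slas, response_times, n_resources,
                    cost_per_resource_hour, duration_hours):
    """Profit = revenue - resource cost - penalties for violated SLAs."""
    revenue = sum(s.price for s in slas)
    # a penalty accrues only for the time by which the deadline is exceeded
    penalties = sum(
        s.penalty_rate * max(0.0, t - s.deadline)
        for s, t in zip(slas, response_times)
    )
    resource_cost = n_resources * cost_per_resource_hour * duration_hours
    return revenue - resource_cost - penalties
```

Under such a model, allocating more resources lowers response times (and thus penalties) but raises the resource cost, which is exactly the trade-off the allocation method must navigate.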
Our contributions are: (1) a static, two-phase resource allocation method that ensures a good trade-off between the provider's profit and tenant satisfaction while keeping the allocation cost reasonable; (2) an SLA-driven resource reallocation method that limits the impact of estimation errors on the profit; and (3) an auto-scaling method based on reinforcement learning that addresses the specific characteristics of database querying. To evaluate our contributions, we implemented our methods in a simulated cloud environment and compared them with state-of-the-art methods in terms of the monetary cost of query execution as well as the allocation cost.
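The reinforcement-learning auto-scaling idea in contribution (3) can be illustrated with a minimal Q-learning agent whose actions add or remove a node. The state discretization, reward shape, and hyperparameters below are illustrative assumptions, not the thesis's actual design.

```python
# Minimal Q-learning auto-scaler sketch. State is assumed to be a
# discretized pair (load level, node count); the reward would trade off
# resource cost against SLA penalties. All parameters are illustrative.
import random
from collections import defaultdict

ACTIONS = (-1, 0, 1)  # remove a node, keep the current size, add a node

class QLearningScaler:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q[(state, action)] -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        key = (state, action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```

At each control interval the scaler would observe the load, pick an action with `choose`, apply it, observe the resulting reward (e.g. negative cost plus avoided penalties), and call `update`, gradually learning when adding or removing resources pays off.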
APA, Harvard, Vancouver, ISO, and other citation styles
47

Stanik, Alexander [Verfasser], Odej [Akademischer Betreuer] Kao, Odej [Gutachter] Kao, Gero [Gutachter] Mühl und Matthias [Gutachter] Hovestadt. „Service level agreement mediation, negotiation and evaluation for cloud services in intercloud environments / Alexander Stanik ; Gutachter: Odej Kao, Gero Mühl, Matthias Hovestadt ; Betreuer: Odej Kao“. Berlin : Technische Universität Berlin, 2016. http://d-nb.info/1152969528/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
48

Ziegler, Wolfgang [Verfasser], Jens [Akademischer Betreuer] [Gutachter] Grabowski, Ramin [Gutachter] Yahyapour und Dieter [Gutachter] Kranzlmüller. „A Framework for managing Quality of Service in Cloud Computing through Service Level Agreements / Wolfgang Ziegler ; Gutachter: Ramin Yahyapour, Jens Grabowski, Dieter Kranzlmüller ; Betreuer: Jens Grabowski“. Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2017. http://d-nb.info/1123803358/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
49

Bawain, Abdullah Mohammed Ali [Verfasser], Anke [Akademischer Betreuer] Hildebrandt und Sabine [Akademischer Betreuer] Attinger. „Influence of vegetation on water fluxes at the ground level in a semi-arid cloud forest in Oman / Mohammed Ali Bawain. Gutachter: Anke Hildebrandt ; Sabine Attinger“. Jena : Thüringer Universitäts- und Landesbibliothek Jena, 2012. http://d-nb.info/1024080455/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
50

Yaqub, Edwin [Verfasser], Ramin [Akademischer Betreuer] Yahyapour, Jens [Akademischer Betreuer] Grabowski, Stephan [Akademischer Betreuer] Waack, Dieter [Akademischer Betreuer] Hogrefe, Carsten [Akademischer Betreuer] Damm und Konrad [Akademischer Betreuer] Rieck. „Generic Methods for Adaptive Management of Service Level Agreements in Cloud Computing / Edwin Yaqub. Betreuer: Ramin Yahyapour. Gutachter: Ramin Yahyapour ; Jens Grabowski ; Stephan Waack ; Dieter Hogrefe ; Carsten Damm ; Konrad Rieck“. Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2015. http://d-nb.info/107971796X/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles