Ready-made bibliography on the topic "CLOUD FRAMEWORK"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "CLOUD FRAMEWORK".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when these are available in the work's metadata.

Dissertations on the topic "CLOUD FRAMEWORK"

1

Falk, Matthew D. "Cryptographic cloud storage framework." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85417.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (page 59). The cloud presents cheap and convenient ways to create shared remote repositories. One concern when creating systems that provide security is whether the system will be able to remain secure when new attacks are developed. As tools and techniques for breaking security systems advance, new ideas are required to provide the security guarantees that may have been exploited. This project presents a framework which can handle the ever-growing need for new security defenses. This thesis describes the Key Derivation Module that I have constructed, including many new Key Derivation Functions, that is used in our system. by Matthew D. Falk. M. Eng.
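The thesis's Key Derivation Module is not reproduced in the abstract, but a minimal Python sketch of one standard key derivation function illustrates the kind of building block such a module would compose; the algorithm, parameter values, and function name below are illustrative assumptions, not the thesis's design.

```python
import hashlib
import os

def derive_key(password: bytes, salt: bytes | None = None,
               iterations: int = 600_000, length: int = 32) -> tuple[bytes, bytes]:
    """Derive a fixed-length key from a password via PBKDF2-HMAC-SHA256.

    A Key Derivation Module in the thesis's sense would expose many such
    functions behind one interface; this sketch shows a single standard KDF.
    """
    if salt is None:
        salt = os.urandom(16)  # fresh random salt for each new derivation
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=length)
    return key, salt

key, salt = derive_key(b"example passphrase")  # store the salt, never the password
```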
APA, Harvard, Vancouver, ISO, and other styles
2

Rodrigues, Thiago Gomes. "Cloudacc: a cloud-based accountability framework for federated cloud." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18590.

Full text of the source
Abstract:
The evolution of software service delivery has changed the way accountability is performed. The complexity of cloud computing environments increases the difficulty of performing accountability properly, since the evidence is spread across the whole infrastructure, from different servers, in the physical, virtualization, and application layers. This complexity increases when cloud federation is considered because, besides the inherent complexity of the virtualized environment, the federation members may not implement the same security procedures and policies. The main objective of this thesis is to propose an accountability framework named CloudAcc that supports audit, management, planning, and billing processes in federated cloud environments, increasing trust and transparency. Furthermore, CloudAcc considers the legal safeguard requirements presented in the Brazilian Marco Civil da Internet. We confirmed CloudAcc's effectiveness when some infrastructure elements were subjected to Denial of Service (DoS) and brute-force attacks, and our framework was able to detect them. Given the results obtained, we can conclude that CloudAcc contributes to the state of the art, since it provides a holistic vision of the federated cloud environment through evidence collection across the three layers, supporting audit, management, planning, and billing processes in federated cloud environments.
APA, Harvard, Vancouver, ISO, and other styles
3

Aldakheel, Eman A. "A Cloud Computing Framework for Computer Science Education." Bowling Green State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1322873621.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Falk, Sebastian, and Andriy Shyshka. "The Cloud Marketplace: A Capability-Based Framework for Cloud Ecosystem Governance." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Informatik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-23968.

Full text of the source
Abstract:
Within the last five years, the market for cloud computing has shown rapid growth. However, despite this increasing popularity, researchers highlight numerous concerns regarding the limited interoperability of systems hosted by different cloud providers as well as the restricted customization of cloud solutions. To counter the aforementioned challenges, this study investigates the idea of introducing a marketplace for cloud services that leverages the service-oriented architecture (SOA) paradigm and offers software solutions, computing capabilities from cloud providers, components developed by third parties, as well as access to integration and audit services. The goal of the study lies in conceptualizing the idea and evaluating the demand it may raise from the key cloud actors. In this regard, existing frameworks of cloud computing and SOA contributed to the development of an initial model that was further improved through the interviewing process. The results of this study include a capability-based framework for the cloud marketplace which not only clarifies the roles and activities of the different actors but also contains the necessary features of the marketplace that are needed to ensure the proper workflow. In addition, the actors' incentives and concerns regarding the marketplace were analyzed by applying SWOT analysis. While the analysis revealed both positive interest and present demand among the actors, the identified weaknesses and threats highlight the need for further investigation in order to put the idea into practice.
APA, Harvard, Vancouver, ISO, and other styles
5

Jallow, Alieu. "CLOUD-METRIC: A Cost Effective Application Development Framework for Cloud Infrastructures." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-300681.

Full text of the source
Abstract:
The classic application development model primarily focuses on two key objectives: scalable system architecture and the best possible performance. This model of application development works well on private resources, but with the growing amount of public IaaS it is essential to find a balance between the cost and the performance of an application. In this thesis, we propose CLOUD-METRIC: A Cost Effective Application Development Framework for Cloud Infrastructures. The framework allows users to estimate the cost of running applications on public cloud infrastructures during the development phase. We consider two major cloud service providers, Amazon AWS and Google Cloud Platform. The provided estimates can be very useful for making improvements to the application architecture. In addition to cost estimation, the framework allows users to monitor the resources utilized by their applications. Finally, we provide users with recommendations for instances on AWS and GCP based on the resources utilized by their applications over a period of time.
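As a rough illustration of the kind of estimate such a framework can produce, the sketch below compares monthly costs as hourly price × instance count × hours; the prices and instance names are placeholder assumptions (real figures would come from the providers' pricing data), not CLOUD-METRIC's implementation.

```python
# Hypothetical per-hour prices; real values would come from provider pricing data.
PRICE_PER_HOUR = {
    ("aws", "m5.large"): 0.096,
    ("gcp", "n2-standard-2"): 0.097,
}

def estimate_monthly_cost(provider: str, instance: str, count: int,
                          hours_per_month: float = 730.0) -> float:
    """Estimate the monthly cost of running `count` instances around the clock."""
    return PRICE_PER_HOUR[(provider, instance)] * count * hours_per_month

for provider, instance in PRICE_PER_HOUR:
    cost = estimate_monthly_cost(provider, instance, count=3)
    print(f"{provider} {instance}: ${cost:,.2f}/month")
```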
APA, Harvard, Vancouver, ISO, and other styles
6

Khan, Syeduzzaman. "A PROBABILISTIC MACHINE LEARNING FRAMEWORK FOR CLOUD RESOURCE SELECTION ON THE CLOUD." Scholarly Commons, 2020. https://scholarlycommons.pacific.edu/uop_etds/3720.

Full text of the source
Abstract:
The execution of scientific applications on the Cloud comes with great flexibility, scalability, cost-effectiveness, and substantial computing power. Market-leading Cloud service providers such as Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP) offer various general-purpose, memory-intensive, and compute-intensive Cloud instances for the execution of scientific applications. The scientific community, especially small research institutions and undergraduate universities, faces many hurdles while conducting high-performance computing research in the absence of large dedicated clusters. The Cloud provides a lucrative alternative to dedicated clusters; however, the wide range of Cloud computing choices makes instance selection difficult for end-users. This thesis aims to simplify Cloud instance selection for end-users by proposing a probabilistic machine learning framework that allows users to select a suitable Cloud instance for their scientific applications. This research builds on the previously proposed A2Cloud-RF framework, which recommends high-performing Cloud instances by profiling the application and the selected Cloud instances. The framework produces a set of objective scores called the A2Cloud scores, which denote the compatibility level between the application and the selected Cloud instances. When used alone, the A2Cloud scores become increasingly unwieldy as the number of tested Cloud instances grows. Additionally, the framework only examines raw application performance and does not consider the execution cost to guide resource selection. To improve the usability of the framework and assist with economical instance selection, this research adds two Naïve Bayes (NB) classifiers that consider both the application's performance and its execution cost: 1) NB with a Random Forest Classifier (RFC) and 2) a standalone NB module. NB with an RFC augments the A2Cloud-RF framework's final instance ratings with the execution cost metric; in the training phase, the classifier builds the frequency and probability tables, and it recommends a Cloud instance based on the highest posterior probability for the selected application. The standalone NB classifier uses the generated A2Cloud score (an intermediate result from the A2Cloud-RF framework) and the execution cost metric to construct an NB classifier, forming a frequency table and probability (prior and likelihood) tables. To recommend a Cloud instance for a test application, the classifier calculates the posterior probability for each Cloud instance and recommends the instance with the highest posterior probability. This study executes eight real-world applications on 20 Cloud instances from AWS, Azure, GCP, and Linode. We train the NB classifiers using 80% of this dataset and employ the remaining 20% for testing. The testing yields more than 90% recommendation accuracy for the chosen applications and Cloud instances. Because of the imbalanced nature of the dataset and the multi-class nature of the classification, we use the confusion matrix (true positives, false positives, true negatives, and false negatives) and an F1 score above 0.9 to describe the model performance. The final goal of this research is to make Cloud computing an accessible resource for conducting high-performance scientific executions by enabling users to select an effective Cloud instance across multiple providers.
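A minimal sketch of the standalone NB module's recommendation step: build frequency tables from discretized features (here an assumed A2Cloud-score bucket and a cost bucket), then pick the instance with the highest posterior probability. The feature encoding and names are assumptions for illustration, not the dissertation's implementation.

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """Train a discrete Naive Bayes model.

    samples: list of (features, label) pairs, where features is a tuple of
    discretized values (e.g. an A2Cloud-score bucket and a cost bucket) and
    label is a Cloud instance name.
    """
    priors = Counter(label for _, label in samples)
    counts = defaultdict(Counter)   # (label, feature index) -> value frequencies
    values = defaultdict(set)       # feature index -> all values seen in training
    for features, label in samples:
        for i, v in enumerate(features):
            counts[(label, i)][v] += 1
            values[i].add(v)
    return priors, counts, values, len(samples)

def recommend(features, priors, counts, values, n, alpha=1.0):
    """Return the instance label with the highest (log-)posterior probability."""
    best, best_logp = None, float("-inf")
    for label, label_count in priors.items():
        logp = math.log(label_count / n)                 # log prior
        for i, v in enumerate(features):                 # + log likelihoods
            num = counts[(label, i)][v] + alpha          # Laplace smoothing
            den = label_count + alpha * len(values[i])
            logp += math.log(num / den)
        if logp > best_logp:
            best, best_logp = label, logp
    return best

samples = [(("high", "low_cost"), "t3.medium"),
           (("high", "low_cost"), "t3.medium"),
           (("low", "high_cost"), "n2-standard-4")]
print(recommend(("high", "low_cost"), *train_nb(samples)))  # -> t3.medium
```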
APA, Harvard, Vancouver, ISO, and other styles
7

Mengistu, Tessema Mindaye. "RESOURCE MANAGEMENT FRAMEWORK FOR VOLUNTEER CLOUD COMPUTING." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1613.

Full text of the source
Abstract:
The need for high computing resources is on the rise, despite the exponential increase in the computing capacity of workstations, the proliferation of mobile devices, and the omnipresence of data centers with massive server farms that house tens (if not hundreds) of thousands of powerful servers. This is mainly due to the unprecedented increase in the number of Internet users worldwide and the Internet of Things (IoT). So far, Cloud Computing has been providing the necessary computing infrastructures for applications, including IoT applications. However, the current cloud infrastructures, which are based on dedicated data centers, are expensive to set up; running the infrastructure needs expertise, a lot of electrical power for cooling the facilities, and a redundant supply of everything in a data center to provide the desired resilience. Moreover, the current centralized cloud infrastructures will not suffice for IoT's network-intensive applications with very fast response requirements. Alternative cloud computing models that depend on the spare resources of volunteer computers are emerging, including volunteer cloud computing, in addition to the conventional data-center-based clouds. These alternative cloud models have one characteristic in common: they do not rely on dedicated data centers to provide the cloud services. Volunteer clouds are opportunistic cloud systems that run over donated spare resources of volunteer computers. On the one hand, volunteer clouds claim numerous outstanding advantages: affordability, on-premise operation, self-provisioning, greener computing (owing to the consolidated use of existing computers), etc. On the other hand, a full-fledged implementation of volunteer cloud computing raises unique technical and research challenges: management of highly dynamic and heterogeneous compute resources, Quality of Service (QoS) assurance, meeting Service Level Agreements (SLAs), reliability, and security/trust, all of which are made more difficult by the high dynamics and heterogeneity of the non-dedicated cloud hosts. This dissertation investigates the resource management aspect of volunteer cloud computing. Due to the intermittent availability and heterogeneity of the computing resources involved, resource management is one of the challenging tasks in volunteer cloud computing. The dissertation focuses specifically on the Resource Discovery and VM Placement tasks of resource management. The resource base on which volunteer cloud computing depends is the scavenged, sporadically available, aggregate computing power of individual volunteer computers. Delivering reliable cloud services over these unreliable nodes is a big challenge in volunteer cloud computing, and the fault tolerance of the whole system rests on the reliability and availability of the infrastructure base. This dissertation discusses the modelling of fault-tolerant, prediction-based resource discovery in volunteer cloud computing. It presents a multi-state semi-Markov-process-based model to predict the future availability and reliability of nodes in volunteer cloud systems. A volunteer node is modelled as a semi-Markov process whose future state depends only on its current state. This exactly matches a key observation made in analyzing traces of personal computers in enterprises: daily patterns of resource availability are comparable to those of the most recent days.
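A minimal sketch of the prediction idea, assuming a small set of node states and keeping only the embedded Markov chain (a full semi-Markov model would additionally fit a holding-time distribution per state):

```python
from collections import Counter, defaultdict

# Assumed node states for illustration: "available", "user_busy", "off".

def fit_transitions(trace):
    """Estimate P(next state | current state) from an observed state trace."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(trace, trace[1:]):
        counts[cur][nxt] += 1
    return {s: {t: c / sum(cnt.values()) for t, c in cnt.items()}
            for s, cnt in counts.items()}

def availability_prob(transitions, state, steps=1):
    """Probability that a node is 'available' after `steps` transitions."""
    dist = {state: 1.0}
    for _ in range(steps):
        nxt = defaultdict(float)
        for s, p in dist.items():
            for t, q in transitions.get(s, {}).items():
                nxt[t] += p * q
        dist = nxt
    return dist.get("available", 0.0)

trace = ["available", "available", "user_busy", "available", "off", "available"]
print(availability_prob(fit_transitions(trace), "available", steps=2))
```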
The dissertation illustrates, with empirical evidence, how prediction-based resource discovery enables volunteer cloud systems to provide reliable cloud services over unreliable and non-dedicated volunteer hosts. VM placement algorithms play a crucial role in Cloud Computing in fulfilling its characteristics and achieving its objectives. In general, VM placement is a challenging problem that has been extensively studied in the conventional Cloud Computing context. Due to its divergent characteristics, volunteer cloud computing needs novel and unique ways of solving the existing Cloud Computing problems, including VM placement. The intermittent availability of nodes, unreliable infrastructure, and resource-constrained nodes are some of the characteristics of volunteer cloud computing that make the VM placement problem more complicated. In this dissertation, we model the VM placement problem as a Bounded 0-1 Multi-Dimensional Knapsack Problem. As this is a known NP-hard problem, the dissertation discusses heuristic-based algorithms that take the typical characteristics of volunteer cloud computing into consideration to solve the VM placement problem formulated as a knapsack problem. Three algorithms are developed to meet the objectives and constraints specific to volunteer cloud computing. The algorithms were tested on a real volunteer cloud computing test-bed and showed good performance results with respect to their optimization objectives. The dissertation also presents the design and implementation of a real volunteer cloud computing system, cuCloud, that bases its resource infrastructure on the donated computing resources of member computers. The need for the development of cuCloud stems from the lack of an experimentation platform, real or simulated, that specifically targets volunteer cloud computing. cuCloud is a system that can be called a genuine volunteer cloud computing system, manifesting the concept of "Volunteer Computing as a Service" (VCaaS), with particular significance for edge computing and related applications. In the course of this dissertation, empirical evaluations show that volunteer clouds can be used to execute a range of applications reliably and efficiently. Moreover, the physical proximity of volunteer nodes to where applications originate, at the edge of the network, helps reduce the round-trip latency of applications. However, the overall computing capability of volunteer clouds will not suffice to handle highly resource-intensive applications by itself. Based on these observations, the dissertation also proposes, as future work, the use of volunteer clouds as a resource fabric in the emerging Edge Computing paradigm.
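The three placement algorithms are not given in the abstract; the sketch below shows one plausible greedy heuristic for the multi-dimensional knapsack formulation, checking every resource dimension and preferring more reliable volunteer hosts. The data layout and the reliability-first ordering are assumptions for illustration, not the dissertation's algorithms.

```python
def place_vms(vms, hosts):
    """Greedy first-fit sketch for multi-dimensional VM placement.

    vms:   list of (vm_id, demand) where demand maps a resource dimension
           (e.g. "cpu", "mem") to the amount requested
    hosts: list of (host_id, capacity, reliability); reliability could come
           from an availability predictor such as the one sketched above
    Returns a {vm_id: host_id} mapping; VMs that do not fit are omitted.
    """
    hosts = sorted(hosts, key=lambda h: h[2], reverse=True)  # most reliable first
    free = {host_id: dict(capacity) for host_id, capacity, _ in hosts}
    placement = {}
    for vm_id, demand in sorted(vms, key=lambda v: -sum(v[1].values())):
        for host_id, _, _ in hosts:
            if all(free[host_id][dim] >= need for dim, need in demand.items()):
                for dim, need in demand.items():
                    free[host_id][dim] -= need  # reserve the host's resources
                placement[vm_id] = host_id
                break
    return placement

vms = [("vm1", {"cpu": 2, "mem": 4}), ("vm2", {"cpu": 1, "mem": 2})]
hosts = [("node-a", {"cpu": 2, "mem": 8}, 0.95), ("node-b", {"cpu": 4, "mem": 4}, 0.80)]
print(place_vms(vms, hosts))  # {'vm1': 'node-a', 'vm2': 'node-b'}
```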
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Amy (Amy X.). "A functional flow framework for cloud computing." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77453.

Full text of the source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 53). This thesis covers a basic framework for calculating the maximum computation rate of a set of functions over a network. These functions are broken down into a series of computations, which are distributed among the nodes of the network, with the output sent to the terminal node. We analyze two models with different types of computation costs: a linear computation cost model and a maximum computation cost model. We show how the distribution of computation through the given network changes with different types of computation and communication limitations. This framework can also be used in cloud design, where a network of given complexity is designed to maximize the computation rate for a given set of functions. We provide a greedy algorithm as one solution to this problem, create simulations for each framework, and analyze the results. by Amy Zhang. M.Eng.
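As one illustrative reading of the linear-cost model, the sketch below computes the bottleneck rate achievable for a fixed assignment of computations to nodes: each node and link divides its capacity by the load one unit of rate places on it. This formulation is an assumption based on the abstract, not the thesis's exact model.

```python
def achievable_rate(node_cost, node_cap, link_load, link_cap):
    """Bottleneck computation rate for one fixed assignment.

    node_cost[n]: computation placed on node n per unit of rate
    node_cap[n]:  node n's computation capacity
    link_load[e]: traffic carried by link e per unit of rate
    link_cap[e]:  link e's bandwidth
    """
    limits = [node_cap[n] / c for n, c in node_cost.items() if c > 0]
    limits += [link_cap[e] / l for e, l in link_load.items() if l > 0]
    return min(limits) if limits else float("inf")

# A path s -> v -> t where node v performs all computation at unit cost:
rate = achievable_rate(node_cost={"v": 1.0}, node_cap={"v": 4.0},
                       link_load={("s", "v"): 1.0, ("v", "t"): 1.0},
                       link_cap={("s", "v"): 3.0, ("v", "t"): 5.0})
print(rate)  # 3.0: the s->v link is the bottleneck, not v's compute capacity
```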
APA, Harvard, Vancouver, ISO, and other styles
9

Chaudhry, Nauman Riaz. "Workflow framework for cloud-based distributed simulation." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/14778.

Full text of the source
Abstract:
Although distributed simulation (DS) using parallel computing has received considerable research and development in a number of compute-intensive fields, it has still to be significantly adopted by the wider simulation community. According to scientific literature, major reasons for low adoption of cloud-based services for DS execution are the perceived complexities of understanding and managing the underlying architecture and software for deploying DS models, as well as the remaining challenges in performance and interoperability of cloud-based DS. The focus of this study, therefore, has been to design and test the feasibility of a well-integrated, generic, workflow structured framework that is universal in character and transparent in implementation. The choice of a workflow framework for implementing cloud-based DS was influenced by the ability of scientific workflow management systems to define, execute, and actively manage computing workflows. As a result of this study, a hybrid workflow framework, combined with four cloud-based implementation services, has been used to develop an integrated potential standard for workflow implementation of cloud-based DS, which has been named the WORLDS framework (Workflow Framework for Cloud-based Distributed Simulation). The main contribution of this research study is the WORLDS framework itself, which identifies five services (including a Parametric Study Service) that can potentially be provided through the use of workflow technologies to deliver effective cloud-based distributed simulation that is transparently provisioned for the user. This takes DS a significant step closer to its provision as a viable cloud-based service (DSaaS). In addition, the study introduces a simple workflow solution to applying parametric studies to distributed simulations. Further research to confirm the generic nature of the workflow framework, to apply and test modified HLA standards, and to introduce a simulation analytics function by modifying the workflow is anticipated.
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Min. "A resource management framework for cloud computing." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/47804.

Full text of the source
Abstract:
The cloud computing paradigm is realized through large-scale distributed resource management and computation platforms such as MapReduce, Hadoop, Dryad, and Pregel. These platforms enable the quick and efficient development of a large range of applications that can be sustained at scale in a fault-tolerant fashion. Two key technologies, namely resource virtualization and feature-rich enterprise storage, are further driving the widespread adoption of virtualized cloud environments. Many challenges arise when designing resource management techniques for both native and virtualized data centers. First, parameter tuning of MapReduce jobs for efficient resource utilization is a daunting and time-consuming task. Second, while the MapReduce model is designed for and leverages information from native clusters to operate efficiently, the emergence of virtual cluster topologies results in overlaying or hiding the actual network information. This leads to two resource selection and placement anomalies: (i) loss of data locality, and (ii) loss of job locality. Consequently, jobs may be placed physically far from their associated data or related jobs, which adversely affects overall performance. Finally, the extant resource provisioning approach leads to significant wastage, as enterprise cloud providers have to consider and provision for peak loads instead of the average load (which is many times lower). In this dissertation, we design and develop a resource management framework to address the above challenges. We first design an innovative resource scheduler, CAM, aimed at MapReduce applications running in virtualized cloud environments. CAM reconciles both data and VM resource allocation with a variety of competing constraints, such as storage utilization, changing CPU load, and network link capacities, based on a flow-network algorithm. Additionally, our platform exposes the typically hidden lower-level topology information to the MapReduce job scheduler, which enables it to make optimal task assignments. Second, we design an online performance tuning system, mrOnline, which monitors MapReduce job execution, tunes the parameters based on collected statistics, and provides fine-grained control over parameter configuration changes to the user. To this end, we employ a gray-box-based smart hill-climbing algorithm that leverages MapReduce runtime statistics and effectively converges to a desirable configuration within a single iteration. Finally, we target enterprise applications in virtualized environments where a network-attached centralized storage system is typically deployed. We design a new protocol to share the primary data de-duplication information available at the storage server with the client. This enables better client-side cache utilization and reduces server-client network traffic, which leads to overall high performance. Based on the protocol, a workload-aware VM management strategy is further introduced to decrease the load on the storage server and enhance I/O efficiency for clients. Ph. D.
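A minimal sketch of the client-side idea behind the de-duplication protocol: if the server shares each block's de-duplication fingerprint (a content hash), the client can satisfy reads of duplicate blocks from a single cached copy instead of refetching them. The class and interface below are hypothetical, not the dissertation's protocol.

```python
class DedupAwareCache:
    """Client cache keyed by the server-provided content hash of each block."""

    def __init__(self, fetch_block):
        self._fetch = fetch_block           # callable: content hash -> block bytes
        self._blocks = {}                   # content hash -> cached block data

    def read(self, block_hash: str) -> bytes:
        if block_hash not in self._blocks:  # miss: one server round trip
            self._blocks[block_hash] = self._fetch(block_hash)
        return self._blocks[block_hash]     # duplicate blocks share one entry

server_store = {"h1": b"block bytes"}       # toy stand-in for the storage server
cache = DedupAwareCache(server_store.__getitem__)
cache.read("h1")                            # fetched from the "server"
cache.read("h1")                            # served from the client cache
```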
APA, Harvard, Vancouver, ISO, and other styles
More sources
