Doctoral dissertations on the topic "CLOUD FRAMEWORK"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles.

Check out the top 50 doctoral dissertations on the topic "CLOUD FRAMEWORK".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if the relevant details are available in the record's metadata.

Browse doctoral dissertations from a wide range of disciplines and compile appropriate bibliographies.

1

Falk, Matthew D. "Cryptographic cloud storage framework". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85417.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 59).
The cloud presents cheap and convenient ways to create shared remote repositories. One concern when creating systems that provide security is whether the system will be able to remain secure when new attacks are developed. As tools and techniques for breaking security systems advance, new ideas are required to provide the security guarantees that may have been exploited. This project presents a framework which can handle the ever-growing need for new security defenses. This thesis describes the Key Derivation Module that I have constructed, including many new Key Derivation Functions, that is used in our system.
by Matthew D. Falk.
M. Eng.
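The Key Derivation Module described in the abstract composes key derivation functions to produce per-resource encryption keys. As a purely illustrative sketch (the thesis's actual functions are not reproduced here), the following Python snippet derives a per-file key from a master secret and a file identifier using the standard-library PBKDF2 primitive; the iteration count, salt handling, and naming are assumptions.

import hashlib
import os

def derive_file_key(master_secret: bytes, file_id: str, salt: bytes, length: int = 32) -> bytes:
    """Derive a per-file key from a master secret and a file identifier.

    A generic illustration of a key derivation function (KDF); the iteration
    count and salt handling are placeholder assumptions, not thesis values.
    """
    # Bind the derived key to the specific file by mixing the identifier
    # into the salt before running PBKDF2-HMAC-SHA256.
    context = salt + file_id.encode("utf-8")
    return hashlib.pbkdf2_hmac("sha256", master_secret, context, 100_000, dklen=length)

# Example usage with made-up inputs.
master = os.urandom(32)
salt = os.urandom(16)
key = derive_file_key(master, "reports/2013/q1.pdf", salt)
print(key.hex())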
APA, Harvard, Vancouver, ISO, and other styles
2

RODRIGUES, Thiago Gomes. "Cloudacc: a cloud-based accountability framework for federated cloud". Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18590.

Full text of the source
Abstract:
The evolution of software service delivery has changed the way accountability is performed. The complexity of cloud computing environments increases the difficulty of properly performing accountability, since the evidence is spread across the whole infrastructure, on different servers, in the physical, virtualization and application layers. This complexity increases further when cloud federation is considered because, besides the inherent complexity of the virtualized environment, the federation members may not implement the same security procedures and policies. The main objective of this thesis is to propose an accountability framework named CloudAcc that supports the audit, management, planning and billing processes in federated cloud environments, increasing trust and transparency. Furthermore, CloudAcc considers the legal safeguard requirements presented in the Brazilian Marco Civil da Internet. We confirmed CloudAcc's effectiveness by subjecting some infrastructure elements to Denial of Service (DoS) and Brute Force attacks, which our framework was able to detect. Given the results obtained, we can conclude that CloudAcc contributes to the state of the art, since it provides a holistic view of the federated cloud environment through evidence collection across the three layers, supporting the audit, management, planning and billing processes in federated cloud environments.
APA, Harvard, Vancouver, ISO, and other styles
3

Aldakheel, Eman A. "A Cloud Computing Framework for Computer Science Education". Bowling Green State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1322873621.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Falk, Sebastian, and Andriy Shyshka. "The Cloud Marketplace : A Capability-Based Framework for Cloud Ecosystem Governance". Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Informatik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-23968.

Full text of the source
Abstract:
Within the last five years, the market for cloud computing has shown rapid growth. However, despite this increasing popularity, researchers highlight numerous concerns regarding the limited interoperability of systems hosted by different cloud providers as well as the restricted customization of cloud solutions. In order to counter the aforementioned challenges, this study investigates the idea of introducing a marketplace for cloud services that leverages the service-oriented architecture (SOA) paradigm and offers software solutions, computing capabilities from cloud providers, components developed by third parties, as well as access to integration and audit services. The goal of the study lies in conceptualizing the idea and evaluating the demand it may raise from the key cloud actors. In this regard, existing frameworks of cloud computing and SOA contributed to the development of an initial model that was further improved through the interviewing process. The results of this study include a capability-based framework for the cloud marketplace which not only clarifies the roles and activities of the different actors but also contains the necessary features of the marketplace that are needed to ensure a proper workflow. In addition to that, the actors' incentives and concerns regarding the marketplace were analyzed by applying a SWOT analysis. While the analysis revealed both positive interest and present demand among the actors, the identified weaknesses and threats highlight the need for further investigation in order to put the idea into practice.
APA, Harvard, Vancouver, ISO, and other styles
5

Jallow, Alieu. "CLOUD-METRIC: A Cost Effective Application Development Framework for Cloud Infrastructures". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-300681.

Full text of the source
Abstract:
The classic application development model primarily focuses on two key objectives: scalable system architecture and the best possible performance. This model of application development works well on private resources, but with the growing amount of public IaaS it is essential to find a balance between the cost and the performance of an application. In this thesis, we propose CLOUD-METRIC: A Cost Effective Application Development Framework for Cloud Infrastructures. The framework allows users to estimate the cost of running applications on public cloud infrastructures during the development phase. We consider two major cloud service providers, Amazon AWS and Google Cloud Platform. The provided estimates can be very useful for making improvements to the users' application architecture. In addition to cost estimation, the framework allows users to monitor the resources utilized by their applications. Finally, we provide users with recommendations of instances on AWS and GCP based on the resources utilized by their applications over a period of time.
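To make the idea of development-time cost estimation concrete, here is a minimal sketch of the kind of calculation such a framework could perform: multiplying monitored usage hours by per-hour instance prices and comparing providers. The price table, instance names, and function are placeholder assumptions, not CLOUD-METRIC's actual data or API.

# Hypothetical hourly prices (USD); real prices would be fetched from the providers.
HOURLY_PRICE = {
    ("aws", "m4.large"): 0.10,
    ("gcp", "n1-standard-2"): 0.095,
}

def estimate_monthly_cost(provider: str, instance_type: str,
                          instances: int, hours_per_day: float) -> float:
    """Estimate a monthly bill from instance count and daily usage hours."""
    price = HOURLY_PRICE[(provider, instance_type)]
    return price * instances * hours_per_day * 30

# Compare the same deployment on two providers during the development phase.
for provider, itype in HOURLY_PRICE:
    cost = estimate_monthly_cost(provider, itype, instances=4, hours_per_day=8)
    print(f"{provider}/{itype}: ~${cost:.2f} per month")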
APA, Harvard, Vancouver, ISO, and other styles
6

Khan, Syeduzzaman. "A PROBABILISTIC MACHINE LEARNING FRAMEWORK FOR CLOUD RESOURCE SELECTION ON THE CLOUD". Scholarly Commons, 2020. https://scholarlycommons.pacific.edu/uop_etds/3720.

Full text of the source
Abstract:
The execution of scientific applications on the Cloud comes with great flexibility, scalability, cost-effectiveness, and substantial computing power. Market-leading Cloud service providers such as Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP) offer various general-purpose, memory-intensive, and compute-intensive Cloud instances for the execution of scientific applications. The scientific community, especially small research institutions and undergraduate universities, faces many hurdles while conducting high-performance computing research in the absence of large dedicated clusters. The Cloud provides a lucrative alternative to dedicated clusters; however, the wide range of Cloud computing choices makes instance selection difficult for end-users. This thesis aims to simplify Cloud instance selection for end-users by proposing a probabilistic machine learning framework that allows users to select a suitable Cloud instance for their scientific applications. This research builds on the previously proposed A2Cloud-RF framework that recommends high-performing Cloud instances by profiling the application and the selected Cloud instances. The framework produces a set of objective scores called the A2Cloud scores, which denote the compatibility level between the application and the selected Cloud instances. When used alone, the A2Cloud scores become increasingly unwieldy with an increasing number of tested Cloud instances. Additionally, the framework only examines the raw application performance and does not consider the execution cost to guide resource selection. To improve the usability of the framework and assist with economical instance selection, this research adds two Naïve Bayes (NB) classifiers that consider both the application's performance and execution cost. These NB classifiers include: 1) NB with a Random Forest Classifier (RFC) and 2) a standalone NB module. Naïve Bayes with a Random Forest Classifier (RFC) augments the A2Cloud-RF framework's final instance ratings with the execution cost metric. In the training phase, the classifier builds the frequency and probability tables. The classifier recommends a Cloud instance based on the highest posterior probability for the selected application. The standalone NB classifier uses the generated A2Cloud score (an intermediate result from the A2Cloud-RF framework) and the execution cost metric to construct an NB classifier. The NB classifier forms a frequency table and probability (prior and likelihood) tables. To recommend a Cloud instance for a test application, the classifier calculates the posterior probability for each of the Cloud instances and recommends the instance with the highest posterior probability. This study performs the execution of eight real-world applications on 20 Cloud instances from AWS, Azure, GCP, and Linode. We train the NB classifiers using 80% of this dataset and employ the remaining 20% for testing. The testing yields more than 90% recommendation accuracy for the chosen applications and Cloud instances. Because of the imbalanced nature of the dataset and the multi-class nature of the classification, we consider the confusion matrix (true positives, false positives, true negatives, and false negatives) and F1 scores above 0.9 to describe the model performance. The final goal of this research is to make Cloud computing an accessible resource for conducting high-performance scientific executions by enabling users to select an effective Cloud instance from across multiple providers.
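The standalone NB module described above recommends the instance with the highest posterior probability given an application's A2Cloud score and execution cost. The sketch below shows the general shape of that computation using Gaussian likelihoods; the training data, feature values, and instance labels are invented for illustration and do not come from the thesis.

import math
from collections import defaultdict

# Toy training data: (A2Cloud score, cost per hour) -> best instance label.
# These numbers are illustrative placeholders, not results from the thesis.
train = [
    ((8.2, 0.10), "aws.c5.xlarge"),
    ((7.9, 0.11), "aws.c5.xlarge"),
    ((5.1, 0.05), "gcp.n1-standard-2"),
    ((4.8, 0.06), "gcp.n1-standard-2"),
]

def fit(samples):
    """Compute per-class priors and per-feature mean/variance (Gaussian NB)."""
    by_class = defaultdict(list)
    for x, y in samples:
        by_class[y].append(x)
    model = {}
    for label, rows in by_class.items():
        prior = len(rows) / len(samples)
        stats = []
        for j in range(len(rows[0])):
            vals = [r[j] for r in rows]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals) + 1e-6
            stats.append((mean, var))
        model[label] = (prior, stats)
    return model

def predict(model, x):
    """Return the class with the highest (log) posterior probability."""
    def log_posterior(prior, stats):
        lp = math.log(prior)
        for v, (mean, var) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mean) ** 2 / (2 * var)
        return lp
    return max(model, key=lambda lbl: log_posterior(*model[lbl]))

model = fit(train)
print(predict(model, (7.5, 0.09)))   # -> "aws.c5.xlarge" for this toy data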
APA, Harvard, Vancouver, ISO, and other styles
7

Mengistu, Tessema Mindaye. "RESOURCE MANAGEMENT FRAMEWORK FOR VOLUNTEER CLOUD COMPUTING". OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1613.

Full text of the source
Abstract:
The need for high computing resources is on the rise, despite the exponential increase in the computing capacity of workstations, the proliferation of mobile devices, and the omnipresence of data centers with massive server farms that house tens (if not hundreds) of thousands of powerful servers. This is mainly due to the unprecedented increase in the number of Internet users worldwide and the Internet of Things (IoT). So far, Cloud Computing has been providing the necessary computing infrastructures for applications, including IoT applications. However, the current cloud infrastructures that are based on dedicated datacenters are expensive to set up; running the infrastructure requires expertise, a great deal of electrical power for cooling the facilities, and redundant supplies of everything in a data center to provide the desired resilience. Moreover, the current centralized cloud infrastructures will not suffice for IoT's network-intensive applications with very fast response requirements. Alternative cloud computing models that depend on the spare resources of volunteer computers are emerging, including volunteer cloud computing, in addition to the conventional data center based clouds. These alternative cloud models have one characteristic in common: they do not rely on dedicated data centers to provide the cloud services. Volunteer clouds are opportunistic cloud systems that run over the donated spare resources of volunteer computers. On the one hand, volunteer clouds claim numerous outstanding advantages: affordability, on-premise operation, self-provisioning, greener computing (owing to the consolidated use of existing computers), etc. On the other hand, a full-fledged implementation of volunteer cloud computing raises unique technical and research challenges: management of highly dynamic and heterogeneous compute resources, Quality of Service (QoS) assurance, meeting Service Level Agreements (SLAs), reliability, and security/trust, which are all made more difficult by the high dynamics and heterogeneity of the non-dedicated cloud hosts. This dissertation investigates the resource management aspect of volunteer cloud computing. Due to the intermittent availability and heterogeneity of the computing resources involved, resource management is one of the most challenging tasks in volunteer cloud computing. The dissertation specifically focuses on the Resource Discovery and VM Placement tasks of resource management. The resource base on which volunteer cloud computing depends is the scavenged, sporadically available, aggregate computing power of individual volunteer computers. Delivering reliable cloud services over these unreliable nodes is a big challenge in volunteer cloud computing. The fault tolerance of the whole system rests on the reliability and availability of the infrastructure base. This dissertation discusses the modelling of fault-tolerant, prediction-based resource discovery in volunteer cloud computing. It presents a multi-state semi-Markov process based model to predict the future availability and reliability of nodes in volunteer cloud systems. A volunteer node is modelled as a semi-Markov process whose future state depends only on its current state. This matches a key observation made in analyzing the traces of personal computers in enterprises: the daily patterns of resource availability are comparable to those of the most recent days.
The dissertation illustrates, with empirical evidence, how prediction-based resource discovery enables volunteer cloud systems to provide reliable cloud services over unreliable and non-dedicated volunteer hosts. VM placement algorithms play a crucial role in Cloud Computing in fulfilling its characteristics and achieving its objectives. In general, VM placement is a challenging problem that has been extensively studied in the conventional Cloud Computing context. Due to its divergent characteristics, volunteer cloud computing needs a novel and unique way of solving the existing Cloud Computing problems, including VM placement. The intermittent availability of nodes, unreliable infrastructure, and resource-constrained nodes are some of the characteristics of volunteer cloud computing that make the VM placement problem more complicated. In this dissertation, we model the VM placement problem as a Bounded 0-1 Multi-Dimensional Knapsack Problem. Since this is a known NP-hard problem, the dissertation discusses heuristic-based algorithms that take the typical characteristics of volunteer cloud computing into consideration to solve the VM placement problem formulated as a knapsack problem. Three algorithms are developed to meet the objectives and constraints specific to volunteer cloud computing. The algorithms were tested on a real volunteer cloud computing test-bed and showed good performance results with respect to their optimization objectives. The dissertation also presents the design and implementation of a real volunteer cloud computing system, cuCloud, which bases its resource infrastructure on the donated computing resources of computers. The need for the development of cuCloud stems from the lack of an experimentation platform, real or simulated, that specifically targets volunteer cloud computing. cuCloud is a system that can be called a genuine volunteer cloud computing system, manifesting the concept of "Volunteer Computing as a Service" (VCaaS), with particular significance for edge computing and related applications. In the course of this dissertation, empirical evaluations show that volunteer clouds can be used to execute a range of applications reliably and efficiently. Moreover, the physical proximity of volunteer nodes to where applications originate, the edge of the network, helps reduce the round-trip latency of applications. However, the overall computing capability of volunteer clouds will not suffice to handle highly resource-intensive applications by itself. Based on these observations, the dissertation also proposes, as future work, the use of volunteer clouds as a resource fabric in the emerging Edge Computing paradigm.
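Formulating placement as a bounded 0-1 multi-dimensional knapsack problem naturally suggests greedy heuristics. The sketch below is one generic heuristic under assumed inputs, not one of the three algorithms developed in the dissertation: VMs are ordered by size and assigned to the first volunteer node whose remaining CPU, memory, and predicted availability can accommodate them.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str
    cpu: float           # free CPU cores
    mem: float           # free memory (GB)
    availability: float  # predicted probability the node stays online
    placed: List[str] = field(default_factory=list)

@dataclass
class VM:
    name: str
    cpu: float
    mem: float
    min_availability: float

def place(vms: List[VM], nodes: List[Node]) -> List[str]:
    """Greedy first-fit-decreasing placement; returns VMs that could not be placed."""
    unplaced = []
    # Try the largest VMs first so small ones can fill the leftover capacity.
    for vm in sorted(vms, key=lambda v: (v.cpu, v.mem), reverse=True):
        target: Optional[Node] = None
        for node in sorted(nodes, key=lambda n: n.availability, reverse=True):
            if (node.cpu >= vm.cpu and node.mem >= vm.mem
                    and node.availability >= vm.min_availability):
                target = node
                break
        if target is None:
            unplaced.append(vm.name)
            continue
        target.cpu -= vm.cpu
        target.mem -= vm.mem
        target.placed.append(vm.name)
    return unplaced

nodes = [Node("desk-01", cpu=4, mem=8, availability=0.9),
         Node("desk-02", cpu=2, mem=4, availability=0.7)]
vms = [VM("web", cpu=2, mem=2, min_availability=0.8),
       VM("db", cpu=2, mem=4, min_availability=0.8),
       VM("batch", cpu=1, mem=1, min_availability=0.6)]
print(place(vms, nodes), [(n.name, n.placed) for n in nodes])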
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Amy (Amy X. ). "A functional flow framework for cloud computing". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77453.

Full text of the source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 53).
This thesis covers a basic framework to calculate the maximum computation rate of a set of functions over a network. These functions are broken down into a series of computations, which are distributed among the nodes of the network, with the output sent to the terminal node. We analyze two models with different types of computation costs: a linear computation cost model and a maximum computation cost model. We show how the distribution of computation through the given network changes with different types of computation and communication limitations. This framework can also be used in cloud design, where a network of given complexity is designed to maximize the computation rate for a given set of functions. We present a greedy algorithm that provides one solution to this problem, create simulations for each model, and analyze the results.
by Amy Zhang.
M.Eng.
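As a rough illustration of distributing a chain of computations over nodes under a linear computation-cost model (a simplified stand-in, not the thesis's actual framework or greedy algorithm), the sketch below assigns each computation stage to the node with the most spare capacity and reports the bottleneck-limited rate.

from typing import Dict, List, Tuple

def assign_stages(stage_costs: List[float],
                  node_capacity: Dict[str, float]) -> Tuple[Dict[str, List[int]], Dict[str, float]]:
    """Greedily place each stage on the node with the most remaining capacity."""
    load = {n: 0.0 for n in node_capacity}
    assignment: Dict[str, List[int]] = {n: [] for n in node_capacity}
    for i, cost in enumerate(stage_costs):
        best = max(node_capacity, key=lambda n: node_capacity[n] - load[n])
        load[best] += cost
        assignment[best].append(i)
    return assignment, load

def max_rate(load: Dict[str, float], node_capacity: Dict[str, float]) -> float:
    """Under a linear cost model, the achievable rate is limited by the busiest node."""
    return min(node_capacity[n] / load[n] for n in load if load[n] > 0)

stages = [2.0, 1.0, 4.0, 3.0]                # per-output computation cost of each stage
capacity = {"node-a": 10.0, "node-b": 6.0}   # computation capacity per unit time
assignment, load = assign_stages(stages, capacity)
print(assignment, "achievable rate =", max_rate(load, capacity))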
APA, Harvard, Vancouver, ISO, and other styles
9

Chaudhry, Nauman Riaz. "Workflow framework for cloud-based distributed simulation". Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/14778.

Full text of the source
Abstract:
Although distributed simulation (DS) using parallel computing has received considerable research and development in a number of compute-intensive fields, it has still to be significantly adopted by the wider simulation community. According to scientific literature, major reasons for low adoption of cloud-based services for DS execution are the perceived complexities of understanding and managing the underlying architecture and software for deploying DS models, as well as the remaining challenges in performance and interoperability of cloud-based DS. The focus of this study, therefore, has been to design and test the feasibility of a well-integrated, generic, workflow structured framework that is universal in character and transparent in implementation. The choice of a workflow framework for implementing cloud-based DS was influenced by the ability of scientific workflow management systems to define, execute, and actively manage computing workflows. As a result of this study, a hybrid workflow framework, combined with four cloud-based implementation services, has been used to develop an integrated potential standard for workflow implementation of cloud-based DS, which has been named the WORLDS framework (Workflow Framework for Cloud-based Distributed Simulation). The main contribution of this research study is the WORLDS framework itself, which identifies five services (including a Parametric Study Service) that can potentially be provided through the use of workflow technologies to deliver effective cloud-based distributed simulation that is transparently provisioned for the user. This takes DS a significant step closer to its provision as a viable cloud-based service (DSaaS). In addition, the study introduces a simple workflow solution to applying parametric studies to distributed simulations. Further research to confirm the generic nature of the workflow framework, to apply and test modified HLA standards, and to introduce a simulation analytics function by modifying the workflow is anticipated.
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Min. "A resource management framework for cloud computing". Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/47804.

Full text of the source
Abstract:
The cloud computing paradigm is realized through large-scale distributed resource management and computation platforms such as MapReduce, Hadoop, Dryad, and Pregel. These platforms enable quick and efficient development of a large range of applications that can be sustained at scale in a fault-tolerant fashion. Two key technologies, namely resource virtualization and feature-rich enterprise storage, are further driving the widespread adoption of virtualized cloud environments. Many challenges arise when designing resource management techniques for both native and virtualized data centers. First, parameter tuning of MapReduce jobs for efficient resource utilization is a daunting and time-consuming task. Second, while the MapReduce model is designed for and leverages information from native clusters to operate efficiently, the emergence of virtual cluster topology results in overlaying or hiding the actual network information. This leads to two resource selection and placement anomalies: (i) loss of data locality, and (ii) loss of job locality. Consequently, jobs may be placed physically far from their associated data or related jobs, which adversely affects overall performance. Finally, the extant resource provisioning approach leads to significant wastage as enterprise cloud providers have to consider and provision for peak loads instead of the average load (which is many times lower). In this dissertation, we design and develop a resource management framework to address the above challenges. We first design an innovative resource scheduler, CAM, aimed at MapReduce applications running in virtualized cloud environments. CAM reconciles both data and VM resource allocation with a variety of competing constraints, such as storage utilization, changing CPU load and network link capacities, based on a flow-network algorithm. Additionally, our platform exposes the typically hidden lower-level topology information to the MapReduce job scheduler, which enables it to make optimal task assignments. Second, we design an online performance tuning system, mrOnline, which monitors the MapReduce job execution, tunes the parameters based on collected statistics and provides fine-grained control over parameter configuration changes to the user. To this end, we employ a gray-box based smart hill-climbing algorithm that leverages MapReduce runtime statistics and effectively converges to a desirable configuration within a single iteration. Finally, we target enterprise applications in virtualized environments where typically a network-attached centralized storage system is deployed. We design a new protocol to share primary data de-duplication information available at the storage server with the client. This enables better client-side cache utilization and reduces server-client network traffic, which leads to higher overall performance. Based on the protocol, a workload-aware VM management strategy is further introduced to decrease the load on the storage server and enhance the I/O efficiency for clients.
Ph. D.
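mrOnline's gray-box hill climbing adjusts MapReduce configuration parameters using runtime statistics. A generic hill-climbing loop of that flavour is sketched below; the parameter names, step sizes, and synthetic cost function are placeholders rather than mrOnline's actual tuning logic.

import random

def measure(config):
    """Stand-in for a monitored MapReduce run; returns a cost to minimize.

    In a real system this would come from job runtime statistics; here it is
    a synthetic function with an optimum near io_sort_mb=200, reduce_tasks=32.
    """
    return (config["io_sort_mb"] - 200) ** 2 / 100 + (config["reduce_tasks"] - 32) ** 2

def hill_climb(config, steps, iterations=100):
    best, best_cost = dict(config), measure(config)
    for _ in range(iterations):
        candidate = dict(best)
        key = random.choice(list(steps))
        candidate[key] += random.choice((-1, 1)) * steps[key]
        cost = measure(candidate)
        if cost < best_cost:            # keep only improving moves
            best, best_cost = candidate, cost
    return best, best_cost

initial = {"io_sort_mb": 100, "reduce_tasks": 8}
steps = {"io_sort_mb": 10, "reduce_tasks": 2}
print(hill_climb(initial, steps))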
APA, Harvard, Vancouver, ISO, and other styles
11

Pech, David. "Cloud Framework on Infrastructure as a Service". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236185.

Full text of the source
Abstract:
This thesis presents a detailed analysis of the requirements for a modern application framework for cloud environments. Using standard design patterns and techniques, it lays out the theoretical foundations and the rules that must hold inside such a framework. A reference implementation is realized, and a medium-sized demonstration application is prepared in order to present the benefits that arise from using the framework.
APA, Harvard, Vancouver, ISO, and other styles
12

Akrir, Khaled Ali Ahmed. "Cloud computing technology framework and reducing risks". Doctoral thesis, Česká zemědělská univerzita v Praze, 2015. http://www.nusl.cz/ntk/nusl-259675.

Full text of the source
Abstract:
The thesis investigates, in a qualitative way, the vectors that contribute to cloud computing risks in the areas of security, business, and compliance. The focus of this research is on the identification of risk vectors that affect cloud computing and the creation of a framework that can help IT managers in their cloud adoption process. Economic pressures on businesses are creating demand for an alternative delivery model that can provide flexible payments, dramatic cuts in capital investment, and reductions in operational cost. Cloud computing is positioned to take advantage of these economic pressures with low-cost IT services and a flexible payment model, but at what risk to the business? Security concerns about cloud computing are heightened and fueled by misconceptions related to security and compliance risks. Unfortunately, these security concerns are seldom expressed quantifiably. To bring clarity to cloud computing security, compliance, and business risks, this research focuses on a qualitative analysis of risk vectors drawn from one-on-one interviews with selected top IT experts. The qualitative aspect of this research separates facts from unfounded suspicions, and creates a framework that can help align perceived risks of cloud computing with actual risks. The qualitative research was done through interviews with experts and through a survey measuring risk perceptions about cloud computing on a Likert scale. The decision-making model and the framework created by this research help to rationalize the risk vectors in cloud environments and recommend risk-reduction strategies, bringing the IT industry one step closer to a clearer understanding of the risk-tradeoff implications of cloud computing environments.
APA, Harvard, Vancouver, ISO, and other styles
13

Kercher, Kellie Elizabeth. "Distributed Agent Cloud-Sourced Malware Reporting Framework". BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4250.

Full text of the source
Abstract:
Malware is a fast growing threat that consists of a malicious script or piece of software that is used to disrupt the integrity of a user's experience. Antivirus software can help protect a user against these threats and there are numerous vendors users can choose from for their antivirus protection. However, each vendor has their own set of virus definitions varying in resources and capabilities in recognizing new threats. Currently, a persistent system is not in place that measures and displays data on the performance of antivirus vendors in responding to new malware over a continuous period of time. There is a need for a system that can evaluate antivirus performance in order to better inform end users of their security options, in addition to informing clients of prevalent threats occurring in their network. This project is dedicated to assessing the viability of a cloud sourced malware reporting framework that uses distributed agents to evaluate the performance of antivirus software based on malware signatures.
APA, Harvard, Vancouver, ISO, and other styles
14

Al-Dhuraibi, Yahya. "Flexible framework for elasticity in cloud computing". Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I079/document.

Full text of the source
Abstract:
Cloud computing has been gaining popularity and has received a great deal of attention from both the industrial and academic worlds, since it frees them from the burden and cost of managing local data centers. However, the main factor motivating the use of the cloud is its ability to provide resources according to the customer's needs, which is referred to as elasticity. Adapting cloud applications during their execution according to demand variation is a challenging task. In addition, cloud elasticity is diverse and heterogeneous because it encompasses different approaches, policies, purposes, etc. We are interested in investigating: How to overcome the problem of over-provisioning/under-provisioning? How to guarantee resource availability and overcome the problems of heterogeneity and resource granularity? How to standardize and unify elasticity solutions and model their diversity at a high level of abstraction? In this thesis, we addressed these challenges and investigated many aspects of elasticity in order to manage cloud resources efficiently. Three contributions are proposed. Firstly, an up-to-date state of the art of cloud elasticity, which reviews different works related to elasticity for both virtual machines and containers. Secondly, ElasticDocker, an approach to manage container elasticity, including vertical elasticity, live migration, and the combination of elasticity across different virtualization techniques. Thirdly, MoDEMO, a new unified, standard-based, model-driven, highly extensible and reconfigurable framework that supports multiple elasticity policies, vertical and horizontal elasticity, different virtualization techniques and multiple cloud providers.
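Vertical container elasticity of the kind ElasticDocker performs amounts to raising or lowering a container's CPU and memory limits in response to observed utilization. The decision rule below is a simplified, assumed sketch (thresholds and step sizes are invented); applying the new limits would go through the container runtime's update API, which is stubbed out here.

def decide_limits(cpu_util: float, mem_util: float, cpu_quota: int, mem_limit_mb: int):
    """Return adjusted (cpu_quota, mem_limit_mb) for one scaling step.

    Thresholds (70% up, 30% down) and the 20% step size are illustrative assumptions.
    """
    def scale(value, util):
        if util > 0.70:
            return int(value * 1.2)   # scale up by 20%
        if util < 0.30:
            return int(value * 0.8)   # scale down by 20%
        return value
    return scale(cpu_quota, cpu_util), scale(mem_limit_mb, mem_util)

def apply_limits(container_id: str, cpu_quota: int, mem_limit_mb: int):
    # Placeholder: in practice this would call the container runtime,
    # e.g. an "update" operation on the container with the new limits.
    print(f"update {container_id}: cpu_quota={cpu_quota}, mem={mem_limit_mb}MiB")

cpu_quota, mem_limit = decide_limits(cpu_util=0.85, mem_util=0.25,
                                     cpu_quota=100_000, mem_limit_mb=1024)
apply_limits("web-1", cpu_quota, mem_limit)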
APA, Harvard, Vancouver, ISO, and other styles
15

Maskara, Arvind. "A Process Framework for Managing Quality of Service in Private Cloud". ScholarWorks, 2014. https://scholarworks.waldenu.edu/dissertations/3220.

Full text of the source
Abstract:
As information systems leaders tap into the global market of cloud computing-based services, they struggle to maintain consistent application performance due to lack of a process framework for managing quality of service (QoS) in the cloud. Guided by the disruptive innovation theory, the purpose of this case study was to identify a process framework for meeting the QoS requirements of private cloud service users. Private cloud implementation was explored by selecting an organization in California through purposeful sampling. Information was gathered by interviewing 23 information technology (IT) professionals, a mix of frontline engineers, managers, and leaders involved in the implementation of private cloud. Another source of data was documents such as standard operating procedures, policies, and guidelines related to private cloud implementation. Interview transcripts and documents were coded and sequentially analyzed. Three prominent themes emerged from the analysis of data: (a) end user expectations, (b) application architecture, and (c) trending analysis. The findings of this study may help IT leaders in effectively managing QoS in cloud infrastructure and deliver reliable application performance that may help in increasing customer population and profitability of organizations. This study may contribute to positive social change as information systems managers and workers can learn and apply the process framework for delivering stable and reliable cloud-hosted computer applications.
APA, Harvard, Vancouver, ISO, and other styles
16

Abbasi, Abdul Ghafoor. "CryptoNET : Generic Security Framework for Cloud Computing Environments". Doctoral thesis, KTH, Kommunikationssystem, CoS, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-32786.

Full text of the source
Abstract:
The area of this research is security in distributed environments such as cloud computing and network applications. The specific focus was the design and implementation of a high-assurance network environment, comprising various secure and security-enhanced applications. "High Assurance" means that
- our system is guaranteed to be secure,
- it is verifiable to provide the complete set of security services,
- we prove that it always functions correctly, and
- we justify our claim that it cannot be compromised without user neglect and/or consent.
We do not know of any equivalent research results or even commercial security systems with such properties. Based on that, we claim several significant research and development contributions to the state of the art of computer network security. In the last two decades there have been many activities and contributions to protect data, messages and other resources in computer networks, to provide privacy of users, reliability, availability and integrity of resources, and to provide other security properties for network environments and applications. Governments, international organizations, private companies and individuals are investing a great deal of time, effort and budget to install and use various security products and solutions. However, in spite of all these needs, activities, ongoing efforts, and all current solutions, it is a general belief that security in today's networks and applications is not adequate. At the moment there are two general approaches to network application security. One approach is to enforce isolation of users, network resources, and applications. In this category we have solutions like firewalls, intrusion-detection systems, port scanners, spam filters, virus detection and elimination tools, etc. The goal is to protect resources and applications by isolation after their installation in the operational environment. The second approach is to apply methodology, tools and security solutions already in the process of creating network applications. This approach includes methodologies for secure software design, ready-made security modules and libraries, rules for the software development process, and formal and strict testing procedures. The goal is to create secure applications even before their operational deployment. Current experience clearly shows that both approaches have failed to provide an adequate level of security, where users would be guaranteed to deploy and use secure, reliable and trusted network applications. Therefore, in the current situation, it is obvious that a new approach and new thinking towards creating strongly protected and guaranteed secure network environments and applications are needed. In our research we have therefore taken an approach completely different from the two mentioned above. Our first principle is to use cryptographic protection of all application resources. Based on this principle, in our system data in local files and database tables are encrypted, messages and control parameters are encrypted, and even software modules are encrypted. The principle is that if all resources of an application are always encrypted, i.e.
"enveloped in a cryptographic shield", then
- its software modules are not vulnerable to malware and viruses,
- its data are not vulnerable to illegal reading and theft,
- all messages exchanged in a networking environment are strongly protected, and
- all other resources of an application are also strongly protected.
Thus, we strongly protect applications and their resources before they are installed, after they are deployed, and also all the time during their use. Furthermore, our methodology to create such systems and to apply total cryptographic protection is based on the design of security components in the form of generic security objects. First, each of those objects, whether a data object or a functional object, is itself encrypted. If an object is a data object, representing a file, database table, communication message, etc., its encryption means that its data are protected all the time. If an object is a functional object, like a cryptographic mechanism or an encapsulation module, this principle means that its code cannot be damaged by malware. Protected functional objects are decrypted only on the fly, before being loaded into main memory for execution. Each of our objects is complete in terms of its content (data objects) and its functionality (functional objects), each supports multiple functional alternatives, they all provide transparent handling of security credentials and management of security attributes, and they are easy to integrate with individual applications. In addition, each object is designed and implemented using well-established security standards and technologies, so the complete system, created as a combination of those objects, is itself compliant with security standards and, therefore, interoperable with existing security systems. By applying our methodology, we first designed the enabling components for our security system. They are collections of simple and composite objects that also mutually interact in order to provide various security services. The enabling components of our system are: Security Provider, Security Protocols, Generic Security Server, Security SDKs, and Secure Execution Environment. They are all mainly engine components of our security system and they provide the same set of cryptographic and network security services to all other security-enhanced applications. Furthermore, for our individual security objects and also for larger security systems, in order to prove their structural and functional correctness, we applied a deductive scheme for the verification and validation of security systems. We used the following principle: "if individual objects are verified and proven to be secure, if their instantiation, combination and operations are secure, and if the protocols between them are secure, then the complete system, created from such objects, is also verifiably secure". Data and attributes of each object are protected and secure, and they can only be accessed by authenticated and authorized users in a secure way. This means that the structural security properties of objects, upon their installation, can be verified. In addition, each object is maintained and manipulated within our secure environment, so each object is protected and secure in all its states, even after its closing state, because the original objects are encrypted and their data and states stored in a database or in files are also protected. Formal validation of our approach and our methodology is performed using a threat model.
We analyzed our generic security objects individually and identified various potential threats for their data, attributes, actions, and various states. We also evaluated behavior of each object against potential threats and established that our approach provides better protection than some alternative solutions against various threats mentioned. In addition, we applied threat model to our composite generic security objects and secure network applications and we proved that deductive approach provides better methodology for designing and developing secure network applications. We also quantitatively evaluated the performance of our generic security objects and found that the system developed using our methodology performs cryptographic functions efficiently. We have also solved some additional important aspects required for the full scope of security services for network applications and cloud environment: manipulation and management of cryptographic keys, execution of encrypted software, and even secure and controlled collaboration of our encrypted applications in cloud computing environments. During our research we have created the set of development tools and also a development methodology which can be used to create cryptographically protected applications. The same resources and tools are also used as a run–time supporting environment for execution of our secure applications. Such total cryptographic protection system for design, development and run–time of secure network applications we call CryptoNET system. CrytpoNET security system is structured in the form of components categorized in three groups: Integrated Secure Workstation, Secure Application Servers, and Security Management Infrastructure Servers. Furthermore, our enabling components provide the same set of security services to all components of the CryptoNET system. Integrated Secure Workstation is designed and implemented in the form of a collaborative secure environment for users. It protects local IT resources, messages and operations for multiple applications. It comprises four most commonly used PC applications as client components: Secure Station Manager (equivalent to Windows Explorer), Secure E-Mail Client, Secure Web Browser, and Secure Documents Manager. These four client components for their security extensions use functions and credentials of the enabling components in order to provide standard security services (authentication, confidentiality, integrity and access control) and also additional, extended security services, such as transparent handling of certificates, use of smart cards, Strong Authentication protocol, Security Assertion Markup Language (SAML) based Single-Sign-On protocol, secure sessions, and other security functions. Secure Application Servers are components of our secure network applications: Secure E-Mail Server, Secure Web Server, Secure Library Server, and Secure Software Distribution Server. These servers provide application-specific services to client components. Some of the common security services provided by Secure Application Servers to client components are Single-Sign-On protocol, secure communication, and user authorization. In our system application servers are installed in a domain but it can be installed in a cloud environment as services. Secure Application Servers are designed and implemented using the concept and implementation of the Generic Security Server. It provides extended security functions using our engine components. 
Thus, by adopting this approach, the same set of security services is available to each application server. Security Management Infrastructure Servers provide domain-level and infrastructure-level services to the components of the CryptoNET architecture. They are standard security servers, known as cloud security infrastructure, deployed as services in our domain-level cloud environment. The CryptoNET system is complete in terms of the functions and security services that it provides. It is internally integrated, so that the same cryptographic engines are used by all applications. And finally, it is completely transparent to users: it applies its security services without expecting any special interventions by users. In this thesis, we developed and evaluated secure network applications of our CryptoNET system and applied the threat model to their validation and analysis. We found that the deductive scheme of using our generic security objects is effective for the verification and testing of secure, protected and verifiably secure network applications. Based on all these theoretical research and practical development results, we believe that our CryptoNET system is completely and verifiably secure and, therefore, represents a significant contribution to the current state of the art of computer network security.
QC 20110427
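The core principle above, keeping every data object and functional object encrypted except while in use, can be illustrated with a small generic sketch built on an authenticated-encryption primitive. This is not CryptoNET's actual object model; the class, fields, and key handling are assumptions for illustration, using the widely available cryptography package.

from cryptography.fernet import Fernet

class EncryptedObject:
    """A generic 'data object' that keeps its payload encrypted at rest."""

    def __init__(self, key: bytes, plaintext: bytes):
        self._fernet = Fernet(key)
        self._ciphertext = self._fernet.encrypt(plaintext)  # stored form

    def open(self) -> bytes:
        # Decrypt only "on the fly", when the content is actually needed.
        return self._fernet.decrypt(self._ciphertext)

    @property
    def stored_form(self) -> bytes:
        return self._ciphertext

key = Fernet.generate_key()          # in a real system, keys come from key management
obj = EncryptedObject(key, b"confidential report")
print(obj.stored_form[:16], b"...") # what would be written to disk or sent on the wire
print(obj.open())                    # plaintext recovered only under the key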
APA, Harvard, Vancouver, ISO, and other styles
17

Wang, Yongzhi. "Constructing Secure MapReduce Framework in Cloud-based Environment". FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2238.

Full text of the source
Abstract:
MapReduce, a parallel computing paradigm, has been gaining popularity in recent years as cloud vendors offer MapReduce computation services on their public clouds. However, companies are still reluctant to move their computations to the public cloud for the following reason: in the current business model, the entire MapReduce cluster is deployed on the public cloud. If the public cloud is not properly protected, the integrity and the confidentiality of MapReduce applications can be compromised by attacks inside or outside of the public cloud. From the result integrity perspective, if any computation nodes on the public cloud are compromised, those nodes can return incorrect task results and therefore render the final job result inaccurate. From the algorithmic confidentiality perspective, when more and more companies devise innovative algorithms and deploy them to the public cloud, malicious attackers can reverse engineer those programs to detect the algorithmic details and, therefore, compromise the intellectual property of those companies. In this dissertation, we propose to use the hybrid cloud architecture to defeat the above two threats. Based on the hybrid cloud architecture, we propose separate solutions to address the result integrity and the algorithmic confidentiality problems. To address the result integrity problem, we propose the Integrity Assurance MapReduce (IAMR) framework. IAMR performs a result checking technique to guarantee high result accuracy of MapReduce jobs, even if the computation is executed on an untrusted public cloud. We implemented a prototype system for a real hybrid cloud environment and performed a series of experiments. Our theoretical simulations and experimental results show that IAMR can guarantee a very low job error rate, while maintaining a moderate performance overhead. To address the algorithmic confidentiality problem, we focus on the program control flow and propose the Confidentiality Assurance MapReduce (CAMR) framework. CAMR performs the Runtime Control Flow Obfuscation (RCFO) technique to protect the predicates of MapReduce jobs. We implemented a prototype system for a real hybrid cloud environment. The security analysis and experimental results show that CAMR defeats static analysis-based reverse engineering attacks, raises the bar for dynamic analysis-based reverse engineering attacks, and incurs a modest performance overhead.
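Result checking of the kind IAMR performs typically re-executes a sample of tasks on a trusted node, for example on the private side of the hybrid cloud, and compares the answers with those returned by the public cloud. The sketch below conveys that general idea only; the sampling rate, task interface, and failure model are assumptions, not the IAMR design.

import random

def run_on_public_cloud(task):
    """Stand-in for an untrusted worker; occasionally returns a wrong result."""
    result = sum(task)
    return result + 1 if random.random() < 0.1 else result

def run_on_trusted_node(task):
    """Stand-in for re-execution on the trusted (private) side."""
    return sum(task)

def check_results(tasks, sample_rate=0.2):
    """Re-verify a random sample of tasks and report any mismatches."""
    suspicious = []
    results = {i: run_on_public_cloud(t) for i, t in enumerate(tasks)}
    for i in random.sample(range(len(tasks)), max(1, int(sample_rate * len(tasks)))):
        if results[i] != run_on_trusted_node(tasks[i]):
            suspicious.append(i)
    return results, suspicious

tasks = [[1, 2, 3], [4, 5], [6], [7, 8, 9, 10]]
results, suspicious = check_results(tasks)
print("flagged task indices:", suspicious)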
APA, Harvard, Vancouver, ISO, and other styles
18

Schmahmann, Adin R. "SURGE : the Secure Cloud Storage and Collaboration Framework". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/96456.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 38).
SURGE is a Secure Cloud Storage and Collaboration Framework that is designed to be easy for application developers to use. The motivation is to allow application developers to mimic existing cloud based applications, but make them cryptographically secure, in addition to allowing application developers to come up with entirely new secure cloud based applications. SURGE stores all of its data as operations and as a result can leverage techniques like Operational Transforms to allow offline usage as well as lowering network bandwidth consumption. Additionally, storing data as operations allows SURGE to develop a rich permissions system. This permission system allows basic permissions such as read-only, read-write, and administrator in addition to more advanced permissions such as write-only, group read/write, and anonymous permissions. To evaluate the usability of SURGE a C# prototype was constructed and used to create a collaborative text editor that performs well under real-world user tests.
by Adin R. Schmahmann.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
19

Gonzalez, Nelson Mimura. "MPSF: cloud scheduling framework for distributed workflow execution". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-03032017-083914/.

Full text of the source
Abstract:
Cloud computing represents a distributed computing paradigm that gained prominence due to its properties related to on-demand, elastic, and dynamic resource provisioning. These characteristics are highly desirable for the execution of workflows, in particular scientific workflows that require a great amount of computing resources and handle large-scale data. One of the main questions in this sense is how to manage the resources of one or more cloud infrastructures to execute workflows while optimizing resource utilization and minimizing the total duration of the execution of tasks (makespan). The more complex the infrastructure and the tasks to be executed are, the higher the risk of incorrectly estimating the amount of resources to be assigned to each task, leading to both performance and monetary costs. Scenarios which are inherently more complex, such as hybrid and multi-clouds, are rarely considered by existing resource management solutions. Moreover, a thorough review of relevant related work revealed that most of the solutions do not address data-intensive workflows, a characteristic that is increasingly evident for modern scientific workflows. In this sense, this proposal presents MPSF, the Multiphase Proactive Scheduling Framework, a cloud resource management solution based on multiple scheduling phases that continuously assess the system to optimize resource utilization and task distribution. MPSF defines models to describe and characterize workflows and resources. MPSF also defines performance and reliability models to improve load distribution among nodes and to mitigate the effects of performance fluctuations and potential failures that might occur in the system. Finally, MPSF defines a framework and an architecture to integrate all these components and deliver a solution that can be implemented and tested in real applications. Experimental results show that MPSF is able to predict the duration of workflows and workflow phases with much better accuracy, as well as to provide performance gains compared to greedy approaches.
APA, Harvard, Vancouver, ISO, and other styles
20

Ullah, Amjad. "Towards a novel biologically-inspired cloud elasticity framework". Thesis, University of Stirling, 2017. http://hdl.handle.net/1893/26064.

Full text of the source
Abstract:
With the widespread use of the Internet, the popularity of web applications has significantly increased. Such applications are subject to unpredictable workload conditions that vary from time to time. For example, an e-commerce website may face higher workloads than normal during festivals or promotional schemes. Such applications are critical, and performance-related issues or service disruptions can result in financial losses. Cloud computing, with its attractive feature of dynamic resource provisioning (elasticity), is a perfect match to host such applications. The rapid growth in the usage of the cloud computing model, as well as the rise in the complexity of web applications, poses new challenges regarding the effective monitoring and management of the underlying cloud computational resources. This thesis investigates the state-of-the-art elastic methods, including the models and techniques for the dynamic management and provisioning of cloud resources from a service provider perspective. An elastic controller is responsible for determining the optimal number of cloud resources required at a particular time to achieve the desired performance demands. Researchers and practitioners have proposed many elastic controllers using versatile techniques, ranging from simple if-then-else rules to sophisticated optimisation, control theory and machine learning based methods. However, despite an extensive range of existing elasticity research, implementing an efficient scaling technique that satisfies actual demand is still a challenge. Many issues have not received much attention from a holistic point of view. Some of these issues include: 1) the lack of adaptability and the static scaling behaviour of completely fixed approaches; 2) the burden of additional computational overhead, the inability to cope with sudden changes in workload behaviour, and the preference for adaptability over reliability at runtime in fully dynamic approaches; and 3) the lack of consideration of uncertainty aspects when designing auto-scaling solutions. This thesis seeks to address these issues together using an integrated approach. Moreover, this thesis aims at the provision of qualitative elasticity rules. It proposes a novel biologically-inspired switched feedback control methodology to address the horizontal elasticity problem. The switched methodology utilises multiple controllers simultaneously, and the selection of a suitable controller is realised using an intelligent switching mechanism. Each controller itself depicts a different elasticity policy that can be designed using the principles of the fixed-gain feedback controller approach. The switching mechanism is implemented using a fuzzy system that determines a suitable controller/policy at runtime based on the current behaviour of the system. Furthermore, to improve the possibility of bumpless transitions and to avoid the oscillatory behaviour commonly associated with switching-based control methodologies, this thesis proposes an alternative soft switching approach, which incorporates a biologically-inspired Basal Ganglia based computational model of action selection. In addition, this thesis formulates the problem of designing the membership functions of the switching mechanism as a multi-objective optimisation problem.
The key purpose behind this formulation is to obtain near-optimal (or fine-tuned) parameter settings for the membership functions of the fuzzy control system in the absence of domain experts' knowledge. This problem is addressed using two different techniques: the commonly used Genetic Algorithm and an alternative, less well-known and economical approach called the Taguchi method. Lastly, we identify seven different kinds of real workload patterns, each of which reflects a different set of applications. Six real and one synthetic HTTP traces, one for each pattern, are further identified and utilised to evaluate the performance of the proposed methods against the state-of-the-art approaches.
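A minimal illustrative sketch (not the thesis's implementation) of the switched-controller idea: two fixed-gain scaling policies run side by side and a simple fuzzy-style membership over the workload's rate of change blends their outputs, giving a soft switch. All policy rules, thresholds and names below are assumptions.

# Illustrative sketch of a switched elasticity controller (assumed names and thresholds).
def conservative_policy(cpu_util, vms):
    # Fixed-gain rule: scale only on large deviations from a nominal load band.
    if cpu_util > 0.80: return vms + 1
    if cpu_util < 0.40: return max(1, vms - 1)
    return vms

def aggressive_policy(cpu_util, vms):
    # Fixed-gain rule: track a 60% CPU utilisation target proportionally.
    target = 0.60
    return max(1, round(vms * cpu_util / target))

def membership_high_volatility(delta_util):
    # Simple triangular membership: 0 when the load is steady, 1 when it changes fast.
    return max(0.0, min(1.0, abs(delta_util) / 0.20))

def switched_controller(cpu_util, prev_util, vms):
    """Softly pick between policies based on how quickly the workload is changing."""
    w_aggressive = membership_high_volatility(cpu_util - prev_util)
    w_conservative = 1.0 - w_aggressive
    blended = (w_conservative * conservative_policy(cpu_util, vms)
               + w_aggressive * aggressive_policy(cpu_util, vms))
    return max(1, round(blended))  # soft switching avoids abrupt policy jumps

if __name__ == "__main__":
    vms, prev = 4, 0.55
    for util in [0.58, 0.62, 0.85, 0.93, 0.70, 0.45]:
        vms = switched_controller(util, prev, vms)
        print(f"cpu={util:.2f} -> vms={vms}")
        prev = util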
Style APA, Harvard, Vancouver, ISO itp.
21

Coss, David. "Cloud Privacy Audit Framework: A Value-Based Design". VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3106.

Pełny tekst źródła
Streszczenie:
The rapid expansion of cloud technology provides enormous capacity, which allows for the collection, dissemination and re-identification of personal information. It is the cloud's resource capabilities such as these that fuel the concern for privacy. The impetus of these concerns is not too far removed from those expressed by Mason in 1986, when he identified privacy as one of the biggest ethical issues facing the information age. There seems to be a continuous ebb-and-flow relationship between privacy concerns and the development of new information communication technologies such as cloud computing. Privacy issues are a concern to all types of stakeholders in the cloud. Individuals using the cloud are exposed to privacy threats when they are persuaded to provide personal information unwillingly. An organization using a cloud service is at risk of non-compliance with internal privacy policies or legislative privacy regulations. The cloud service provider runs a privacy risk of legal liability and credibility concerns if sensitive information is exposed. The data subject is at risk of having personal information exposed. In essence, everyone who is involved in cloud computing has some level of privacy risk that needs to be evaluated before, during and after they, or an organization they interact with, adopt a cloud technology solution. This points to a need for organizations to develop privacy practices that are socially responsible towards the protection of their stakeholders' information privacy. This research is about understanding the relationship between individual values and privacy objectives. There is a lack of clarity in organizations as to what individuals consider privacy to be. Therefore, it is essential to understand an individual's privacy values. Individuals seem to have divergent perspectives on the nature and scope of how their personal information is to be kept private across different modes of technology. This study is concerned with identifying individual privacy objectives for cloud computing. We argue that privacy is an elusive concept due to the evolving relationship between technology and privacy. Understanding and identifying individuals' privacy objectives is an influential step in the process of protecting privacy in cloud computing environments. The aim of this study is to identify individual privacy values and develop cloud privacy objectives, which can be used to design a privacy audit for cloud computing environments. We used Keeney's (1992) value-focused thinking approach to identify individual privacy values with respect to emerging cloud technologies, and to develop an understanding of how cloud privacy objectives are shaped by the individual's privacy values. We discuss each objective and how it relates to privacy concerns in cloud computing. We also use the cloud privacy objectives in a design science study to design a cloud privacy audit framework. We then discuss how this research helps privacy managers develop a cloud privacy strategy, evaluate cloud privacy practices and develop a cloud privacy audit to ensure privacy. Lastly, future research directions are proposed.
Style APA, Harvard, Vancouver, ISO itp.
22

Al-Aqrabi, Hussain. "Cloud BI : a multi-party authentication framework for securing business intelligence on the Cloud". Thesis, University of Derby, 2016. http://hdl.handle.net/10545/615020.

Pełny tekst źródła
Streszczenie:
Business intelligence (BI) has emerged as a key technology to be hosted on Cloud computing. BI offers a method to analyse data, thereby enabling informed decision making to improve business performance and profitability. However, within the shared domains of Cloud computing, BI is exposed to increased security and privacy threats because an unauthorised user may be able to gain access to highly sensitive, consolidated business information. The business process contains collaborating services and users from multiple Cloud systems in different security realms which need to be engaged dynamically at runtime. If the heterogeneous Cloud systems located in different security realms do not have direct authentication relationships, then it is technically difficult to enable secure collaboration. In order to address these security challenges, a new authentication framework is required to establish certain trust relationships among these BI service instances and users by distributing a common session secret to all participants of a session. The author addresses this challenge by designing and implementing a multi-party authentication framework for dynamic secure interactions when members of different security realms want to access services. The framework takes advantage of the trust relationship between session members in different security realms to enable a user to obtain security credentials to access Cloud resources in a remote realm. This mechanism can help Cloud session users authenticate their session membership and thereby improve the authentication processes within multi-party sessions. The correctness of the proposed framework has been verified using BAN logic. The performance and the overhead have been evaluated via simulation in a dynamic environment. A prototype authentication system has been designed, implemented and tested based on the proposed framework. The research concludes that the proposed framework and its supporting protocols are an effective functional basis for practical implementation testing, as the framework achieves good scalability and imposes only minimal performance overhead, comparable with other state-of-the-art methods.
Style APA, Harvard, Vancouver, ISO itp.
23

Firozbakht, Farzad. "Cloud Computing Service Discovery Framework for IaaS and PaaS Models". Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35595.

Pełny tekst źródła
Streszczenie:
Cloud service discovery is a new challenge that requires a dedicated framework. Over the past few years, several methods and frameworks have been developed for cloud service discovery, but they are mostly designed for all cloud computing models in general, which is not optimal. The three cloud computing models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), and each computing model has its own set of resources. Having one single discovery framework for all three is not very efficient, and the implementation of such a framework is complex, with a lot of overhead. The existing frameworks for cloud service discovery are mostly semantic-based, and there are a few syntax-based frameworks that use the filter-by-attribute method as their solution. This research proposes a cloud service discovery framework focusing on the IaaS and PaaS cloud computing models. Our framework uses a syntax-based query engine at its core and Extensible Markup Language (XML) for storing cloud service information. We eventually test the framework from the user's point of view with IaaS and PaaS cloud services from real cloud service providers. Such a framework could be a good solution for IaaS and PaaS since it is accurate enough for service discovery and easy to update.
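A minimal sketch of syntax-based, filter-by-attribute discovery over XML service records using only the Python standard library; the XML schema, attribute names and the rule that numeric attributes are treated as minimum requirements are assumptions for illustration, not the thesis's actual format.

# Illustrative filter-by-attribute query over XML cloud-service records
# (the schema and attribute names below are assumptions for the example).
import xml.etree.ElementTree as ET

CATALOG = """
<services>
  <service name="vmA" model="IaaS" vcpus="4" ram_gb="16" price_per_hour="0.20"/>
  <service name="vmB" model="IaaS" vcpus="8" ram_gb="32" price_per_hour="0.45"/>
  <service name="appEngineX" model="PaaS" runtime="python" price_per_hour="0.10"/>
</services>
"""

def discover(xml_text, **criteria):
    """Return names of services whose attributes satisfy every requested criterion."""
    root = ET.fromstring(xml_text)
    matches = []
    for svc in root.findall("service"):
        ok = True
        for attr, wanted in criteria.items():
            value = svc.get(attr)
            if value is None:
                ok = False
                break
            # Numeric attributes are treated as minimum requirements, strings must match exactly.
            try:
                ok = float(value) >= float(wanted)
            except ValueError:
                ok = (value == wanted)
            if not ok:
                break
        if ok:
            matches.append(svc.get("name"))
    return matches

print(discover(CATALOG, model="IaaS", vcpus=4))   # ['vmA', 'vmB']
print(discover(CATALOG, model="PaaS"))            # ['appEngineX']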
Style APA, Harvard, Vancouver, ISO itp.
24

Yadekar, Yaser. "A framework to manage uncertainties in cloud manufacturing environment". Thesis, Cranfield University, 2016. http://dspace.lib.cranfield.ac.uk/handle/1826/11776.

Pełny tekst źródła
Streszczenie:
This research project aims to develop a framework to manage uncertainty in cloud manufacturing for small and medium enterprises (SMEs). The framework includes a cloud manufacturing taxonomy; guidance for dealing with uncertainty in cloud manufacturing, by providing a process to identify uncertainties; a detailed step-by-step approach to managing the uncertainties; a list of uncertainties; and response strategies for security and privacy uncertainties in cloud manufacturing. Additionally, an online assessment tool has been developed to implement the uncertainty management framework in a real-life context. To fulfil the aim and objectives of the research, a comprehensive literature review was performed in order to understand the research aspects. Next, an uncertainty management technique was applied to identify, assess, and control uncertainties in cloud manufacturing. Two well-known approaches were used in the evaluation of the uncertainties in this research: the Simple Multi-Attribute Rating Technique (SMART) to prioritise uncertainties, and a fuzzy rule-based system to quantify security and privacy uncertainties. Finally, the framework was embedded into an online assessment tool and validated through expert opinion and case studies. Results from this research are useful for both academia and industry in understanding aspects of cloud manufacturing. The main contribution is a framework that offers new insights for decision makers on how to deal with uncertainty at the adoption and implementation stages of cloud manufacturing. The research also introduces a novel cloud manufacturing taxonomy, a list of uncertainty factors, an assessment process to prioritise uncertainties and quantify security and privacy related uncertainties, and a knowledge base for providing recommendations and solutions.
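A minimal sketch of SMART-style weighted scoring used to prioritise uncertainty factors, as named above; the criteria, weights and ratings are invented for the example and do not come from the thesis.

# Illustrative SMART-style prioritisation of uncertainty factors
# (criteria, weights and ratings are assumptions for the example).
criteria_weights = {"likelihood": 0.5, "impact": 0.3, "detectability": 0.2}

# Each uncertainty is rated 0-10 against every criterion.
uncertainties = {
    "data breach at provider":     {"likelihood": 6, "impact": 9, "detectability": 4},
    "service outage":              {"likelihood": 5, "impact": 7, "detectability": 8},
    "IP leakage in shared models": {"likelihood": 4, "impact": 8, "detectability": 3},
}

def smart_score(ratings):
    # SMART: weighted sum of the ratings, with weights normalised to 1.
    total_weight = sum(criteria_weights.values())
    return sum(criteria_weights[c] * r for c, r in ratings.items()) / total_weight

ranked = sorted(uncertainties.items(), key=lambda kv: smart_score(kv[1]), reverse=True)
for name, ratings in ranked:
    print(f"{name}: {smart_score(ratings):.2f}")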
Style APA, Harvard, Vancouver, ISO itp.
25

Ekanayake, Mudiyanselage Wijaya Dheeshakthi. "An SDN-based Framework for QoSaware Mobile Cloud Computing". Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35117.

Pełny tekst źródła
Streszczenie:
In mobile cloud computing (MCC), rich mobile application data is processed in the cloud infrastructure, relieving resource-limited mobile devices from computationally complex tasks. However, due to the ubiquity and mobility of such devices, providing time-critical rich applications over remote cloud infrastructure is a challenging task for mobile application service providers. Therefore, according to the literature, close-proximity placement of cloud services has been identified as a way to achieve lower end-to-end access delay and thereby provide a higher quality of experience (QoE) for rich mobile application users. However, providing a higher Quality of Service (QoS) under mobility is still a challenge within close-proximity clouds. Access delay to a closely placed cloud tends to increase over time as users move away from the cloud, and the reactive resource relocation mechanisms proposed in the literature do not provide a comprehensive way to guarantee QoS as well as to minimize service provisioning cost for mobile cloud service providers. As a result, using the benefits of SDN and data plane programmability with logically centralized controllers, a resource allocation framework was proposed for IaaS mobile clouds with regional datacenters. The user mobility problem was analyzed within SDN-enabled wireless networks, and the possible service level agreement violations that could occur with inter-regional mobility were addressed. The proposed framework is composed of an optimization algorithm to provide seamless cloud service during user mobility. Furthermore, a service provisioning cost minimization criterion was considered during resource allocation and inter-regional user mobility events.
Style APA, Harvard, Vancouver, ISO itp.
26

Runsewe, Olubisi Atinuke. "A Policy-Based Management Framework for Cloud Computing Security". Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31503.

Pełny tekst źródła
Streszczenie:
Cloud Computing has changed how computing is done, as applications and services are being consumed from the cloud. It has attracted a lot of attention in recent times due to the opportunities it offers. While Cloud Computing is economical, the security challenges it poses are quite significant, and this has affected the adoption rate of the technology. With the potential vulnerabilities introduced by moving data to the cloud, it has become imperative for cloud service providers to guarantee the security of information, leaving cloud service consumers (e.g., enterprises) with the task of negotiating the terms and conditions of services provided by the cloud service providers as well as trusting them with their data. Although various security solutions used for addressing the security of data within enterprises are now being applied to the cloud, these security solutions are challenged by the dynamic, distributed and complex nature of the cloud technology. This thesis proposes a novel Policy-Based Management (PBM) framework capable of achieving cross-tenant authorization and handling dynamic and anonymous users while reducing the security management task required to address cloud security. The framework includes an access control model adapted to the cloud environment that adopts features from role-based, task-based and attribute-based access control frameworks for fine-grained access control. We demonstrate how this framework can be applied to develop an access control system for an enterprise using cloud services. The framework verifies the correctness of access control policies for cloud security through a reasoning technique.
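A minimal sketch of a fine-grained, default-deny access decision that combines role-, task- and attribute-based conditions, in the spirit of the access control model described above; the policy structure, attribute names and request format are assumptions, not the thesis's actual model.

# Illustrative policy evaluation mixing role-, task- and attribute-based conditions
# (policy structure and tenant attributes are assumptions for the example).
POLICIES = [
    {
        "resource": "payroll-db",
        "action": "read",
        "roles": {"hr-analyst", "hr-manager"},                           # role-based condition
        "tasks": {"monthly-payroll-run"},                                # task-based condition
        "attributes": {"tenant": "acme", "clearance": "confidential"},   # attribute-based condition
    },
]

def is_permitted(request):
    for p in POLICIES:
        if p["resource"] != request["resource"] or p["action"] != request["action"]:
            continue
        role_ok = request["role"] in p["roles"]
        task_ok = request["task"] in p["tasks"]
        attr_ok = all(request["attributes"].get(k) == v for k, v in p["attributes"].items())
        if role_ok and task_ok and attr_ok:
            return True
    return False  # default-deny

req = {"resource": "payroll-db", "action": "read", "role": "hr-analyst",
       "task": "monthly-payroll-run",
       "attributes": {"tenant": "acme", "clearance": "confidential"}}
print(is_permitted(req))  # True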
Style APA, Harvard, Vancouver, ISO itp.
27

Figueira, André. "Secure framework for cloud data sharing based on blockchain". Master's thesis, Universidade de Évora, 2018. http://hdl.handle.net/10174/24258.

Pełny tekst źródła
Streszczenie:
Blockchain is a relatively new and disruptive technology that can be considered a distributed database working as a ledger, with the ability to facilitate the recording of transactions and the tracking of assets. It is a growing list of records called blocks, linked together by the hash of the previous block, where each block contains the most recent transactions in the network. Smart contracts are agreements between entities that are written in code and, when associated with Blockchain, operate without interference, censorship or malicious intentions. Data sharing on clouds is common but requires trust in third parties to ensure aspects such as security and privacy, over which the data owner has no control. Cloud storage providers control access to and sharing of the data. Sharing data through third-party services using unknown methods is a delicate process with respect to privacy and security, and these two aspects are crucial when it comes to personal and private data. In this work, the concept of using Blockchain to create a data sharing mechanism is explored. This proof of concept explores how data access and permissions can be controlled using blockchain and smart contracts, giving control back to the data owner.
Style APA, Harvard, Vancouver, ISO itp.
28

Yahya, Farashazillah. "A security framework to protect data in cloud storage". Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/415861/.

Pełny tekst źródła
Streszczenie:
According to the Cisco Global Cloud Index, cloud storage users will store 1.6 gigabytes of data per month by 2019, compared to 992 megabytes per month in 2014. This trend shows that more and more data will reside in cloud storage, and the volume is expected to grow further. As cloud storage is becoming an option for users for keeping their data online, it comes with security concerns for protecting data from threats. This thesis addresses the need to investigate the security factors that enable efficient security protection for data in cloud storage and the relationships that exist between the different security factors. Consequently, this research has developed a conceptual framework that supports security in cloud storage. The main contribution of this research is the development of a Cloud Storage Security Framework (CSSF) to support an integrative approach to understanding and evaluating security in cloud storage. The framework enables understanding of the makeup of security in cloud storage and provides a means of measuring it. Drawing upon established theories and prior research findings, the framework indicates that security in cloud storage can be determined by nine factors: (1) the implementation of security policies in cloud storage, together with security measures that relate to (2) protection of the data accessed in cloud storage; (3) modification of data stored; (4) accessibility of data stored in cloud storage; (5) non-repudiation of the data stored; (6) authenticity of the original data; (7) reliability of the cloud storage services; (8) accountability of service provision; and (9) auditability of the data accessed and stored in cloud storage. An example of CSSF application has been demonstrated through the development of a measuring instrument called the Security Rating Score (SecRaS), and through a series of experiments SecRaS has been validated and used in a research scenario. The instrument consists of several items generated using the goal-question-metric approach. These potential items were evaluated by a series of experiments: security experts assessed them using the content validity ratio, while security practitioners took part in the validation study. The validation study completed two experiments that look into correlation analyses and internal reliability. The SecRaS instrument was later applied in a research scenario; the validated instrument was distributed and 218 usable responses were received. Using structural equation modelling, the data revealed a good fit of the measurement analyses and structural model. The key findings were as follows: the relationships between factors were found to have both direct and indirect effects. While establishing the relationship(s) among the factors, the structural model proposes three types of causal relationships in terms of how the security implementation in cloud storage could be affected by the security factors. This thesis presents a detailed discussion of the CSSF development, confirmation, and application in a research scenario. For security managers, CSSF offers a new paradigm on how stakeholders can make cloud storage security implementation successful in some depth. For security practitioners, the CSSF enables deconstruction of the concept of security in cloud storage into smaller, conceptually distinct and manageable factors to guide the design of security in cloud storage.
For researchers, the CSSF provides a common framework in which to conceptualise their research and makes it easier to see how the security factors fit into the larger picture.
Style APA, Harvard, Vancouver, ISO itp.
29

Zhang, Yulong. "TOWARDS AN INCENTIVE COMPATIBLE FRAMEWORK OF SECURE CLOUD COMPUTING". VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/2739.

Pełny tekst źródła
Streszczenie:
Cloud computing has changed how services are provided and supported through the computing infrastructure. It has advantages such as flexibility, scalability, compatibility and availability. However, the current architecture design also brings in some troublesome problems, such as balancing cooperation benefits and privacy concerns between the cloud provider and the cloud users, and balancing cooperation benefits and free-rider concerns between different cloud users. These two problems together form the incentive problem in the cloud environment. The first conflict lies between the reliance on services and the cloud users' concerns about their secrets. To solve it, we propose a novel architecture, NeuCloud, to enable partial, trusted, transparent and accountable privacy manipulation and revelation. With the help of this architecture, privacy-sensitive users can be more confident about moving to public clouds. A trusted computing base is not enough; in order to stimulate incentive-compatible privacy trading, we present a theoretical framework and provide guidelines for the cloud provider to compensate the cloud user's privacy-risk-aversion. We implement NeuCloud and evaluate it. Moreover, an improved model of NeuCloud is discussed. The second part of this thesis strives to solve the free-rider problem in the cloud environment. For example, VM-colocation attacks have become serious threats to cloud environments. We propose to construct an incentive-compatible moving-target defense by periodically migrating VMs, making it much harder for adversaries to locate the target VMs. We develop theories about whether the migration of VMs is worthwhile and how the optimal migration interval can be determined. To the best of our knowledge, our work is the first effort to develop a formal and quantified model to guide the migration strategy of clouds to improve security. Our analysis shows that our placement-based defense can significantly improve the security level of the cloud with acceptable costs. In summary, the main objective of this study is to provide an incentive-compatible framework to eliminate the cloud user's privacy or cooperation concerns. The proposed methodology can be applied directly in commercial clouds and help this new computing paradigm advance further. The theoretical part of this work can be extended to other fields where privacy and free-rider concerns exist.
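A toy illustration (not the thesis's formal model) of the trade-off behind choosing a periodic VM migration interval for a moving-target defense: migrating more often disrupts an attacker's co-location progress but costs more. The attacker parameters, cost figures and the exponential success model below are invented assumptions.

# Toy cost/risk trade-off for periodic VM migration (moving-target defense).
import math

def breach_prob_per_interval(interval_hours, locate_hours=24.0, attack_rate=0.01):
    # Assumption: the attacker needs `locate_hours` of continuous co-residence before
    # an exploit is possible; every migration resets that progress.
    effective = max(0.0, interval_hours - locate_hours)
    return 1.0 - math.exp(-attack_rate * effective)

def expected_cost(interval_hours, migration_cost=5.0, breach_cost=1000.0, horizon_hours=720):
    # Total expected cost over the horizon: migrations plus expected breach losses.
    intervals = horizon_hours / interval_hours
    return intervals * (migration_cost + breach_prob_per_interval(interval_hours) * breach_cost)

best = min(range(1, 169), key=expected_cost)   # search intervals from 1 hour to 1 week
print(f"cheapest interval: {best}h, expected cost: {expected_cost(best):.1f}")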
Style APA, Harvard, Vancouver, ISO itp.
30

Stanogias, Nikolaos. "Combining analytics framework and Cloud schedulers in order to optimise resource utilisation in a distributed Cloud". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177582.

Pełny tekst źródła
Streszczenie:
Analytics frameworks were initially created to run on bare-metal hardware, so they contain scheduling mechanisms to optimise the distribution of CPU load and data allocation. Generally, the scheduler is part of the analytics framework's resource manager. There are different resource managers used in the market and the open-source community that can serve different analytics frameworks. For example, Spark was initially built with Mesos, Hadoop is now using YARN, and Spark is also available as a YARN application. On the other hand, cloud environments (like OpenStack) contain their own mechanisms for distributing resources between users and services. While analytics applications are increasingly being migrated to the cloud, the scheduling decisions for running an analytics job are still made in isolation between the different scheduler layers (Cloud/Infrastructure vs. analytics resource manager). This can seriously impact the performance of analytics or other services running jointly in the same infrastructure, as well as limit load-balancing and autoscaling capabilities. This master thesis identifies the scheduling decisions that should be taken at the different layers (Infrastructure, Platform and Software), as well as the metrics required from the environment when multiple schedulers are used, in order to get the best performance and maximise resource utilisation.
Style APA, Harvard, Vancouver, ISO itp.
31

Dautov, Rustem. "EXCLAIM framework : a monitoring and analysis framework to support self-governance in Cloud Application Platforms". Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/13379/.

Pełny tekst źródła
Streszczenie:
The Platform-as-a-Service segment of Cloud Computing has been steadily growing over the past several years, with more and more software developers opting for cloud platforms as convenient ecosystems for developing, deploying, testing and maintaining their software. Such cloud platforms also play an important role in delivering an easily-accessible Internet of Services. They provide rich support for software development, and, following the principles of Service-Oriented Computing, offer their subscribers a wide selection of pre-existing, reliable and reusable basic services, available through a common platform marketplace and ready to be seamlessly integrated into users' applications. Such cloud ecosystems are becoming increasingly dynamic and complex, and one of the major challenges faced by cloud providers is to develop appropriate scalable and extensible mechanisms for governance and control based on run-time monitoring and analysis of (extreme amounts of) raw heterogeneous data. In this thesis we address this important research question -- how can we support self-governance in cloud platforms delivering the Internet of Services in the presence of large amounts of heterogeneous and rapidly changing data? To address this research question and demonstrate our approach, we have created the Extensible Cloud Monitoring and Analysis (EXCLAIM) framework for service-based cloud platforms. The main idea underpinning our approach is to encode monitored heterogeneous data using Semantic Web languages, which then enables us to integrate these semantically enriched observation streams with static ontological knowledge and to apply intelligent reasoning. This has allowed us to create an extensible, modular, and declaratively defined architecture for performing run-time data monitoring and analysis with a view to detecting critical situations within cloud platforms. By addressing the main research question, our approach contributes to the domain of Cloud Computing, and in particular to the area of autonomic and self-managing capabilities of service-based cloud platforms. Our main contributions include the approach itself, which allows monitoring and analysing heterogeneous data in an extensible and scalable manner, the prototype of the EXCLAIM framework, and the Cloud Sensor Ontology. Our research also contributes to the state of the art in Software Engineering by demonstrating how existing techniques from several fields (i.e., Autonomic Computing, Service-Oriented Computing, Stream Processing, Semantic Sensor Web, and Big Data) can be combined in a novel way to create an extensible, scalable, modular, and declaratively defined monitoring and analysis solution.
Style APA, Harvard, Vancouver, ISO itp.
32

Nigro, Michele. "Progettazione di un cluster sul cloud con il framework HPC". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20316/.

Pełny tekst źródła
Streszczenie:
In the field of distributed computing, Microsoft HPC Pack provides an important tool for managing computational resources efficiently by orchestrating the scheduling of work units within the infrastructure. HPC natively supports integration with the Microsoft Azure cloud through well-defined networking and virtualisation strategies. After a brief presentation of Prometeia, where the project took place, the Microsoft technologies used in the work are presented in detail. A description of the project follows, which consists of two steps: the first is the creation of the application and HPC infrastructure on the Azure cloud through an automated template (creation of virtual machines and a virtual network, installation of the services and of HPC); the second is the development of an application that allows the user, according to their needs, to create and remove compute resources from the infrastructure through purpose-built commands. This solution brings time and cost advantages both over on-premise scenarios, since purchasing, maintaining and upgrading physical servers is no longer required, and over more static cloud solutions, in which compute resources left idle for long periods produce much higher costs. The final part of the thesis focuses on the analysis of the economic benefits of the presented solution, showing in detail the differences between the costs of the various options offered by Azure.
Style APA, Harvard, Vancouver, ISO itp.
33

Nyoni, Tamsanqa B. "Towards a framework for enhancing user trust in cloud computing". Thesis, University of Fort Hare, 2014. http://hdl.handle.net/10353/d1014674.

Pełny tekst źródła
Streszczenie:
Cloud computing is one of the latest appealing technological trends to emerge in the Information Technology (IT) industry. However, despite the surge in activity and interest, there are significant and persistent concerns about cloud computing, particularly with regard to trusting the platform in terms of confidentiality, integrity and availability of user data stored through these applications. These factors are significant in determining trust in cloud computing and thus provide the foundation for this study. The significant role that trust plays in the use of cloud computing was considered in relation to various trust models, theories and frameworks. Cloud computing is still considered to be a new technology in the business world, therefore minimal work and academic research has been done on enhancing trust in cloud computing. Academic research which focuses on the adoption of cloud computing and, in particular, the building of user trust has been minimal. The available trust models, frameworks and cloud computing adoption strategies that exist mainly focus on cost reduction and the various benefits that are associated with migrating to a cloud computing platform. Available work on cloud computing does not provide clear guidelines for establishing user trust in a cloud computing application. The issue of establishing a reliable trust context for data and security within cloud computing is, up to this point, not well defined. This study investigates the impact that a lack of user trust has on the use of cloud computing. Strategies for enhancing user trust in cloud computing are required to overcome the data security concerns. This study focused on establishing methods to enhance user trust in cloud computing applications through the theoretical contributions of the Proposed Trust Model by Mayer, Davis, and Schoorman (1995) and the Confidentiality, Integrity, Availability (CIA) Triad by Steichen (2010). A questionnaire was used as a means of gathering data on trust-related perceptions of the use of cloud computing. The findings of this questionnaire administered to users and potential users of cloud computing applications are reported in this study. The questionnaire primarily investigates key concerns which result in self-moderation of cloud computing use and factors which would improve trust in cloud computing. Additionally, results relating to user awareness of potential confidentiality, integrity and availability risks are described. An initial cloud computing adoption model was proposed based on a content analysis of existing cloud computing literature. This initial model, empirically tested through the questionnaire, was an important foundation for the establishment of the Critical Success Factors (CSFs) and therefore the framework to enhance user trust in cloud computing applications. The framework proposed by this study aims to assist new cloud computing users to determine the appropriateness of a cloud computing service, thereby enhancing their trust in cloud computing applications.
Style APA, Harvard, Vancouver, ISO itp.
34

Alhammadi, Abdullah. "A knowledge management based cloud computing adoption decision making framework". Thesis, Staffordshire University, 2016. http://eprints.staffs.ac.uk/2380/.

Pełny tekst źródła
Streszczenie:
Cloud computing represents a paradigm shift in the way that IT services are delivered within enterprises. There are numerous challenges for enterprises planning to migrate to cloud computing environment as cloud computing impacts multiple different aspects of an organisation and cloud computing adoption issues vary between organisations. A literature review identified that a number of models and frameworks have been developed to support cloud adoption. However, existing models and frameworks have been devised for technologically developed environments and there has been very little examination to determine whether the factors that affect cloud adoption in technologically developing countries are different. The primary research carried out for this thesis included an investigation of the factors that influence cloud adoption in Saudi Arabia, which is regarded as a technologically developing country. This thesis presents an holistic Knowledge Management Based Cloud Adoption Decision Making Framework which has been developed to support decision makers at all stages of the cloud adoption decision making process. The theoretical underpinnings for the research come from Knowledge Management, including the literature on decision making, organisational learning and technology adoption and technology diffusion theories. The framework includes supporting models and tools, combining the Analytical Hierarchical Process and Case Based Reasoning to support decision making at Strategic and Tactical levels and the Pugh Decision Matrix at the Operational level. The Framework was developed based on secondary and primary research and was validated with expert users. The Framework is customisable, allowing decision makers to set their own weightings and add or remove decision making criteria. The results of validation show that the framework enhances Cloud Adoption decision making and provides support for decision makers at all levels of the decision making process.
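A small sketch of one building block named above: deriving criteria weights from an AHP pairwise comparison matrix via the geometric-mean approximation of the principal eigenvector (requires Python 3.8+ for math.prod). The criteria and judgements are illustrative assumptions; the thesis's framework combines AHP with Case Based Reasoning and the Pugh matrix, which are not shown here.

# Illustrative AHP weight derivation for cloud-adoption criteria
# (criteria and pairwise judgements are assumptions for the example).
import math

criteria = ["cost", "security", "compliance"]
# pairwise[i][j] = how much more important criteria[i] is than criteria[j] (Saaty 1-9 scale)
pairwise = [
    [1.0, 1/3, 1/2],
    [3.0, 1.0, 2.0],
    [2.0, 1/2, 1.0],
]

def ahp_weights(matrix):
    # Geometric mean of each row, normalised, approximates the principal eigenvector.
    geo = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

for name, w in zip(criteria, ahp_weights(pairwise)):
    print(f"{name}: {w:.3f}")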
Style APA, Harvard, Vancouver, ISO itp.
35

Metwally, Khaled. "A Resource Management Framework for IaaS in Cloud Computing Environment". Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34951.

Pełny tekst źródła
Streszczenie:
Cloud computing Infrastructure-as-a-Service (IaaS) has gained momentum in the cloud computing research field due to its ability to provide efficient infrastructures. Cloud Service Providers (CSPs) are striving to offer Quality of Service (QoS)-guaranteed IaaS services while also improving their resource utilization and maximizing profit. In addition, CSPs are challenged by the need to manipulate diverse and heterogeneous resources, to realize multiple objectives for both customers and CSPs, and to handle scalability issues. These challenges are the motivation behind this work, which aims at developing a multi-layered framework for constructing and managing efficient IaaS. The fundamental layer in this framework, the Virtual Infrastructure (VI) composition layer, is dedicated to composing and delivering VIs as an IaaS service. The framework relies on a preparatory step in which all the available resources in the managed space are collected in a large repository, the Virtual Resource Pool (VRP). The VRP creation process unifies the representation of all the diverse and heterogeneous resources available. Subsequently, the proposed framework performs various resource allocation approaches as working solutions through the VI composition layer. These approaches adopt efficient techniques and methodologies in performing their operations. The working solutions begin with a composition approach that relies on an ontology-based model representation. The composition approach exploits semantic similarity, closeness centrality, and random walk techniques for efficient resource allocation. As a result, it provides an efficient solution in a reasonable computational time, with no guarantee of the optimality of the obtained solutions. To achieve an optimal solution, the composition approach uses a mathematical modeling formulation: the concepts of the composition approach are integrated into a multi-objective Mixed Integer Linear Programming (MILP) model that is solved optimally. Despite the optimality of the resulting solution, the MILP-based model suffers from a computational running-time challenge and is restricted to limited-size datacenters. To circumvent these issues, a cost-efficient model is proposed. The new model introduces a Column Generation (CG) formulation for the IaaS resource allocation problem in large datacenters that takes QoS requirements into account. This formulation is realistic, adopts large-scale optimization tools that are adequate for large datacenters, and ensures optimal solutions in a reasonable time. However, costs in large datacenters grow along with the demands of recent large-scale applications, making very large datacenters economically inefficient. Thus, we advocate a distributed framework for IaaS provisioning that guarantees affordable, scalable, and QoS-assured infrastructure for hosting large-scale applications in geo-distributed datacenters. The framework incorporates two decentralized resource allocation approaches, hierarchical and distributed, that use efficient economic models. These approaches are promising solutions for the scalability and computational complexity issues of existing centralized approaches. Finally, the cost-efficient model has been extended to fit the distributed infrastructure by considering additional constraints that impact CSP revenue.
Simulation results showcase the effectiveness of the presented work along with the potential benefits of the proposed solutions in terms of satisfying the customers’ requirements, while achieving a better resource utilization and CSP payoffs.
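A toy, brute-force version of the cost-minimising allocation that the MILP and column-generation formulations described above solve at scale: choose a host for each requested VM so that capacity constraints hold and the total cost is minimal. The hosts, VMs and cost model are illustrative assumptions.

# Tiny brute-force cost-minimising VM-to-host allocation (data is illustrative).
from itertools import product

hosts = {"h1": {"cpu": 8,  "cost": 1.0},   # cost per CPU unit used
         "h2": {"cpu": 16, "cost": 0.7}}
vms = {"vm1": 4, "vm2": 6, "vm3": 5}        # requested CPU per VM

def feasible_cost(assignment):
    used = {h: 0 for h in hosts}
    for vm, host in assignment.items():
        used[host] += vms[vm]
    if any(used[h] > hosts[h]["cpu"] for h in hosts):
        return None                          # capacity violated
    return sum(used[h] * hosts[h]["cost"] for h in hosts)

best, best_cost = None, float("inf")
for combo in product(hosts, repeat=len(vms)):
    assignment = dict(zip(vms, combo))
    cost = feasible_cost(assignment)
    if cost is not None and cost < best_cost:
        best, best_cost = assignment, cost

print(best, best_cost)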
Style APA, Harvard, Vancouver, ISO itp.
36

Flanagan, Keith Stanley. "A grid and cloud-based framework for high throughput bioinformatics". Thesis, University of Newcastle Upon Tyne, 2010. http://hdl.handle.net/10443/1410.

Pełny tekst źródła
Streszczenie:
Recent advances in genome sequencing technologies have unleashed a flood of new data. As a result, the computational analysis of bioinformatics data sets has been rapidly moving from a lab-based desktop computer environment to exhaustive analyses performed by large dedicated computing resources. Traditionally, large computational problems have been performed on dedicated clusters of high performance machines that are typically local to, and owned by, a particular institution. The current trend in Grid computing has seen institutions pooling their computational resources in order to offload excess computational work to remote locations during busy periods. In the last year or so, commercial Cloud computing initiatives have matured enough to offer a viable remote source of reliable computational power. Collections of idle desktop computers have also been used as a source of computational power in the form of 'volunteer Grids'. The field of bioinformatics is highly dynamic, with new or updated versions of software tools and databases continually being developed. Several different tools and datasets must often be combined into a coherent, automated workflow or pipeline. While existing solutions are available for constructing workflows, there is a clear need for long-lived analyses consisting of many interconnected steps to be able to migrate among Grid and cloud computational resources dynamically. This project involved research into the principles underlying the design and architecture of flexible, high-throughput bioinformatics processes. Following extensive requirements gathering, a novel Grid-based platform, Microbase, has been implemented that is based on service-oriented architectures and peer-to-peer data transfer technology. This platform has been shown to be amenable to utilising a wide range of hardware, from commodity desktop computers to high-performance cloud infrastructure. The system has been shown to drastically reduce the bandwidth requirements of bioinformatics data distribution, and therefore reduces both the financial and computational costs associated with cloud computing. The system is inherently modular in nature, comprising a service-based notification system, a data storage system, a scheduler and a job manager. In keeping with e-Science principles, each module can operate in physical isolation from the others, distributed within an intranet or across the Internet. Moreover, since each module is loosely coupled via Web services, modules have the potential to be used in combination with external service-oriented components or in isolation as part of another system. In order to demonstrate the utility of such an open-source system to the bioinformatics community, a pipeline of interconnected bioinformatics applications was developed using the Microbase system to form a high-throughput application for the comparative and visual analysis of microbial genomes. This application, the Automated Genome Analyser (AGA), has been developed to operate without user interaction. AGA exposes its results via Web services, which can be used by further analytical stages within Microbase or by external computational resources via a Web service interface, or which can be queried by users via an interactive genome browser. In addition to providing the necessary infrastructure for scalable Grid applications, a modular development framework has been provided, which simplifies the process of writing Grid applications.
Microbase has been adopted by a number of projects ranging from comparative genomics to synthetic biology simulations.
Style APA, Harvard, Vancouver, ISO itp.
37

Watzl, Johannes. "A framework for exchange-based trading of cloud computing commodities". Diss., Ludwig-Maximilians-Universität München, 2014. http://nbn-resolving.de/urn:nbn:de:bvb:19-168702.

Pełny tekst źródła
Streszczenie:
Cloud computing is a paradigm for using IT services with characteristics such as flexible and scalable service usage, on-demand availability, and pay-as-you-go billing. The respective services are called cloud services, and their nature usually motivates a differentiation into three layers: Infrastructure as a Service (IaaS) for cloud services offering the functionality of hardware resources in a virtualised way, Platform as a Service (PaaS) for services acting as execution platforms, and Software as a Service (SaaS) representing applications provided in a cloud computing way. Each of these services is offered with the illusion of unlimited scalability. The infinity gained by this illusion implies the need for some kind of regulation mechanism to manage supply and demand. Today's static pricing mechanisms are limited in their capability to adapt to dynamic characteristics of cloud environments such as changing workloads. The solution is a dynamic pricing approach comparable to today's exchanges. This requires comparability of cloud services and standardised access to avoid vendor lock-in. To achieve comparability, a classification for cloud services is introduced, where classes of cloud services representing tradable goods are expressed by the minimum requirements for a certain class. The main result of this work is a framework for exchange-based trading of cloud computing commodities, which is composed of four core components derived from existing exchange market places. An exchange component takes care of accepting orders from buyers and sellers and determines the price of the goods. A clearing component is responsible for the financial closing of a trade. The settlement component takes care of the delivery of the cloud service. A rating component monitors cloud services and logs service level agreement breaches to calculate provider ratings, especially for reliability, which is an important factor in cloud computing. The framework establishes a new basis for using cloud services and for more advanced business models. Additionally, an overview of selected economic aspects is provided, including ideas for derivative financial instruments such as futures, options, insurances, and more complex products. A first version of the framework is currently being implemented and in use at Deutsche Börse Cloud Exchange AG.
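A highly simplified sketch of the kind of price-priority order matching the exchange component described above might perform for one standardised cloud commodity class; the OrderBook class, order format and the trade-at-ask rule are assumptions for illustration, not the framework's actual engine.

# Simplified order matching for one standardised cloud commodity class.
import heapq

class OrderBook:
    def __init__(self):
        self.bids = []   # max-heap via negated price: (-price, qty, trader)
        self.asks = []   # min-heap: (price, qty, trader)

    def submit(self, side, price, qty, trader):
        heap = self.bids if side == "buy" else self.asks
        key = -price if side == "buy" else price
        heapq.heappush(heap, (key, qty, trader))
        return self.match()

    def match(self):
        trades = []
        while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
            bkey, bqty, buyer = heapq.heappop(self.bids)
            akey, aqty, seller = heapq.heappop(self.asks)
            qty = min(bqty, aqty)
            trades.append((buyer, seller, qty, akey))      # trade at the ask price
            if bqty > qty: heapq.heappush(self.bids, (bkey, bqty - qty, buyer))
            if aqty > qty: heapq.heappush(self.asks, (akey, aqty - qty, seller))
        return trades

book = OrderBook()
book.submit("sell", 0.05, 100, "providerA")   # 100 units of an IaaS class at 0.05
print(book.submit("buy", 0.06, 60, "userB"))  # [('userB', 'providerA', 60, 0.05)]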
Style APA, Harvard, Vancouver, ISO itp.
38

Gopal, Vineet. "PhysioMiner : a scalable cloud based framework for physiological waveform mining". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91815.

Pełny tekst źródła
Streszczenie:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Thesis: S.B., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 69-70).
This work presents PhysioMiner, a large-scale machine learning and analytics framework for physiological waveform mining. It is a scalable and flexible solution for researchers and practitioners to build predictive models from physiological time series data. It allows users to specify arbitrary features and conditions to train the model, computing everything in parallel in the cloud. PhysioMiner is tested on a large dataset of electrocardiography (ECG) signals from 6000 patients in the MIMIC database. Signals are cleaned and processed, and features are extracted per period. A total of 1.2 billion heart beats were processed and 26 billion features were extracted, resulting in a half-terabyte database. These features were aggregated over windows corresponding to patient events, and the aggregated features were fed into DELPHI, a multi-algorithm, multi-parameter, cloud-based system, to build a predictive model. An area under the curve of 0.693 was achieved for acute hypotensive event prediction from the ECG waveform alone. The results demonstrate the scalability and flexibility of PhysioMiner on real-world data. PhysioMiner will be an important tool for researchers, allowing them to spend less time building systems and more time building predictive models.
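A small illustrative sketch of the per-event window aggregation described above, in which per-beat features are summarised over a fixed window preceding a patient event; the feature names, window length and data layout are assumptions, and the real pipeline runs such steps in parallel in the cloud.

# Illustrative aggregation of per-beat features over a window preceding an event.
from statistics import mean, stdev

# Per-beat features: (timestamp_seconds, {"rr_interval": ..., "qrs_width": ...})
beats = [(t, {"rr_interval": 0.8 + 0.01 * (t % 5), "qrs_width": 0.09}) for t in range(0, 600)]

def aggregate_window(beats, event_time, window_s=300):
    """Summarise every per-beat feature over the window_s seconds before the event."""
    in_window = [f for t, f in beats if event_time - window_s <= t < event_time]
    summary = {}
    for name in in_window[0]:
        values = [f[name] for f in in_window]
        summary[f"{name}_mean"] = mean(values)
        summary[f"{name}_std"] = stdev(values) if len(values) > 1 else 0.0
    return summary

print(aggregate_window(beats, event_time=600))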
by Vineet Gopal.
M. Eng.
S.B.
Style APA, Harvard, Vancouver, ISO itp.
39

Sajjad, Ali. "A secure and scalable communication framework for inter-cloud services". Thesis, City University London, 2015. http://openaccess.city.ac.uk/14415/.

Pełny tekst źródła
Streszczenie:
Many contemporary cloud computing platforms offer the Infrastructure-as-a-Service provisioning model, which delivers basic virtualized computing resources like storage, hardware, and networking as on-demand, dynamic services. However, a single cloud service provider does not have limitless resources to offer to its users, and increasingly users are demanding extensibility and inter-operability with other cloud service providers. This has increased the complexity of the cloud ecosystem and resulted in the emergence of the concept of an Inter-Cloud environment, where a cloud computing platform can use the infrastructure resources of other cloud computing platforms to offer greater value and flexibility to its users. However, no common models or standards exist that allow the users of cloud service providers to provision even basic services across multiple cloud service providers seamlessly, although admittedly this is not due to any inherent incompatibility or proprietary nature of the foundation technologies on which these cloud computing platforms are built. Therefore, there is a justified need to investigate models and frameworks that allow the users of cloud computing technologies to benefit from the added value of the emerging Inter-Cloud environment. In this dissertation, we present a novel security model and protocols that aim to cover one of the most important gaps in a subsection of this field, namely the problem of provisioning secure communication within the context of a multi-provider Inter-Cloud environment. Our model offers a secure communication framework that enables a user of multiple cloud service providers to provision a dynamic, application-level secure virtual private network on top of the participating cloud service providers. We accomplish this by leveraging the scalability, robustness, and flexibility of peer-to-peer overlays and distributed hash tables, in addition to a novel use of applied cryptography techniques to design secure and efficient admission control and resource discovery protocols. The peer-to-peer approach helps us eliminate the problems of manual configuration, key management, and peer churn that are encountered when setting up secure communication channels dynamically, whereas the secure admission control and secure resource discovery protocols plug the security gaps that are commonly found in peer-to-peer overlays. In addition to the design and architecture of our research contributions, we also present the details of a prototype implementation containing all of the elements of our research, as well as experimental results detailing the performance, scalability, and overheads of our approach, carried out on actual (as opposed to simulated) commercial and non-commercial cloud computing platforms. These results demonstrate that our architecture incurs minimal latency and throughput overheads for the Inter-Cloud VPN connections among the virtual machines of a service deployed on multiple cloud platforms, at 5% and 10% respectively. Our results also show that our admission control scheme is approximately 82% more efficient and our secure resource discovery scheme is about 72% more efficient than a standard PKI-based (Public Key Infrastructure) scheme.
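A minimal sketch of consistent-hashing key placement, the lookup idea underlying DHT-based resource discovery in a peer-to-peer overlay like the one described above; node names and keys are illustrative, and the framework's admission control and cryptographic protections are not shown.

# Minimal consistent-hashing lookup for DHT-style key placement (illustrative only).
import bisect
import hashlib

def h(value: str) -> int:
    return int(hashlib.sha256(value.encode()).hexdigest(), 16) % (2 ** 32)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def lookup(self, key):
        # The key is stored on the first node clockwise from its hash position on the ring.
        keys = [p for p, _ in self.points]
        idx = bisect.bisect_right(keys, h(key)) % len(self.points)
        return self.points[idx][1]

ring = Ring(["vm-aws-1", "vm-gce-1", "vm-azure-1"])
for resource in ["vpn-endpoint/eu", "vpn-endpoint/us", "service-record/42"]:
    print(resource, "->", ring.lookup(resource))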
Style APA, Harvard, Vancouver, ISO itp.
40

Jayapandian, Catherine Praveena. "Cloudwave: A Cloud Computing Framework for Multimodal Electrophysiological Big Data". Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1405516626.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Zia-ur-Rehman. "A framework for QoS driven user-side cloud service management". Thesis, Curtin University, 2014. http://hdl.handle.net/20.500.11937/742.

Pełny tekst źródła
Streszczenie:
This thesis presents a comprehensive framework that assists the cloud service user in making cloud service management decisions, such as service selection and migration. The proposed framework utilizes the QoS history of the available services for QoS forecasting and multi-criteria decision making. It integrates all the necessary underlying processes, such as QoS monitoring, forecasting, service comparison and ranking, to recommend the optimal decision to the user.
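A minimal sketch of the selection step described above: forecast each service's QoS from its history with simple exponential smoothing and rank candidates by a weighted multi-criteria score. The services, histories, weights and smoothing factor are illustrative assumptions, not the thesis's actual models.

# Illustrative selection step: forecast QoS history, then rank by a weighted score.
def forecast(history, alpha=0.4):
    # Simple exponential smoothing over the recorded QoS values.
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

services = {
    "cloudA": {"availability": [0.99, 0.98, 0.995], "latency_ms": [120, 110, 130]},
    "cloudB": {"availability": [0.97, 0.99, 0.99],  "latency_ms": [90, 95, 100]},
}
weights = {"availability": 0.7, "latency_ms": -0.003}   # negative weight: lower latency is better

def score(metrics):
    return sum(weights[m] * forecast(hist) for m, hist in metrics.items())

best = max(services, key=lambda s: score(services[s]))
print({s: round(score(m), 4) for s, m in services.items()}, "-> choose", best)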
Style APA, Harvard, Vancouver, ISO itp.
42

GAUDENZI, FILIPPO. "A FRAMEWORK FOR CLOUD ASSURANCE AND TRANSPARENCY BASED ON CONTINUOUS EVIDENCE COLLECTION". Doctoral thesis, Università degli Studi di Milano, 2019. http://hdl.handle.net/2434/615644.

Pełny tekst źródła
Streszczenie:
The cloud computing paradigm is changing the design, development, deployment, and provisioning of services and corresponding IT infrastructures. Nowadays, users and companies increasingly rely on on-demand cloud resources to access and deliver services, while IT infrastructures are continuously evolving to address cloud needs and support cloud service delivery. This scenario points to a multi-tenant environment where services are built with strong security and scalability requirements, and cost, performance, security and privacy are key factors enabling cloud adoption. New business opportunities for providers and customers come at the price of growing concerns about how data and processes are managed and operated once deployed in the cloud. This context, where companies externalise IT services to third parties, makes the trustworthiness of IT partners and services a prerequisite for their success. Trustworthiness can be expressed and guaranteed through contracts that enforce Service Level Agreements (SLAs) and, more generally, by assurance techniques. By the term security assurance, we mean all the techniques able to assess and evaluate a given target in order to demonstrate that a security property is satisfied and that the target behaves as expected. However, traditional assurance solutions rely on static verification techniques and assume the continuous availability of a trusted evaluator. Such conditions no longer hold in the cloud, which instead requires new approaches that match its dynamic, distributed and heterogeneous nature. In this thesis, we describe an assurance technique based on certification, working towards the definition of a transparent and trusted cloud, from the bare metal to the application layer. The presented assurance approach follows the traditional certification process and extends it by providing continuous, incremental, adaptive and multi-layer verification. We propose a test-based certification scheme assessing non-functional properties of cloud-based services. The scheme is driven by non-functional requirements defined by the certification authority and by a model of the service under certification. We then define an automatic approach to verifying the consistency between requirements and models, which is at the basis of the chain of trust supported by the certification scheme. We also present a continuous certificate life cycle management process including both certificate issuing and its adaptation to address contextual changes, versioning and migration. The proposed certification scheme is, however, partial if certification of cloud composite services is not supported. The cloud computing paradigm, in fact, supports service composition and re-use at high rates. This clearly affects cloud service evaluation, which cannot simply be seen as an assessment of a single target but should instead follow a holistic view that permits certificates to be composed. Moreover, while traditional approaches to service composition are driven by the desired functionality and by requirements on deployment costs, more recent approaches also focus on SLAs and non-functional requirements.
In fact, service composition in the cloud introduces new requirements on composition approaches, including the need to i) select component services on the basis of their non-functional properties, ii) continuously adapt to both functional and non-functional changes of the component services, and iii) depart from the assumption that the cost of the composition is only the sum of the deployment costs of the component services, and also consider the costs of SLA and non-functional requirement verification. In this thesis, we first extend our certification process to evaluate non-functional properties of composite services. We then focus on the definition of an approach to the composition of cloud services driven by certified non-functional properties. We define a cost-evaluation methodology aimed at building a service composition with a set of certified properties that minimizes the total costs experienced by the cloud providers, taking into account both deployment and certification/verification costs. From the analysis and definition of certification models and processes, we propose and develop a test-based security certification framework for the cloud, which supports providers and users in the design and development of ready-to-be-certified services/applications. The framework implements a distributed approach to reach all targets at all cloud layers and a paradigm for developing test cases to assess the requested non-functional properties. The outcome of this thesis is finally validated through an experimental evaluation carried out on real scenarios that i) evaluate the assurance of a Web Hosting System provided by the Università degli Studi di Milano against the ICT security guidelines for Italian public administration issued by the "Agenzia per l’Italia Digitale" (AgID) and ii) propose and test a security benchmark for the cloud infrastructure manager OpenStack. In summary, the contribution of the thesis is manifold: i) we design and implement a certification scheme for the cloud; ii) we extend and adapt the certification of single cloud services to support the certification of cloud composite services; iii) we integrate our certification scheme with the cloud service composition process, developing an algorithm to deploy cloud composite services based on non-functional requirements while minimizing the cost from the cloud service provider’s point of view; and iv) we design and develop an assurance framework for cloud service certification and validate it in real scenarios.
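A minimal, hypothetical sketch of the cost-driven composition idea described above, assuming a greedy per-component choice among candidates whose certified properties cover the requirements; the component names, properties, and costs are invented for illustration, and the thesis's actual composition algorithm may differ:

# Illustrative sketch only: greedy selection of component services that
# hold the required certified properties, minimizing deployment plus
# certification/verification cost. All names and costs are hypothetical.

def compose(required, candidates):
    """For each abstract component, pick the cheapest candidate that
    certifies all required non-functional properties.

    `required` maps component -> set of required properties;
    `candidates` maps component -> list of dicts with keys
    'name', 'properties', 'deploy_cost', 'cert_cost'.
    """
    plan, total = {}, 0.0
    for component, props in required.items():
        feasible = [c for c in candidates[component] if props <= c["properties"]]
        if not feasible:
            raise ValueError(f"no certified candidate for {component}")
        best = min(feasible, key=lambda c: c["deploy_cost"] + c["cert_cost"])
        plan[component] = best["name"]
        total += best["deploy_cost"] + best["cert_cost"]
    return plan, total

if __name__ == "__main__":
    required = {"storage": {"confidentiality"}, "web": {"availability"}}
    candidates = {
        "storage": [
            {"name": "s1", "properties": {"confidentiality"}, "deploy_cost": 10, "cert_cost": 4},
            {"name": "s2", "properties": {"confidentiality", "integrity"}, "deploy_cost": 8, "cert_cost": 7},
        ],
        "web": [
            {"name": "w1", "properties": {"availability"}, "deploy_cost": 5, "cert_cost": 2},
        ],
    }
    print(compose(required, candidates))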
Style APA, Harvard, Vancouver, ISO itp.
43

Maeser, Robert K. III. "A Model-Based Framework for Analyzing Cloud Service Provider Trustworthiness and Predicting Cloud Service Level Agreement Performance". Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10785821.

Pełny tekst źródła
Streszczenie:

Analytics firm Cyence estimated Amazon’s four-hour cloud computing outage on February 28, 2017 “cost S&P 500 companies at least $150 million” (Condliffe 2017), and traffic monitoring firm Apica claimed “54 of the top 100 online retailers saw site performance slump by at least 20 percent” (Condliffe 2017). In 2015, data center outages cost Fortune 1000 companies between $1.25 and $2.5 billion (Ponemon 2017). Despite potential risks, the cloud computing industry continues to grow. For example, the Internet of Things, which is projected to grow 266% between 2013 and 2020 (MacGillivray et al. 2017), will drive increased demand for and dependency on cloud computing as data across multiple industries is collected and sent back to cloud data centers for processing. Enterprises continue to increase demand and dependency, with 85% having multi-cloud strategies, up from 2016 (RightScale 2017a). This growth and dependency will influence risk exposure and potential for impact (e.g. availability, reliability, performance, security, financial). The research in this Praxis and the proposed solution focus on calculating cloud service provider (CSP) trustworthiness based on cloud service level agreement (SLA) criteria and predicting cloud SLA availability performance for cloud computing services. Evolving industry standards for cloud SLAs (EC 2014, Hunnebeck et al. 2011, ISO/IEC 2016, NIST 2015, Hogben, Giles and Dekker 2012) and existing work regarding CSP trustworthiness (Ghosh, Ghosh and Das 2015, Taha et al. 2014) are leveraged as the predictive model (using Linear Regression Analysis) is constructed to analyze CSP cloud computing service SLA performance and CSP trustworthiness.
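As a rough, hypothetical illustration of the linear-regression-based prediction mentioned above (not the Praxis's actual model, features, or data), an ordinary least-squares fit could map a trustworthiness score to predicted SLA availability:

# Illustrative sketch only: ordinary least-squares linear regression
# predicting SLA availability from a trustworthiness score.
# The feature choice and sample data are hypothetical.
import numpy as np

# Hypothetical training data: CSP trustworthiness scores (0-1) and the
# observed monthly availability (%) delivered against the SLA.
trust_scores = np.array([0.62, 0.71, 0.80, 0.88, 0.93])
availability = np.array([99.10, 99.35, 99.60, 99.82, 99.95])

# Fit availability ~ b0 + b1 * trust via least squares.
X = np.column_stack([np.ones_like(trust_scores), trust_scores])
coef, *_ = np.linalg.lstsq(X, availability, rcond=None)
b0, b1 = coef

new_score = 0.85
predicted = b0 + b1 * new_score
print(f"predicted availability at trust={new_score}: {predicted:.2f}%")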

Style APA, Harvard, Vancouver, ISO itp.
44

Skolmen, Dayne Edward. "Protection of personal information in the South African cloud computing environment: a framework for cloud computing adoption". Thesis, Nelson Mandela Metropolitan University, 2016. http://hdl.handle.net/10948/12747.

Pełny tekst źródła
Streszczenie:
Cloud Computing has advanced to the point where it may be considered an attractive proposition for an increasing number of South African organisations, yet the adoption of Cloud Computing in South Africa remains relatively low. Many organisations have been hesitant to adopt Cloud solutions owing to a variety of inhibiting factors and concerns that have created mistrust in Cloud Computing. One of the top concerns identified is security within the Cloud Computing environment. The approaching commencement of new data protection legislation in South Africa, known as the Protection of Personal Information Act (POPI), may provide an ideal opportunity to address the information security-related inhibiting factors and foster a trust relationship between potential Cloud users and Cloud providers. POPI applies to anyone who processes personal information and regulates how they must handle, store and secure that information. POPI is considered to be beneficial to Cloud providers as it gives them the opportunity to build trust with potential Cloud users through achieving compliance and providing assurance. The aim of this dissertation is, therefore, to develop a framework for Cloud Computing adoption that will assist in mitigating the information security-related factors inhibiting Cloud adoption by fostering a trust relationship through compliance with the POPI Act. It is believed that such a framework would be useful to South African Cloud providers and could ultimately assist in the promotion of Cloud adoption in South Africa.
Style APA, Harvard, Vancouver, ISO itp.
45

Balasubramanian, Venkatraman. "An SDN Assisted Framework for Mobile Ad-hoc Clouds". Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35935.

Pełny tekst źródła
Streszczenie:
Over time, studies have shown that a mobile “edge-cloud” formed by hand-held devices can be a productive resource for providing services in the mobile cloud landscape. Access to such a pool of devices is largely ad hoc and based purely on the needs of the user. The pool can provide an infrastructure for various services processed with volunteer node participation, where a node in the vicinity is itself a service provider. This model of forming a cloud from a constellation of devices that in turn provide a service is the basis for the concept of Mobile Ad-hoc Cloud Computing. In this thesis, an architecture is designed for providing Infrastructure as a Service in Mobile Ad-hoc Cloud Computing. The performance evaluation reveals the gain in execution time when offloading to the mobile ad-hoc cloud. Further, this novel architecture enables the discovery of a dedicated pool of volunteer devices for computation. An optimized task scheduling algorithm is proposed that provides coordinated resource allocation. However, the failure to maintain service across heterogeneous networks shows the inability of present-day networks to adapt to frequent network changes. Thus, owing to the heavy dependence on the centralized mobile network, the service-related issues in a mobile ad-hoc cloud need to be addressed. As a result, using the principles of Software Defined Networking (SDN), a disruption-tolerant Mobile Ad-hoc Cloud framework is proposed. To evaluate this framework, a comprehensive case study is provided that shows a round-trip time improvement when using an SDN controller.
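A minimal sketch of what a greedy offloading scheduler over volunteer devices might look like, assuming each task is assigned to the device with the earliest estimated finish time; the device rates and task workloads are hypothetical, and the thesis's optimized scheduling algorithm is not reproduced here:

# Illustrative sketch only: greedy offloading of tasks to the volunteer
# device with the earliest estimated finish time. Device capacities and
# task sizes are hypothetical.

def schedule(tasks, devices):
    """`tasks` is a list of (task_id, workload); `devices` maps a device
    id to its processing rate (workload units per second). Returns the
    assignment and the resulting makespan."""
    finish_time = {d: 0.0 for d in devices}
    assignment = {}
    # Mapping the longest tasks first tends to balance load better.
    for task_id, workload in sorted(tasks, key=lambda t: -t[1]):
        best = min(devices, key=lambda d: finish_time[d] + workload / devices[d])
        finish_time[best] += workload / devices[best]
        assignment[task_id] = best
    return assignment, max(finish_time.values())

if __name__ == "__main__":
    tasks = [("t1", 40), ("t2", 25), ("t3", 60), ("t4", 10)]
    devices = {"phone_a": 2.0, "phone_b": 1.5, "tablet_c": 3.0}
    print(schedule(tasks, devices))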
Style APA, Harvard, Vancouver, ISO itp.
46

Zhu, Jiedan. "An Autonomic Framework Supporting Task Consolidation and Migration in the Cloud Environment". The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1310758418.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Huang, Liren [Verfasser]. "Cloud-based Bioinformatics Framework for Next-Generation Sequencing Data / Liren Huang". Bielefeld : Universitätsbibliothek Bielefeld, 2019. http://d-nb.info/1196644020/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

Hai, Socheat Virakraingsei. "Automatic and scalable cloud framework for parametric studies using scientific applications". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-307149.

Pełny tekst źródła
Streszczenie:
Many scientific applications are computationally expensive and often require a large pool of resources to run large-scale experiments within a reasonable time frame. Cloud computing offers a new model of consuming computing infrastructure that allows flexible application deployment. At the same time, microservice architecture has gained momentum in industry for a number of reasons, including minimal overhead, simplicity, flexibility, scalability and resilience. The combination of the two technologies has the potential to bring large benefits to scientists and researchers. The goal of this project is to develop a framework that helps researchers and scientists develop their microservices and provides a hosting platform for parallel execution of their applications.
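A minimal sketch of the parametric-study pattern this implies, assuming a local process pool stands in for the framework's microservice workers; the parameter grid and the simulated model are hypothetical:

# Illustrative sketch only: fanning out a parameter sweep to worker
# processes, standing in for the microservice workers such a framework
# would host. The simulated model and parameter grid are hypothetical.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_simulation(params):
    """Placeholder for one scientific-application run with a given
    parameter combination; a real deployment would call a microservice."""
    alpha, beta = params
    return {"alpha": alpha, "beta": beta, "result": alpha ** 2 + beta}

if __name__ == "__main__":
    # Cartesian product of the parameters under study.
    grid = list(product([0.1, 0.2, 0.3], [1, 2, 3]))
    with ProcessPoolExecutor(max_workers=4) as pool:
        for outcome in pool.map(run_simulation, grid):
            print(outcome)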
Style APA, Harvard, Vancouver, ISO itp.
49

Sehgal, Rakesh. "Service-Oriented Architecture based Cloud Computing Framework For Renewable Energy Forecasting". Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/25867.

Pełny tekst źródła
Streszczenie:
Forecasting has applications in various domains, as decision-makers are provided with a more predictable and reliable estimate of events that are yet to occur. Typically, a user would invest in licensed software or subscribe to a monthly or yearly plan in order to make such forecasts. The framework presented here differs from conventional forecasting software in that it allows any interested party to use the proposed services on a pay-per-use basis, avoiding heavy investment in the required infrastructure. The Framework-as-a-Service (FaaS) presented here uses Windows Communication Foundation (WCF) to implement a Service-Oriented Architecture (SOA). Conventionally, the responsibilities of data collection, analysis and forecasting lie with users, who have to put together other tools or software in order to produce a forecast. FaaS offers each of these responsibilities as a service, namely the External Data Collection Framework (EDCF), the Internal Data Retrieval Framework (IDRF) and the Forecast Generation Framework (FGF). The FaaS Controller, a composite service based on the above three, is responsible for coordinating activities between them. These services are accessible through an Economic Endpoint (EE) or a Technical Endpoint (TE), which a remote client can use to obtain a cost estimate or perform a forecast, respectively. The use of Cloud Computing makes these services available over the network as software to forecast energy from solar or wind resources. These services can also be used as a platform to create new services by merging existing functionality with new service features for forecasting. Eventually, this can lead to faster development of new services, where a user can choose which services to use and pay for, presenting FaaS as Platform-as-a-Service (PaaS) for forecasting.
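A minimal Python sketch of the composition pattern described above, with a controller coordinating the three services; the class and method names mirror the thesis acronyms, but the interfaces, data, and naive forecast are hypothetical, and the original framework is built on WCF rather than Python:

# Illustrative sketch only: a controller orchestrating data collection,
# retrieval, and forecast generation services. Interfaces and data are
# hypothetical stand-ins for the WCF services described in the thesis.

class ExternalDataCollectionFramework:          # EDCF
    def collect(self, source):
        # Stand-in for pulling raw measurements from `source`.
        return [3.1, 3.4, 2.9, 3.8]

class InternalDataRetrievalFramework:           # IDRF
    def __init__(self):
        self._store = {}
    def save(self, key, samples):
        self._store[key] = samples
    def load(self, key):
        return self._store[key]

class ForecastGenerationFramework:              # FGF
    def forecast(self, samples):
        # Naive forecast: repeat the mean of the stored samples.
        return sum(samples) / len(samples)

class FaaSController:
    """Composite service coordinating EDCF, IDRF, and FGF."""
    def __init__(self):
        self.edcf = ExternalDataCollectionFramework()
        self.idrf = InternalDataRetrievalFramework()
        self.fgf = ForecastGenerationFramework()
    def run(self, source):
        samples = self.edcf.collect(source)
        self.idrf.save(source, samples)
        return self.fgf.forecast(self.idrf.load(source))

print(FaaSController().run("wind_site_42"))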
Master of Science
Style APA, Harvard, Vancouver, ISO itp.
50

Mahmud, S. "Cloud enabled data analytics and visualization framework for health-shock prediction". Thesis, Coventry University, 2016. http://curve.coventry.ac.uk/open/items/deba667c-5142-4330-9fd0-c86db4a8c088/1.

Pełny tekst źródła
Streszczenie:
Health-shock can be defined as a health event that causes severe hardship to the household because of the financial burden of healthcare payments and the income loss due to inability to work. It is one of the most prevalent shocks faced by the people of underdeveloped and developing countries. In Pakistan especially, policy makers and the healthcare sector face an uphill battle in dealing with health-shock due to the lack of a publicly available dataset and an effective data analytics approach. In order to address this problem, this thesis presents a data analytics and visualization framework for health-shock prediction based on a large-scale health informatics dataset. The framework is developed using cloud computing services based on Amazon Web Services integrated with Geographical Information Systems (GIS) to facilitate the capture, storage, indexing and visualization of big data for different stakeholders using smart devices. The data was collected through offline questionnaires and an online mobile-based system through the Begum Memhooda Welfare Trust (BMWT). All data was coded in the online system for the purpose of analysis and visualization. In order to develop a predictive model for health-shock, a user study was conducted to collect a multidimensional dataset from 1000 households in rural and remotely accessible regions of Pakistan, focusing on their health, access to health care facilities and social welfare, as well as economic and environmental factors. The collected data was used to generate a predictive model using a fuzzy rule summarization technique, which can provide stakeholders with interpretable linguistic rules to explain the causal factors affecting health-shock. The evaluation of the proposed system in terms of the interpretability and accuracy of the generated data models for classifying health-shock shows promising results. The prediction accuracy of the fuzzy model, based on k-fold cross-validation of the data samples, is above 89% in predicting health-shock from the given factors. Such a framework will not only help the government and policy makers to manage and mitigate health-shock effectively and in a timely manner, but will also provide a low-cost, flexible, scalable, and secure architecture for data analytics and visualization. Future work includes extending this study to form Pakistan’s first publicly available health informatics tool to help government and healthcare professionals formulate policies and healthcare reforms. This study has implications at a national and international level to facilitate large-scale health data analytics through cloud computing in order to minimize the resource commitments needed to predict and manage health-shock.
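A minimal sketch of the k-fold cross-validation protocol behind the accuracy figure above, with a trivial majority-class predictor standing in for the fuzzy rule-based model and synthetic labels; none of the data is from the study:

# Illustrative sketch only: the k-fold cross-validation protocol used to
# report prediction accuracy. A majority-class stand-in replaces the
# fuzzy rule-based classifier, and the labels are synthetic.
from statistics import mode

def k_fold_accuracy(labels, k=5):
    """Average accuracy of a majority-class predictor over k folds."""
    fold_size = len(labels) // k
    accuracies = []
    for i in range(k):
        test = labels[i * fold_size:(i + 1) * fold_size]
        train = labels[:i * fold_size] + labels[(i + 1) * fold_size:]
        prediction = mode(train)              # "train" the stand-in model
        correct = sum(1 for y in test if y == prediction)
        accuracies.append(correct / len(test))
    return sum(accuracies) / k

if __name__ == "__main__":
    # 1 = household experienced health-shock, 0 = it did not (synthetic).
    labels = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    print(f"mean k-fold accuracy: {k_fold_accuracy(labels):.2f}")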
Style APA, Harvard, Vancouver, ISO itp.