
Doctoral dissertations on the topic "Electron Clouds"

Create accurate references in APA, MLA, Chicago, Harvard and many other styles

Consult the 50 best doctoral dissertations for your research on the topic "Electron Clouds".

Next to every entry in the bibliography you will find an "Add to bibliography" button. Use it, and a bibliographic reference will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the dissertation as a ".pdf" file and read its abstract online, whenever these details are available in the metadata.

Browse doctoral dissertations from many different disciplines and compile your bibliography correctly.

1

Petrov, Fedor [Verfasser], Oliver [Akademischer Betreuer] Boine-Frankenheim, Thomas [Akademischer Betreuer] Weiland and Hoffmann [Akademischer Betreuer] Dieter. "Electron Clouds in High Energy Hadron Accelerators / Fedor Petrov. Betreuer: Oliver Boine-Frankenheim ; Thomas Weiland ; Hoffmann Dieter". Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2013. http://d-nb.info/1107771056/34.

2

Fallas, German Vidaurre. "Characterization of mixed-phase clouds". abstract and full text PDF (free order & download UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3275833.

3

Uriarte, Rafael Brundo. "Supporting Autonomic Management of Clouds: Service-Level-Agreement, Cloud Monitoring and Similarity Learning". Thesis, IMT Alti Studi Lucca, 2015. http://e-theses.imtlucca.it/163/1/RafaelBrundoUriarte_Thesis_Final_A5.pdf.

Abstract:
Cloud computing has grown rapidly during the past few years and has become a fundamental paradigm in the Information Technology (IT) area. Clouds enable dynamic, scalable and rapid provision of services through a computer network, usually the Internet. However, managing and optimising clouds and their services in the presence of dynamism and heterogeneity is one of the major challenges faced by industry and academia. A prominent solution is resorting to self-management as fostered by autonomic computing. Self-management requires knowledge about the system and the environment to enact the self-* properties. Nevertheless, the characteristics of clouds, such as large scale and dynamism, hinder the knowledge discovery process. Moreover, cloud systems abstract the complexity of the infrastructure underlying the provided services to their customers, which obfuscates several details of the provided services and, thus, obstructs the effectiveness of autonomic managers. While a large body of work has been devoted to decision-making and autonomic management in the cloud domain, there is still a lack of adequate solutions for the provision of knowledge to these processes. In view of this lack of comprehensive solutions for the provision of knowledge to the autonomic management of clouds, we propose a theoretical and practical framework which addresses three major aspects of this process: (i) the definition of services' provision through the specification of a formal language to define Service-Level-Agreements for the cloud domain; (ii) the collection and processing of information through an extensible knowledge discovery architecture to monitor autonomic clouds with support for the knowledge discovery process; and (iii) knowledge discovery through a machine learning methodology to calculate the similarity among services, which can be employed for different purposes, e.g. service scheduling and anomalous behaviour detection. Finally, in a case study, we integrate the proposed solutions and show the benefits of this integration in a hybrid cloud test-bed.
4

Nicoll, Keri. "Coupling Between the Global Atmospheric Electric Circuit and Clouds". Thesis, University of Reading, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.525116.

5

Sebastio, Stefano. "Enriching volunteer clouds with self-* capabilities". Thesis, IMT Alti Studi Lucca, 2014. http://e-theses.imtlucca.it/146/1/Sebastio_phdthesis.pdf.

Abstract:
Provisioning, using and maintaining computational resources as services is a hard challenge. On the one hand there is an increasing demand for such services due to the increasing role of software in our society, while on the other hand the amount and variety of computational resources is growing due to the pervasiveness of computational devices in our lives. The complexity of such a problem can only be mastered by resorting to suitable technologies based on well-studied paradigms. Three prominent examples and ICT trends of the last decade are (i) cloud computing, which promotes the idea of computational resources as services; (ii) autonomic computing, which aims at minimizing the amount of human intervention and automating many aspects of a system's life-cycle; and (iii) volunteer computing, which promotes the idea of achieving complex tasks by fostering collaboration among peers. This thesis proposes an approach based on the combination of the above-mentioned paradigms (i)–(iii) for the design and evaluation of volunteer cloud platforms providing a service for executing simple tasks. The major problem under consideration is the selection of the mechanisms used by cloud participants to collaborate in providing such a service. The main contributions of the thesis are: (1) an architecture and a model for volunteer cloud platforms; (2) a discrete event simulator for such a model; (3) the extension of a statistical analysis tool to ease the analysis; (4) novel self-* strategies for collaboration among volunteers, mainly inspired by multi-agent systems and AI techniques, evaluated with the simulator using the Google Backend workload.
6

Nasir, Usman. "An assessment model for Enterprise Clouds adoption". Thesis, Keele University, 2017. http://eprints.keele.ac.uk/4281/.

Abstract:
Context: Enterprise Cloud Computing (or Enterprise Clouds) is the use of Cloud Computing services by a large-scale organisation to migrate its existing IT services or to use new Cloud-based services. There are many issues and challenges that are barriers to the adoption of Enterprise Clouds. The adoption challenges have to be addressed for better assimilation of Cloud-based services within the organisation. Objective: The aim of this research was to develop an assessment model for the adoption of Enterprise Clouds. Method: Key challenges reported as barriers to the adoption of Cloud Computing were identified from the literature using the Systematic Literature Review methodology. A survey was carried out to elicit industrial approaches and practices from Cloud Computing experts that help in overcoming the key challenges. Both the key challenges and the practices were used in formulating the assessment model. Results: The results have highlighted that the key challenges in the adoption of Enterprise Clouds are security & reliability concerns, resistance to change, vendor lock-in issues, data privacy and difficulties in application and service migration. The industrial practices to overcome these challenges are: planning and executing a pilot project, assessment of IT needs, use of open source APIs, involvement of the legal team in vendor selection, identification of the processes to change, involvement of a senior executive as change champion, using vendor partners to support application/service migration to Cloud Computing and creating employee awareness about Cloud Computing services. Conclusion: Using the key challenges and practices, an assessment model was developed that assesses an organisation's readiness to adopt Enterprise Clouds. The model measures readiness in four dimensions: technical, legal & compliance, IT capabilities and end-user readiness for the adoption of Enterprise Clouds. The model's results can help the organisation in overcoming the adoption challenges for successful assimilation of newly deployed or migrated IT services on Enterprise Clouds.
7

Chen, Chao. "Performance-oriented service management in clouds". Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/81885/.

Abstract:
Cloud computing has provided the convenience for many IT-related and traditional industries to use feature-rich services to process complex requests. Various services are deployed in the cloud and they interact with each other to deliver the required results. How to effectively manage these services, the number of which is ever increasing, within the cloud has unavoidably become a critical issue for both tenants and service providers of the cloud. In this thesis, we develop novel resource provisioning frameworks to determine resource provision for interactive services. Next, we propose algorithms for mapping Virtual Machines (VMs) to Physical Machines (PMs) under different constraints, aiming to achieve the desired Quality-of-Service (QoS) while optimizing the provision of both computing resources and communication bandwidth. Finally, job scheduling may become a performance bottleneck itself in such a large-scale cloud. In order to address this issue, distributed job scheduling frameworks have been proposed in the literature. However, such distributed job scheduling may cause resource conflicts among distributed job schedulers due to the fact that individual job schedulers make their job scheduling decisions independently. In this thesis, we investigate methods for reducing resource conflicts. We apply a game-theoretical methodology to capture the behaviour of the distributed schedulers in the cloud. The frameworks and methods developed in this thesis have been evaluated with a simulated workload, a large-scale workload trace and a real cloud testbed.
8

Pawar, Pramod S. "Cloud broker based trust assessment of cloud service providers". Thesis, City University London, 2015. http://openaccess.city.ac.uk/13687/.

Abstract:
Cloud computing is emerging as the future Internet technology due to its advantages such as the sharing of IT resources, unlimited scalability and flexibility and a high level of automation. Alongside this rapid growth, cloud computing technology also brings concerns about the security, trust and privacy of the applications and data that are hosted in the cloud environment. With a large number of cloud service providers available, determining which providers can be trusted for efficient operation of the service deployed in the provider's environment is a key requirement for service consumers. In this thesis, we provide an approach to assess the trustworthiness of cloud service providers. We propose a trust model that considers real-time cloud transactions to model the trustworthiness of cloud service providers. The trust model uses a unique uncertainty model in the representation of opinion. The trustworthiness of a cloud service provider is modelled using opinions obtained from three different computations, namely (i) compliance with SLA (Service Level Agreement) parameters, (ii) service provider satisfaction ratings and (iii) service provider behaviour. In addition, the trust model is extended to encompass essential Cloud characteristics, credibility for weighting feedback and filtering mechanisms to filter out dubious feedback providers. The credibility function and the early filtering mechanisms in the extended trust model are shown to assist in reducing the impact of malicious feedback providers.
9

Aldawsari, B. M. A. "An energy-efficient multi-cloud service broker for green cloud computing environment". Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/7954/.

Abstract:
The heavy demands on cloud computing resources have led to a substantial growth in the energy consumed by the data transferred between cloud computing parties (i.e., providers, datacentres, users, and services) and by datacentre services, due to the increasing loads on these services. On the one hand, routing and transferring large amounts of data to a datacentre located far from the user's geographical location consumes more energy than just processing and storing the same data on the cloud datacentre. On the other hand, when a cloud user submits a job (in the form of a set of functional and non-functional requirements) to a cloud service provider (i.e., a datacentre) via a cloud services broker, the broker becomes responsible for finding the best-fit service for the user request, based mainly on the user's requirements and Quality of Service (QoS) (i.e., response time, latency). Hence, it becomes highly necessary to locate the lowest-energy-consumption route between the user and the designated datacentre, and the minimum possible number of the most energy-efficient services that satisfy the user request. In fact, finding the most energy-efficient route to the datacentre, and the most energy-efficient service(s) for the user, are the biggest challenges of a multi-cloud broker environment. This thesis presents and evaluates a novel multi-cloud broker solution that contains three innovative models and their associated algorithms. The first is aimed at finding the most energy-efficient route, among multiple possible routes, between the user and the cloud datacentre. The second model finds and provides the lowest possible number of the most energy-efficient services in order to minimise data exchange, based on a bin-packing approach. The third model creates an energy-aware composition plan by integrating the most energy-efficient services, in order to fulfil user requirements. The results demonstrated a favourable performance of these models in terms of selecting the most energy-efficient route and reaching the least possible number of services for an optimum and energy-efficient composition.
10

Kudryavtsev, Andrey. "3D Reconstruction in Scanning Electron Microscope : from image acquisition to dense point cloud". Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD050/document.

Abstract:
The goal of this work is to obtain a 3D model of an object from its multiple views acquired with a Scanning Electron Microscope (SEM). For this, the technique of 3D reconstruction is used, which is a well-known application of computer vision. However, due to the specificities of image formation in SEM, and at the microscale in general, the existing techniques are not applicable to SEM images. The main reasons for that are the parallel projection and the problems of SEM calibration as a camera. As a result, in this work we developed a new algorithm allowing 3D reconstruction to be achieved in SEM while taking into account these issues. Moreover, as the reconstruction is obtained through camera autocalibration, there is no need for a calibration object. The final output of the presented techniques is a dense point cloud corresponding to the surface of the object that may contain millions of points.
11

Fritsch, Joerg. "Functional programming languages in computing clouds : practical and theoretical explorations". Thesis, Cardiff University, 2016. http://orca.cf.ac.uk/96984/.

Abstract:
Cloud platforms must integrate three pillars: messaging, coordination of workers and data. This research investigates whether functional programming languages have any special merit when it comes to the implementation of cloud computing platforms. This thesis presents the lightweight message queue CMQ and the DSL CWMWL for the coordination of workers, which we use as artefacts to prove or disprove the special merit of functional programming languages in computing clouds. We have detailed the design and implementation with the broad aim of matching the notions and requirements of computing clouds. Our approach to evaluating these aims is based on evaluation criteria built on a series of comprehensive rationales and specifics that allow the FPL Haskell to be thoroughly analysed. We find that Haskell is excellent for use cases that do not require the distribution of the application across the boundaries of (physical or virtual) systems, but not appropriate as a whole for the development of distributed cloud-based workloads that require communication with the far side and coordination of decoupled workloads. However, Haskell may qualify as a suitable vehicle in the future, given further development of formal mechanisms that embrace non-determinism in the underlying distributed environments, leading to applications that are anti-fragile rather than applications that insist on strict determinism, which can only be guaranteed on the local system or via slow blocking communication mechanisms.
12

Chen, Chen. "Electron Temperature Enhancement Effects on Plasma Irregularities Associated with Charged Dust in the Earth's Mesosphere". Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/25937.

Abstract:
Recently, experimental observations have shown that Polar Mesospheric Summer Echoes (PMSE) may be modulated by radio wave heating of the irregularity source region with a ground-based ionospheric heating facility. It is clear from these past investigations that the temporal behavior of PMSE during ionospheric heating shows promise as a diagnostic for the associated dust layer. To investigate the temporal behavior of plasma irregularities thought to produce PMSE, this work describes a new model that incorporates both finite diffusion time effects and dust charging. The hybrid model utilizes fluid ions described by continuity and momentum equations, electrons whose behavior is determined from quasi-neutrality, and charged dust described by the standard Particle-In-Cell (PIC) method. The model has been used to investigate the temporal behavior of charged-dust-associated electron irregularities during electron temperature enhancement associated with radio wave heating. The model predicts that the temporal behavior of the irregularities depends on the ratio of the electron-ion ambipolar diffusion time to the dust particle charging time, Td/Tc. The results indicate that typically for Td/Tc << 1, an enhancement in electron irregularity amplitude occurs for a period after turn-off of the radio wave heating. The work also predicts that for Td/Tc >> 1, an enhancement in electron irregularity amplitude occurs for a time period after the turn-on of the radio wave heating. Due to the dependence of Td on irregularity scale size, these results have important implications for observations of PMSE modification at different radar frequencies. Both continuous and discrete charging models were embedded into this computational model, and the results were compared and analyzed. It is evident that significant diagnostic information may be available about the dust layer from the temporal behavior of the electron irregularities during the heating process which modifies the background electron temperature. Particularly interesting and important periods of the temporal behavior are during the turn-on and turn-off of the radio wave heating. Although a number of past theoretical and experimental investigations have considered both these on and off periods, this dissertation considers further possibilities for the diagnostic information available as well as the underlying physical processes. Approximate analytical models are developed and compared to a more accurate full computational model as a reference. Then, from the temporal behavior of the electron irregularities during the turn-on and turn-off of the radio wave heating, the analytical models are used to obtain possible diagnostic information for various charged dust and background plasma quantities. Finally, two experimental campaigns have been performed at HAARP, Gakona, Alaska. Preliminary observation results look promising for the existence of a PMSE turn-on overshoot. However, more careful experiments need to be done before firm conclusions can be drawn. The newly designed Echotek digital receiver is now ready for use. It will be much superior to the experimental setup used for measurements in the previous campaign. Therefore, future experimental campaigns are planned for next year to support the theoretical research.
Ph. D.
13

Sarathchandra, Magurawalage Chathura M. "Network-aware resource management for mobile cloud". Thesis, University of Essex, 2017. http://repository.essex.ac.uk/19101/.

Abstract:
The author proposes a novel system architecture for mobile cloud computing (MCC) that includes a controller for managing computing and communication resources in a Cloud Radio Access Network (C-RAN) environment. The monitoring information gathered in the controller is used when making resource allocation/management decisions. A unified protocol has been proposed, which utilises the same packet format for mobile task offloading and resource management. Moreover, the packet format and the message types of the protocol have been presented. An MCC scenario (i.e., cloudlet+clone) that consists of a cloudlet layer has been studied, in which the cloudlets are deployed next to Wi-Fi access points and serve as localised service points in proximity to mobile devices to improve the performance of mobile cloud services. On top of this, an offloading algorithm is proposed with the main aim of deciding whether to offload to a clone or a cloudlet. The architecture described above has been implemented as a prototype by focussing on resource management in the mobile cloud. A partial implementation of a resource monitoring module that monitors both computing and communication resources has also been presented. Auto-scaling enables efficient computing resource management in the mobile cloud. An empirical performance analysis of cloud vertical scaling for mobile cloud resource management has been conducted. The working procedures of the proposed unified protocol have been illustrated to show the mobile task offloading and resource allocation functions. Simulation results of the cloudlet+clone mobile task offloading algorithm demonstrate the effectiveness and efficiency of the presented task offloading architecture and offloading algorithm in terms of response time and energy consumption. The empirical vertical auto-scaling performance analysis for mobile cloud resource allocation shows that the time delays when scaling resources (CPU, RAM, disk) in the mobile cloud vary. Moreover, the scaling delay depends on the scaling amount at the given iteration.
14

Pelletingeas, Christophe. "Performance evaluation of virtualization with cloud computing". Thesis, Edinburgh Napier University, 2010. http://researchrepository.napier.ac.uk/Output/4010.

Abstract:
Cloud computing has been the subject of much research. Research shows that cloud computing makes it possible to reduce hardware costs, reduce energy consumption and allow a more efficient use of servers. Nowadays many servers are used inefficiently because they are underutilised. The use of cloud computing together with virtualisation has been a solution to the underutilisation of those servers. However, virtualisation with cloud computing cannot offer performance equal to native performance. The aim of this project was to study the performance of virtualisation with cloud computing. To meet this aim, previous research in this area was first reviewed. The different types of cloud toolkit were outlined, as well as the different ways available to virtualise machines. In addition, open source solutions available to implement a private cloud were examined. The findings of the literature review were used in the design of the different experiments and also in the choice of the tools used to implement a private cloud. In the design and implementation, experiments were set up to evaluate the performance of public and private clouds. The results obtained through those experiments outline the performance of the public cloud and show that the virtualisation of Linux gives better performance than the virtualisation of Windows. This is explained by the fact that Linux uses paravirtualisation while Windows uses HVM. The evaluation of performance on the private cloud permitted the comparison of native performance with paravirtualisation and HVM. It was seen that paravirtualisation has performance very close to native performance, contrary to HVM. Finally, the cost of the different solutions and their advantages were presented.
15

Kruger, Neil. "Modelling the EM properties of dipole reflections with application to uniform chaff clouds". Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2317.

Abstract:
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2009.
The origin of chaff dates as far back as WWII; acting as a passive EM countermeasure, it was used to confuse enemy radar systems and is still in use today. The purpose of this study is, firstly, to build up a knowledge base for determining chaff parameters and secondly, to calculate the theoretical Radar Cross Section (RCS) of a chaff cloud. Initially, dipole resonant properties are investigated relative to dipole physical dimensions. This is extended to the wideband spatial average RCS of a dipole with application to chaff clouds. A model is developed for calculating the theoretical RCS of a cloud typically produced by a single, multiband chaff cartridge. This model is developed on the principles of sparse clouds with negligible coupling; the dipole density for which the model is valid is determined through the statistical simulation of chaff clouds. To determine the effectiveness of chaff clouds, the E-field behaviour through a chaff cloud is investigated numerically. From the simulation results a model is developed for estimating the position of, and drop in, E-field strength. It is concluded that though it would be possible to hide a target behind a chaff cloud given ideal circumstances, it is not practical in reality. Given the presented results, recommendations are made for future work.
16

Wang, Sihui. "Secondary electron yield measurements of anti-multipacting surfaces for accelerators". Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/23255.

Abstract:
Electron cloud is an unwanted effect limiting the performance of particle accelerators with positively charged particle beams of high intensity and short bunch spacing. However, the electron cloud caused by beam-induced multipacting can be sufficiently suppressed if the secondary electron yield (SEY) of the accelerator chamber surface is lower than unity. Usually, the SEY is reduced in two ways: modification of the surface chemistry and engineering of the surface roughness. The objective of this PhD project is a systematic study of SEY as a function of various surface-related parameters such as surface chemistry and surface morphology, as well as the effect of common treatments for particle accelerators such as beam pipe bakeout and surface conditioning with a beam, ultimately aiming to engineer surfaces with low SEY for electron cloud mitigation. In this work, transition metals and their coatings and laser-treated surfaces were studied as a function of annealing treatment and electron bombardment. The transition metal thin films were prepared by DC magnetron sputtering for further testing. In the first two chapters of this thesis, the literature on the electron emission effect is reviewed, covering the process of electron emission, the influencing factors and examples of low-SEY materials. In the third chapter, the experimental methods for SEY measurements and surface investigation used in this work are described. In Chapter 4, the SEY measurement setup built by the author is introduced in detail. In Chapter 5, transition metals and their coatings and non-evaporable getter (NEG) coatings are studied. All the samples have been characterised by SEY measurements, their surface morphology was analysed with Scanning Electron Microscopy (SEM) and their chemistry was studied with X-ray Photoelectron Spectroscopy (XPS). Different surface treatments of the sample surfaces, such as conditioning by electron beam and thermal treatment under vacuum, have been investigated. For example, the maximum SEY (δmax) of as-received Ti, Zr, V and Hf were 2.30, 2.31, 1.72 and 2.45, respectively. After a dose of 7.9x10-3 C mm-2, the δmax of Ti drops to 1.19. The δmax for Zr, V and Hf drop to 1.27, 1.48 and 1.40 after doses of 6.4x10-3 C mm-2, 1.3x10-3 and 5.2x10-3 C mm-2, respectively. After heating to 350 °C for 2.5 hours, the SEY of bulk Ti dropped to 1.21 and 1.40, respectively. As all the bulk samples have a flat surface, there is no difference in morphology, so this reduction of SEY is believed to be a consequence of the growth of a thin graphitic film on the surface after electron bombardment and of the removal of contamination from the surface after annealing. Chapter 6 of this thesis is about the laser-treated surfaces. Laser irradiation can transform highly reflective metals into black or dark-coloured metals. From the SEM results, metal surfaces modified by nanosecond pulsed laser irradiation form highly organised pyramidal surface microstructures, which increase the surface roughness. For this reason, the δmax of an as-received laser-treated surface can be lower than 1, which can prevent the electron cloud phenomenon. In this chapter, the influence on SEY of different laser treatment parameters, such as power, hatch distance and different atmospheres, is investigated. Meanwhile, different surface treatments such as electron conditioning and thermal treatments are studied on the laser-treated surfaces with the aid of XPS.
For example, the δmax of as-received type I surfaces with hatch distances of 50, 60 and 80 μm in air are 0.75, 0.75 and 0.80, respectively. After heating to 250 °C for 2 hours, in all cases the δmax drops to 0.59, 0.60 and 0.62, respectively. The SEYs of all as-received samples are lower than 1 due to the increased surface roughness provided by the special pyramid structure. After thermal treatment, the SEY reduces even further. This is caused by the removal of contamination from the surfaces. In conclusion, the present study has largely improved the knowledge of electron cloud mitigation techniques based on surface engineering of vacuum chambers. On the one hand, surface treatments can modify the surface chemistry, for example producing a graphitic carbon layer on the surface by electron conditioning and removing the contamination layer on top of the surface by thermal treatment. On the other hand, the SEY can be made critically low by engineering the surface roughness. Both methods allow a δmax of less than unity to be reached. The efficiency of laser-treated surfaces for electron cloud mitigation was demonstrated for the first time, leading to great interest in applying this new technology to existing and future particle accelerators.
17

Spina, Sandro. "Graph-based segmentation and scene understanding for context-free point clouds". Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/76651/.

Abstract:
The acquisition of 3D point clouds representing the surface structure of real-world scenes has become common practice in many areas including architecture, cultural heritage and urban planning. Improvements in sample acquisition rates and precision are contributing to an increase in size and quality of point cloud data. The management of these large volumes of data is quickly becoming a challenge, leading to the design of algorithms intended to analyse and decrease the complexity of this data. Point cloud segmentation algorithms partition point clouds for better management, and scene understanding algorithms identify the components of a scene in the presence of considerable clutter and noise. In many cases, segmentation algorithms operate within the remit of a specific context, wherein their effectiveness is measured. Similarly, scene understanding algorithms depend on specific scene properties and fail to identify objects in a number of situations. This work addresses this lack of generality in current segmentation and scene understanding processes, and proposes methods for point clouds acquired using diverse scanning technologies in a wide spectrum of contexts. The approach to segmentation proposed by this work partitions a point cloud with minimal information, abstracting the data into a set of connected segment primitives to support efficient manipulation. A graph-based query mechanism is used to express further relations between segments and provide the building blocks for scene understanding. The presented method for scene understanding is agnostic of scene specific context and supports both supervised and unsupervised approaches. In the former, a graph-based object descriptor is derived from a training process and used in object identification. The latter approach applies pattern matching to identify regular structures. A novel external memory algorithm based on a hybrid spatial subdivision technique is introduced to handle very large point clouds and accelerate the computation of the k-nearest neighbour function. Segmentation has been successfully applied to extract segments representing geographic landmarks and architectural features from a variety of point clouds, whereas scene understanding has been successfully applied to indoor scenes on which other methods fail. The overall results demonstrate that the context-agnostic methods presented in this work can be successfully employed to manage the complexity of ever growing repositories.
18

Mlawanda, Joyce. "A comparative study of cloud computing environments and the development of a framework for the automatic deployment of scaleable cloud based applications". Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/19994.

Abstract:
Thesis (MScEng)--Stellenbosch University, 2012
Modern-day online applications are required to deal with an ever-increasing number of users without decreasing in performance. This implies that the applications should be scalable. Applications hosted on static servers are inflexible in terms of scalability. Cloud computing is an alternative to the traditional paradigm of static application hosting and offers an illusion of infinite compute and storage resources. It is a way of computing whereby computing resources are provided by a large pool of virtualised servers hosted on the Internet. By virtually removing scalability, infrastructure and installation constraints, cloud computing provides a very attractive platform for hosting online applications. This thesis compares the cloud computing infrastructures Google App Engine and Amazon Web Services for hosting web applications and assesses their scalability performance compared to traditionally hosted servers. After the comparison of the three application hosting solutions, a proof-of-concept software framework for the provisioning and deployment of automatically scaling applications is built on Amazon Web Services, which is shown to be best suited for the development of such a framework.
19

Alansari, Marwah. "Automated management cloud-platforms based on energy policies". Thesis, University of Birmingham, 2016. http://etheses.bham.ac.uk//id/eprint/6706/.

Abstract:
Delivering environmentally friendly services has become an important issue in Cloud Computing due to the awareness raised by governments and environmental conservation organisations about the impact of electricity usage on carbon footprints. Cloud providers and cloud consumers (organisations/enterprises) have their own defined green policies to control energy consumption at their data centers. At the service management level, green policies can be mapped as energy management policies or management policies. Focusing on the cloud consumer's side, management policies are described by business managers and can change regularly. This continuous change is driven by the nature of the technical environment, changes in regulation, and business requirements. Therefore, there is a gap between describing and implementing management policies in the cloud environment. This thesis provides a method to bridge that gap by (a) defining a specification for formulating management policies into an executable form for an infrastructure-as-a-service (IaaS) cloud model; (b) designing a framework to execute the described management policies automatically; (c) proposing a modelling and analysis method to identify the potential energy management policy that would save energy cost. Each aspect covered in the thesis is evaluated with the help of an Energy Management Case Study for a private cloud scenario.
20

Zardari, Shehnila. "Cloud adoption : a goal-oriented requirements engineering approach". Thesis, University of Birmingham, 2016. http://etheses.bham.ac.uk//id/eprint/6567/.

Abstract:
The enormous potential of cloud computing for improved and cost-effective service has generated unprecedented interest in its adoption. However, a potential cloud user faces numerous risks regarding service requirements, cost implications of failure and uncertainty about cloud providers’ ability to meet service level agreements. These risks hinder the adoption of cloud computing. We motivate the need for a new requirements engineering methodology for systematically helping businesses and users to adopt cloud services and for mitigating risks in such transition. The methodology is grounded in goal-oriented approaches for requirements engineering. We argue that Goal-Oriented Requirements Engineering (GORE) is a promising paradigm to adopt for goals that are generic and flexible statements of users’ requirements, which could be refined, elaborated, negotiated, mitigated for risks and analysed for economics considerations. The methodology can be used by small to large scale organisations to inform crucial decisions related to cloud adoption. We propose a risk management framework based on the principle of GORE. In this approach, we liken risks to obstacles encountered while realising cloud user goals, therefore proposing cloud-specific obstacle resolution tactics for mitigating identified risks. The proposed framework shows benefits by providing a principled engineering approach to cloud adoption and empowering stakeholders with tactics for resolving risks when adopting the cloud. We extend the work on GORE and obstacles for informing the adoption process. We argue that obstacles’ prioritisation and their resolution is core to mitigating risks in the adoption process. We propose a novel systematic method for prioritising obstacles and their resolution tactics using Analytical Hierarchy Process (AHP). To assess the AHP choice of the resolution tactics we support the method by stability and sensitivity analysis.
21

Younis, Y. A. "Securing access to cloud computing for critical infrastructure". Thesis, Liverpool John Moores University, 2015. http://researchonline.ljmu.ac.uk/4453/.

Abstract:
Cloud computing offers cost-effective services on demand, which encourages critical infrastructure providers to consider migrating to the cloud. Critical infrastructures, such as power plants and water supplies, are considered a backbone of modern societies. Information in cloud computing is likely to be shared among different entities, which could have various degrees of sensitivity. This requires robust isolation and access control mechanisms. Although various access control models and policies have been developed, they cannot fulfil the requirements for a cloud-based access control system. The reason is that cloud computing has diverse sets of security requirements and unique security challenges such as multi-tenancy and heterogeneity of security policies, rules and domains. This thesis provides a detailed study of cloud computing security challenges and threats, which were used to identify security requirements for various critical infrastructure providers. We found that an access control system is a crucial security requirement for the surveyed critical infrastructure providers. Furthermore, the requirements analysis was used to propose new criteria to evaluate access control systems for cloud computing. Moreover, this work presents a new cloud-based access control model to meet the identified cloud access control requirements. The model not only ensures the secure sharing of resources among potentially untrusted tenants, but also has the capacity to support different access permissions for the same cloud user. Our focus in the proposed model is the lack of data isolation at lower levels (CPU caches), which could allow access control models to be bypassed and sensitive information to be gained using cache side-channel attacks. Therefore, the thesis investigates various real attack scenarios and the gaps in existing mitigation approaches. It presents a new Prime and Probe cache side-channel attack, which can give detailed information about addresses accessed by a virtual machine with no need for any information about the cache sets accessed by the virtual machine. The design, implementation and evaluation of a proposed solution preventing cache side-channel attacks are also presented in the thesis. It is a new lightweight solution, which introduces very low overhead (less than 15,000 CPU cycles). It can be applied in any operating system and prevents cache side-channel attacks in cloud computing. The thesis also presents a new solution for detecting cache side-channel attacks. It focuses on the infrastructure used to host cloud computing tenants by counting cache misses caused by a virtual machine. The detection solution has a 0% false-negative rate and a 15% false-positive rate.
22

Alwabel, Abdulelah. "A fault-tolerant mechanism for desktop cloud systems". Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/387007/.

Abstract:
Cloud computing is a paradigm that promises to move IT another step towards the age of computing utility. Traditionally, Clouds employ dedicated resources located in data centres to provide services to clients. The resources in such Cloud systems are known to be highly reliable with a low probability of failure. Desktop Cloud computing is a new type of Cloud computing that aims to provide Cloud services at little or no cost. This ambition can be achieved by combining Cloud computing and Volunteer computing into Desktop Clouds, harnessing non-dedicated resources when idle. The resources can be any type of computing machine, for example a standard PC, but such computing resources are renowned for their volatility; failures can happen at any time without warning. In Cloud computing, tasks are submitted by Cloud users or brokers to be processed and executed by virtual machines (VMs), and virtual machines are hosted by physical machines (PMs). In this context, throughput is defined as the proportion of the total number of tasks that are successfully processed, so the failure of a PM can have a negative impact on this measure of a Desktop Cloud system by causing the destruction of all hosted VMs, leading to the loss of submitted tasks currently being processed. The aim of this research is to design a VM allocation mechanism for Desktop Cloud systems that is tolerant to node failure. VM allocation mechanisms are responsible for allocating VMs to PMs and migrating them during runtime with the objective of optimisation, yet those available pay little attention to node failure events. The contribution of this research is to propose a Fault-Tolerant VM allocation mechanism that handles failure events in PMs in Desktop Clouds to ensure that the throughput of the Desktop Cloud system remains within acceptable levels by employing a replication technique. Since doing so causes an increase in power consumption in PMs, the mechanism is enhanced with a migration policy to minimise this effect, evaluated using three metrics: throughput of tasks; power consumption of PMs; and service availability. The evaluation is conducted using DesktopCloudSim, a tool developed for the purpose by this study as an extension to CloudSim, the well-known Cloud simulation tool, to simulate node failure events in Cloud systems, analysing node failure with real data sets collected from Failure Trace Archives. The experiments demonstrate that the FT mechanism improves the throughput of Cloud systems statistically significantly compared with traditional mechanisms (First Come First Serve, Greedy and RoundRobin) in the presence of node failures. The FT mechanism reduces power consumption statistically significantly when its migration policy is employed.
23

Alqahtani, Saeed Masaud H. "Cloud intrusion detection systems : fuzzy logic and classifications". Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/45430/.

Abstract:
Cloud Computing (CC), as defined by the National Institute of Standards and Technology (NIST), is a new technology model for enabling convenient, on-demand network access to a shared pool of configurable computing resources such as networks, servers, storage, applications, and services that can be rapidly provisioned and released with minimal management effort or service-provider interaction. CC is a fast-growing field; yet, there are major concerns regarding the detection of security threats, which in turn have urged experts to explore solutions to improve its security performance through conventional approaches, such as Intrusion Detection Systems (IDS). In the literature, the two most successful current IDS tools used worldwide are Snort and Suricata; however, these tools are not flexible with respect to the uncertainty of intrusions. The aim of this study is to explore novel approaches to uplift CC security performance using the Type-1 fuzzy logic (T1FL) technique with IDS when compared to IDS alone. All experiments in this thesis were performed within a virtual cloud that was built within an experimental environment. By combining the fuzzy logic technique (FL System) with the IDSs SnortIDS and SuricataIDS, each detection system was used twice (with and without FL) to create four detection systems (FL-SnortIDS, FL-SuricataIDS, SnortIDS, and SuricataIDS) using the Intrusion Detection Evaluation Dataset (ISCX). ISCX comprised two types of traffic (normal and threats); the latter was classified into four classes: Denial of Service, User-to-Root, Root-to-Local, and Probing. Sensitivity, specificity, accuracy, false alarms and detection rate were compared among the four detection systems. Then, a Fuzzy Intrusion Detection System model (FIDSCC) was designed in CC based on the results of the aforementioned four detection systems. The FIDSCC model comprises two individual systems, pre- and post-threat detection systems (pre-TDS and post-TDS). The pre-TDS was designed based on the number of threats in the aforementioned classes to assess the detection rate (DR). Based on the output of this DR and the false positives of the four detection systems, the post-TDS was designed in order to assess CC security performance. To assure the validity of the results, classifier algorithms (CAs) were introduced to each of the four detection systems and four threat classes for further comparison. The classifier algorithms were OneR, Naive Bayes, Decision Tree (DT), and K-nearest neighbour. The comparison was made based on specific measures including accuracy, incorrectly classified instances, mean absolute error, false positive rate, precision, recall, and ROC area. The empirical results showed that FL-SnortIDS was superior to FL-SuricataIDS, SnortIDS, and SuricataIDS in terms of sensitivity. However, no significant difference was found in specificity, false alarms and accuracy among the four detection systems. Furthermore, among the four CAs, the combination of FL-SnortIDS and DT was shown to be the best detection method. The results of these studies showed that the FIDSCC model can provide a better alternative for detecting threats and reducing false positive rates compared with other conventional approaches.
24

Paredes Quintanilla, Miryam Elizabeth. "Electronic design of innovative mini ultralight radioprobes aimed at tracking Lagrangian turbulence fluctuations within warm clouds". Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2950496.

25

Beza-Beza, Cristian F. "Cloud forest passalids: An evolutionary study of the genus Yumtaax (Coleoptera: Passalidae)". Diss., Wichita State University, 2013. http://hdl.handle.net/10057/6418.

26

Anegondi, Phanindhra Raju. "Electronic design aspects and instrumentation techniques of cloud condensation nuclei (CCN) spectrometer". abstract and full text PDF (free order & download UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1447634.

27

Tyler, Lamonte Bryant. "Exploring the Implementation of Cloud Security to Minimize Electronic Health Records Cyberattacks". ScholarWorks, 2018. https://scholarworks.waldenu.edu/dissertations/5281.

Abstract:
Health care leaders lack the strategies to implement cloud security for electronic medical records to prevent a breach of patient data. The purpose of this qualitative case study was to explore strategies senior information technology leaders in the healthcare industry use to implement cloud security to minimize electronic health record cyberattacks. The theory supporting this study was routine activities theory. Routine activities theory is a theory of criminal events that can be applied to technology. The study's population consisted of senior information technology leaders from a medical facility in a large northeastern city. Data collection included semistructured interviews, phone interviews, and analysis of organizational documents. The use of member checking and methodological triangulation increased the validity of this study's findings among all participants. There were 5 major themes that emerged from the study: (a) requirement of coordination with the electronic health record vendor and the private cloud vendor, (b) protection of the organization, (c) requirements based on government and organizational regulations, (d) access management, and (e) a focus on continuous improvement. The results of this study may create awareness of the necessity to secure electronic health records in the cloud to minimize cyberattacks. Cloud security is essential because of its social impact on the ability to protect confidential data and information. The results of this study will further serve as a foundation for positive social change by increasing awareness in support of the implementation of electronic health record cloud security.
28

Mishra, Subhashree. "Characterizing CCN spectra to investigate the warm rain process". abstract and full text PDF (free order & download UNR users only), 2006. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1438932.

29

Tziakouris, Giannis. "Economics-driven approach for self-securing assets in cloud". Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7868/.

Abstract:
This thesis proposes the engineering of an elastic self-adaptive security solution for the Cloud that considers assets as independent entities, with a need for customised, ad-hoc security. The solution exploits agent-based, market-inspired methodologies and learning approaches for managing the changing security requirements of assets by considering the shared and on-demand nature of services and resources while catering for monetary and computational constraints. The usage of auction procedures allows the proposed framework to deal with the scale of the problem and the trade-offs that can arise between users and Cloud service provider(s), whereas the usage of a learning technique enables our framework to operate in a proactive, automated fashion and to arrive at more efficient bidding plans, informed by historical data. A variant of the proposed framework, grounded on a simulated university application environment, was developed to evaluate the applicability and effectiveness of this solution. As the proposed solution is grounded on market methods, this thesis is also concerned with asserting the dependability of market mechanisms. We follow an experimentally driven approach to demonstrate the deficiency of existing market-oriented solutions in facing common market-specific security threats and provide candidate, lightweight defensive mechanisms for securing them against these attacks.
30

Hassett, Maribeth O. "Analysis of the Hygroscopic Properties of Fungal Spores and Pollen Grains inside an Environmental Scanning Electron Microscope (ESEM)". Miami University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=miami1461243940.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Sajjad, Ali. "A secure and scalable communication framework for inter-cloud services". Thesis, City University London, 2015. http://openaccess.city.ac.uk/14415/.

Pełny tekst źródła
Streszczenie:
Many contemporary cloud computing platforms offer the Infrastructure-as-a-Service provisioning model, which delivers basic virtualized computing resources such as storage, hardware, and networking as on-demand, dynamic services. However, a single cloud service provider does not have limitless resources to offer its users, and users increasingly demand extensibility and interoperability with other cloud service providers. This has increased the complexity of the cloud ecosystem and resulted in the emergence of the Inter-Cloud environment, in which a cloud computing platform can use the infrastructure resources of other cloud computing platforms to offer greater value and flexibility to its users. However, no common models or standards exist that allow the users of cloud service providers to provision even basic services across multiple providers seamlessly, although admittedly this is not due to any inherent incompatibility or proprietary nature of the foundation technologies on which these platforms are built. There is therefore a justified need to investigate models and frameworks that allow users of cloud computing technologies to benefit from the added value of the emerging Inter-Cloud environment. In this dissertation, we present a novel security model and protocols that aim to cover one of the most important gaps in a subsection of this field: the problem of provisioning secure communication within a multi-provider Inter-Cloud environment. Our model offers a secure communication framework that enables a user of multiple cloud service providers to provision a dynamic, application-level secure virtual private network on top of the participating providers. We accomplish this by leveraging the scalability, robustness, and flexibility of peer-to-peer overlays and distributed hash tables, together with a novel use of applied cryptography techniques to design secure and efficient admission control and resource discovery protocols. The peer-to-peer approach helps us eliminate the problems of manual configuration, key management, and peer churn that are encountered when setting up secure communication channels dynamically, whereas the secure admission control and secure resource discovery protocols plug the security gaps commonly found in peer-to-peer overlays. In addition to the design and architecture of our research contributions, we present the details of a prototype implementation containing all of the elements of our research, and we showcase experimental results detailing the performance, scalability, and overheads of our approach, carried out on actual (as opposed to simulated) commercial and non-commercial cloud computing platforms. These results demonstrate that our architecture incurs minimal latency and throughput overheads for the Inter-Cloud VPN connections among the virtual machines of a service deployed on multiple cloud platforms, of 5% and 10% respectively. Our results also show that our admission control scheme is approximately 82% more efficient, and our secure resource discovery scheme about 72% more efficient, than a standard PKI-based (Public Key Infrastructure) scheme.
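The resource discovery side of such a design rests on a distributed hash table: keys and node identifiers share one hash ring, and a key is served by the first node clockwise from its hash. The sketch below is only an illustration of that lookup rule with hypothetical node names; the thesis layers secure admission control and secure discovery on top of an overlay of this kind.

```python
# DHT-style lookup on a consistent-hashing ring (illustrative only).
import hashlib
from bisect import bisect_right

def ring_id(value: str, bits: int = 32) -> int:
    return int(hashlib.sha256(value.encode()).hexdigest(), 16) % (2 ** bits)

class Overlay:
    def __init__(self, nodes):
        self.ring = sorted((ring_id(n), n) for n in nodes)

    def lookup(self, key: str) -> str:
        """Return the node responsible for the given resource key."""
        kid = ring_id(key)
        idx = bisect_right([nid for nid, _ in self.ring], kid)
        return self.ring[idx % len(self.ring)][1]   # wrap around the ring

overlay = Overlay(["vm-aws-1", "vm-gce-2", "vm-azure-3"])
print(overlay.lookup("vpn-endpoint/service-42"))
```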
Style APA, Harvard, Vancouver, ISO itp.
32

Scoca, Vincenzo. "Improving service quality in cloud computing : from definition to deployment". Thesis, IMT Alti Studi Lucca, 2018. http://e-theses.imtlucca.it/255/1/Scoca_phdthesis.pdf.

Pełny tekst źródła
Streszczenie:
Service quality is crucial in all stages of the Cloud service life cycle, from service acquisition, where Cloud consumers and providers negotiate a mutual agreement, to service execution, where service management is driven by the agreed requirements. Much work has been devoted to the specification and enforcement of service quality terms in the Cluster, Grid and Cloud domains; however, the dynamism inherent in Cloud services is largely ignored. We propose a theoretical and practical framework that addresses the first phases of the service life cycle: (i) the definition of service provision; (ii) the negotiation of the offers/requests expressed; and (iii) the service deployment, with a focus on latency-sensitive applications. We introduce SLAC, a specification language for the definition of service requirements, the so-called service level agreements (SLAs), which allows us to define conditions and actions that can automatically modify those terms at runtime. Experimental results show that the use of SLAC can drastically reduce service violations and penalties, to the advantage of both providers and consumers. We then define a novel matchmaking and negotiation framework, which evaluates the compatibility of SLAC requests/offers and proposes the modifications necessary to reach an agreement. Experiments demonstrate the effectiveness of our proposal. We also introduce a new scheduling algorithm for latency-sensitive services in a Cloud/Edge computing scenario, which takes into account not only the service requirements but also network latency, bandwidth and computing capabilities. Again, experimental results confirm the advantages of this new approach over existing solutions.
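The idea of an SLA whose terms can be modified at runtime by condition-action rules can be pictured with the toy sketch below. This is plain Python, not actual SLAC syntax, and the term names are hypothetical: when availability drops below the agreed level, a rule relaxes the response-time term instead of immediately triggering a violation.

```python
# Hypothetical SLA with one runtime condition-action rule (not SLAC syntax).
sla = {
    "terms": {"availability": 0.99, "response_time_ms": 200},
    "rules": [
        {
            "condition": lambda metrics: metrics["availability"] < 0.99,
            "action": lambda terms: terms.update({"response_time_ms": 300}),
        }
    ],
}

def evaluate(sla, metrics):
    """Apply every rule whose condition holds, mutating the agreed terms."""
    for rule in sla["rules"]:
        if rule["condition"](metrics):
            rule["action"](sla["terms"])
    return sla["terms"]

print(evaluate(sla, {"availability": 0.985, "response_time_ms": 180}))
# -> {'availability': 0.99, 'response_time_ms': 300}
```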
Style APA, Harvard, Vancouver, ISO itp.
33

Solur, Virupakshaiah Girish. "Study of energy efficiency in portable devices using cloud computing: case of multimedia applications". Thesis, Wichita State University, 2010. http://hdl.handle.net/10057/3746.

Pełny tekst źródła
Streszczenie:
Energy efficiency is an important consideration when designing information and communication technology solutions. Today, portable devices are used extensively to receive valuable services while on the move. Advancements in software and hardware technology need to be matched by corresponding improvements in battery engineering, yet even with further progress in device miniaturization, striking improvements in battery technology cannot be expected. Because of the limited energy available on portable devices, shifting computation from the user end to a remote server has provided the opportunity for the evolution of cloud computing. With the on-demand self-service offered by cloud computing, there has been substantial growth in the number of users of portable devices such as laptops, netbooks, smartphones, and tablet PCs, which has resulted in a significant increase in energy consumption. The main objective of this research is to study and analyze the energy patterns of client-computing versus cloud-computing-based applications that provide multimedia services. The study offers potential guidance for cloud users and cloud service providers in choosing applications based on their requirements, and it provides end users with an alternative for making optimal use of cloud services through battery-powered portable devices.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering.
Style APA, Harvard, Vancouver, ISO itp.
34

Tippa, Nani. "Design of client aware scheduler for XEN with enhanced techniques to improve cloud performance". Thesis, Wichita State University, 2013. http://hdl.handle.net/10057/6842.

Pełny tekst źródła
Streszczenie:
Infrastructure as a Service (IaaS) in the cloud provides ample scope for high-volume applications to run on servers across the WAN while delivering a fair service to end clients. Many effective schedulers have been designed to account for contention on computational and communication resources, providing guaranteed effectiveness for resource sharing. However, the wide diversity of client devices in a cloud calls for scheduling based on their features and capabilities. Mobile clients, workstations, laptops, PDAs and thin clients vary in aspects such as processing power, screen size, battery life, geographical distance and more. Cloud algorithms that are aware of client capabilities yield many consumer benefits, such as network load balancing, battery savings, reduced latency and efficient processing. In this thesis, we propose a client-aware credit scheduler for a virtualized server setup in a cloud that schedules client requests based on the features and capabilities of the client device. Rich Internet Applications (RIAs) are used so that the server can determine client device capabilities such as the type of client, the remaining battery and the client's location. Results show that the client-aware credit scheduler is effective in terms of saving energy and reducing response latency.
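As a rough illustration of the client-aware idea, the sketch below derives a scheduling weight for each request from reported device capabilities (device class and remaining battery) and hands out credits in proportion. The field names and weights are hypothetical; this is not the modified Xen credit scheduler developed in the thesis.

```python
# Hedged sketch: capability-weighted credit allocation for client requests.
DEVICE_BASE_WEIGHT = {"thin_client": 1.5, "smartphone": 1.3, "laptop": 1.0, "workstation": 0.8}

def client_weight(device_type: str, battery_pct: float) -> float:
    # Low-battery, low-power clients get proportionally more server-side help.
    base = DEVICE_BASE_WEIGHT.get(device_type, 1.0)
    battery_boost = 1.0 + (1.0 - battery_pct / 100.0) * 0.5
    return base * battery_boost

def allocate_credits(clients, total_credits=1000):
    weights = {c["id"]: client_weight(c["type"], c["battery"]) for c in clients}
    total_w = sum(weights.values())
    return {cid: round(total_credits * w / total_w) for cid, w in weights.items()}

clients = [
    {"id": "phone-1", "type": "smartphone", "battery": 15},
    {"id": "ws-1", "type": "workstation", "battery": 100},
]
print(allocate_credits(clients))   # the low-battery phone receives the larger share
```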
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science
Style APA, Harvard, Vancouver, ISO itp.
35

Stanfield, Allison R. "The authentication of electronic evidence". Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/93021/1/Allison_Stanfield_Thesis.pdf.

Pełny tekst źródła
Streszczenie:
This thesis examines whether the rules of evidence, which were developed around paper documents over centuries, are adequate for the authentication of electronic evidence. The history of documentary evidence is examined, and the nature of electronic evidence is explored, particularly recent types of electronic evidence such as social media and 'the Cloud'. The old rules are then critically applied to the varied types of electronic evidence to determine whether or not they are indeed adequate.
Style APA, Harvard, Vancouver, ISO itp.
36

Abdlhamed, M. "Intrusion prediction system for cloud computing and network based systems". Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/8897/.

Pełny tekst źródła
Streszczenie:
Cloud computing offers cost-effective computational and storage services with on-demand, scalable capacity according to customers' needs. These properties encourage organisations and individuals from different disciplines to migrate from classical computing to cloud computing. Although cloud computing is a trendy technology that opens new horizons for many businesses, it is a paradigm that exploits existing computing technologies in a new framework rather than being a novel technology in itself. This means that cloud computing has inherited classical computing problems that remain challenging. Cloud computing security is considered one of the major problems, requiring strong security systems to protect the system and the valuable data stored and processed in it. Intrusion detection systems are an important security component and defence layer that detect cyber-attacks and malicious activities in cloud and non-cloud environments. However, they have limitations, such as attacks being detected only after the damage has already been done. In recent years, cyber-attacks have increased rapidly in volume and diversity. In 2013, for example, over 552 million customers' identities and crucial information were revealed through data breaches worldwide [3]. These growing threats are further demonstrated by the 50,000 daily attacks on the London Stock Exchange [4]. It has been predicted that the economic impact of cyber-attacks will cost the global economy $3 trillion in aggregate by 2020 [5]. This thesis proposes an Intrusion Prediction System that is capable of sensing an attack before it happens in cloud or non-cloud environments. The proposed solution is based on assessing the host system's vulnerabilities and monitoring network traffic for attack preparations, and it has three main modules. The monitoring module observes the network for any intrusion preparations; for it, this thesis proposes a new dynamic-selective statistical algorithm for detecting scan activities, which are part of the reconnaissance that represents an essential step in network attack preparation. The proposed method performs a selective statistical analysis of network traffic, searching for attack or intrusion indications, by exploring and applying different statistical and probabilistic methods for scan detection. The second module is vulnerability assessment, which evaluates the weaknesses and faults of the system and measures the probability of the system falling victim to a cyber-attack. Finally, the prediction module combines the output of the other two modules and performs a risk assessment of the system's security to predict intrusions. The results of the conducted experiments show that the suggested system outperforms analogous methods in network scan detection, which translates into a significant improvement in the security of the targeted system. The scan detection algorithm achieved high detection accuracy with 0% false negatives and 50% false positives. In terms of performance, the detection algorithm consumed only 23% of the data needed for analysis compared to the best-performing rival detection method.
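A much-simplified picture of scan detection is the sliding-window rule below: flag a source address once it has contacted more than a threshold of distinct destination ports within a time window. This is only an illustration of the general principle, not the dynamic-selective statistical algorithm proposed in the thesis.

```python
# Toy sliding-window port-scan detector (illustrative only).
from collections import defaultdict, deque

class ScanDetector:
    def __init__(self, window_s=60, port_threshold=20):
        self.window_s = window_s
        self.port_threshold = port_threshold
        self.events = defaultdict(deque)   # src_ip -> deque of (timestamp, dst_port)

    def observe(self, ts, src_ip, dst_port):
        q = self.events[src_ip]
        q.append((ts, dst_port))
        while q and ts - q[0][0] > self.window_s:   # drop events outside the window
            q.popleft()
        distinct_ports = {port for _, port in q}
        return len(distinct_ports) > self.port_threshold   # True => scan suspected

det = ScanDetector(window_s=10, port_threshold=5)
alerts = [det.observe(t, "10.0.0.9", 1000 + t) for t in range(8)]
print(alerts)   # flips to True once more than 5 distinct ports are seen within 10 s
```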
Style APA, Harvard, Vancouver, ISO itp.
37

Chen, Tao. "Self-aware and self-adaptive autoscaling for cloud based services". Thesis, University of Birmingham, 2016. http://etheses.bham.ac.uk//id/eprint/6713/.

Pełny tekst źródła
Streszczenie:
Modern Internet services increasingly rely on cloud computing for flexible, elastic and on-demand provision. Typically, the Quality of Service (QoS) of cloud-based services can be tuned using different underlying cloud configurations and resources, e.g., the number of threads, CPU and memory, which are shared, leased and priced as utilities. This benefit is fundamentally grounded in autoscaling: an automatic and elastic process that adapts cloud configurations on demand according to time-varying workloads. This thesis proposes a holistic cloud autoscaling framework that effectively and seamlessly addresses existing challenges related to the different logical aspects of autoscaling, including architecting the autoscaling system, modelling the QoS of the cloud-based service, determining the granularity of control and making trade-off autoscaling decisions. The framework takes advantage of the principles of self-awareness and the related algorithms to adaptively handle the dynamics, uncertainties, QoS interference and trade-offs between objectives exhibited in the cloud. The major benefit is that, by leveraging the framework, cloud autoscaling can be achieved effectively without heavy human analysis and design-time knowledge. Through various experiments using the RUBiS benchmark and realistic workloads in a real cloud setting, this thesis evaluates the effectiveness of the framework on various quality indicators and compares it with other state-of-the-art approaches.
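For readers unfamiliar with autoscaling, the deliberately simple reactive rule below makes the notion concrete: add or remove a VM when utilisation crosses fixed thresholds. The thesis's framework is self-aware and model-driven rather than threshold-based, so this is only a baseline sketch with assumed parameter names.

```python
# Simplest reactive autoscaling rule (baseline illustration only).
def autoscale(current_vms, cpu_utilisation, scale_out_at=0.75, scale_in_at=0.30,
              min_vms=1, max_vms=20):
    """Return the VM count for the next control interval."""
    if cpu_utilisation > scale_out_at and current_vms < max_vms:
        return current_vms + 1
    if cpu_utilisation < scale_in_at and current_vms > min_vms:
        return current_vms - 1
    return current_vms

print(autoscale(current_vms=4, cpu_utilisation=0.82))  # 5 (scale out)
print(autoscale(current_vms=4, cpu_utilisation=0.20))  # 3 (scale in)
```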
Style APA, Harvard, Vancouver, ISO itp.
38

He, Yijun, i 何毅俊. "Protecting security in cloud and distributed environments". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B49617631.

Pełny tekst źródła
Streszczenie:
Encryption helps to ensure that information within a session is not compromised. Authentication and access control measures ensure legitimate and appropriate access to information, and prevent inappropriate access to such resources. While encryption, authentication and access control each has its own responsibility in securing a communication session, a combination of these three mechanisms can provide much better protection for information. This thesis addresses encryption, authentication and access control related problems in cloud and distributed environments, since these problems are very common in modern organizational environments. The first contribution is a User-friendly Location-free Encryption System for Mobile Users (UFLE). It is an encryption and authentication system that provides maximum security for sensitive data in distributed environments (corporate, home and outdoor scenarios) but requires minimum user effort (i.e. no biometric entry or possession of cryptographic tokens) to access the data. It lets users access data securely and easily at any time and in any place, and it avoids data breaches due to stolen or lost laptops and USB flash drives. The multi-factor authentication protocol provided in this scheme is also applicable to cloud storage. The second contribution is a Simple Privacy-Preserving Identity-Management for Cloud Environment (SPICE). It is the first digital identity management system that can satisfy "unlinkability" and "delegatable authentication" in addition to other desirable properties in the cloud environment. Unlinkability ensures that none of the cloud service providers (CSPs), even if they collude, can link the transactions of the same user. Delegatable authentication, on the other hand, is unique to the cloud platform, in which several CSPs may join together to provide a packaged service, with one of them being the source provider which interacts with the clients and performs authentication, while the others are receiving CSPs which remain transparent to the clients. The authentication should be delegatable such that a receiving CSP can authenticate a user without direct communication with either the user or the registrar, and without fully trusting the source CSP. The third contribution addresses re-encryption-based access control in cloud and distributed storage. We propose the first non-transferable proxy re-encryption scheme [16] which successfully achieves the non-transferable property. Proxy re-encryption allows a third party (the proxy) to re-encrypt a ciphertext which has been encrypted for one party, without seeing the underlying plaintext, so that it can be decrypted by another. A proxy re-encryption scheme is said to be non-transferable if the proxy and a set of colluding delegatees cannot re-delegate decryption rights to other parties. The scheme can be used by a content owner to delegate content decryption rights to users in untrusted cloud storage. The advantages of using such a scheme are that decryption keys are managed by the content owner and that the plaintext is always hidden from the cloud provider.
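To show what "re-encrypting without seeing the plaintext" means, the toy sketch below follows the classic ElGamal-based (BBS98-style) proxy re-encryption construction. It is not the non-transferable scheme proposed in the thesis, and the tiny group parameters are for illustration only.

```python
# Toy BBS98-style proxy re-encryption over a small safe-prime group.
p, q, g = 23, 11, 4          # p = 2q + 1, g has order q (toy parameters)

def keygen(sk):              # sk in [1, q-1]
    return pow(g, sk, p)     # public key g^sk mod p

def encrypt(pk, m, k):       # ciphertext for the key holder: (m*g^k, pk^k)
    return (m * pow(g, k, p) % p, pow(pk, k, p))

def rekey(sk_a, sk_b):       # delegation key a -> b, computed by the delegator
    return sk_b * pow(sk_a, -1, q) % q

def reencrypt(ct, rk):       # proxy step: turns g^(a*k) into g^(b*k), plaintext unseen
    c1, c2 = ct
    return (c1, pow(c2, rk, p))

def decrypt(sk, ct):
    c1, c2 = ct
    gk = pow(c2, pow(sk, -1, q), p)      # recover g^k
    return c1 * pow(gk, -1, p) % p

a, b = 3, 7
ct = encrypt(keygen(a), m=9, k=5)        # encrypted for Alice
ct_b = reencrypt(ct, rekey(a, b))        # proxy re-encrypts for Bob
print(decrypt(b, ct_b))                  # 9
```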
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
Style APA, Harvard, Vancouver, ISO itp.
39

Krotsiani, M. "Model driven certification of Cloud service security based on continuous monitoring". Thesis, City University London, 2016. http://openaccess.city.ac.uk/15044/.

Pełny tekst źródła
Streszczenie:
Cloud Computing technology offers an advanced approach to the provision of infrastructure, platform and software services without the extensive cost of owning, operating or maintaining the required computational infrastructure. However, despite being cost effective, this technology has raised concerns regarding the security, privacy and compliance of data and services offered through cloud systems. This is mainly due to the lack of transparency of services to consumers, or to the fact that service providers are unwilling to take full responsibility for the security of the services they offer through cloud systems and to accept liability for security breaches [18]. In such circumstances, there is a trust deficiency that needs to be addressed. The potential of certification as a means of addressing the lack of trust in the security of different types of services, including the cloud, has been widely recognised [149]. However, this recognition has not led to the wide adoption that was expected. The reason may be that certification has traditionally been carried out through standards and certification schemes (e.g., ISO27001 [149], ISO27002 [149] and Common Criteria [65]) that involve predominantly manual systems for security auditing, testing and inspection. Such processes tend to be lengthy and have a significant financial cost, which often prevents small technology vendors from adopting them [87]. In this thesis, we present an automated approach to cloud service certification in which the evidence is gathered through continuous monitoring. This approach can be used to: (a) define and automatically execute certification models, to continuously acquire and analyse evidence regarding the provision of services on cloud infrastructures through continuous monitoring; (b) use this evidence to assess whether the provision is compliant with required security properties; and (c) generate and manage digital certificates to confirm the compliance of services with specific security properties.
Style APA, Harvard, Vancouver, ISO itp.
40

Morichetta, Andrea. "A formal approach to decision support on Mobile Cloud Computing applications". Thesis, IMT Alti Studi Lucca, 2016. http://e-theses.imtlucca.it/190/1/Morichetta_phdthesis.pdf.

Pełny tekst źródła
Streszczenie:
Mobile Cloud Computing (MCC) is an emerging topic that has grown with the explosion of mobile applications. In MCC systems, application functionalities are dynamically partitioned between mobile devices and cloud infrastructures. The main research direction in this field aims at optimizing different metrics, such as performance, energy efficiency, reliability and security, in the dynamic environment in which the MCC application is located. Optimization in MCC refers to taking advantage of the offloading process, which consists of moving computation from the local device to a remote one. The biggest challenge is to define a strategy that is able to decide when to offload and which part of the application to move. This technique generally improves the efficiency of a system, although it can sometimes lead to performance degradation. To decide when and what to offload, this thesis proposes a new general framework supporting the design and runtime execution of applications in their own MCC scenarios. In particular, the framework provides a new specification language, called MobiCa, equipped with a formal semantics that makes it possible to capture all the characteristics of an MCC system. Besides the strategy optimization achieved by exploiting the potential of the model checker UPPAAL, we propose a set of methods for determining optimal finite/infinite schedules, which manage the resource assignment of components with the aim of improving system efficiency in terms of battery consumption and time. Furthermore, we propose two optimized scheduling algorithms, developed in Java, that exploit parallel computation to improve system performance.
Style APA, Harvard, Vancouver, ISO itp.
41

Baharon, M. R. "Mobile network and cloud based privacy-preserving data aggregation and processing". Thesis, Liverpool John Moores University, 2017. http://researchonline.ljmu.ac.uk/7244/.

Pełny tekst źródła
Streszczenie:
The emerging technologies of mobile devices and cloud computing have brought a new and efficient way for data to be collected, processed and stored by mobile users. With improved specifications of mobile devices and various mobile applications provided by cloud servers, mobile users can enjoy tremendous advantages in managing their daily lives through those applications instantaneously, conveniently and productively. However, using such applications may expose user data to unauthorised access when the data is outsourced for processing and storage, and such a setting raises privacy and security issues for mobile users. As a result, mobile users would be reluctant to accept those applications without any guarantee of the safety of their data. The recent breakthrough of Fully Homomorphic Encryption (FHE) has brought a new solution for processing data securely, and several variants and improvements on the existing methods have been developed because of efficiency problems. Experience of such problems has led us to explore two areas of study, Mobile Sensing Systems (MSS) and Mobile Cloud Computing (MCC). In MSS, the functionality of smartphones is extended to sense and aggregate surrounding data for processing by an Aggregation Server (AS) that may be operated by a Cloud Service Provider (CSP). MCC, on the other hand, allows resource-constrained devices like smartphones to fully leverage the services provided by the powerful, massive servers of CSPs for data processing. To support these two application scenarios, this thesis proposes two novel schemes: an Accountable Privacy-preserving Data Aggregation (APDA) scheme and a Lightweight Homomorphic Encryption (LHE) scheme. MSS is a kind of wireless sensor network (WSN) that implements a data aggregation approach to save the battery lifetime of mobile devices. Such an approach can also improve the security of the outsourced data by mixing the data prior to transmission to the AS, so as to prevent collusion between mobile users and the AS (or its CSP). The exposure of users' data to other mobile users leads to a privacy breach, and existing methods for preserving users' privacy only provide an integrity check on the aggregated data, without being able to identify misbehaving nodes once the integrity check has failed. To overcome these problems, our first scheme, APDA, is proposed to efficiently preserve privacy and support accountability of mobile users during data aggregation. APDA is designed in three versions to provide balanced solutions in terms of misbehaving-node detection and data aggregation efficiency for different application scenarios. In addition, the successfully aggregated data needs to be accompanied by summary information based on necessary additive and non-additive functions. To preserve the privacy of mobile users, such summaries could be computed using existing privacy-preserving data aggregation techniques; those techniques, however, have limitations in terms of applicability, efficiency and functionality. Thus, APDA has been extended to allow maximal-value finding to be computed on ciphertext data so as to preserve user privacy with good efficiency. Such a solution could also be developed for other comparative operations like Average, Percentile and Histogram.
Three versions of maximal-value finding (Max) are introduced and analysed in order to compare their efficiency and their capability to determine the maximum value in a privacy-preserving manner. Moreover, the formal security proof and extensive performance evaluation of our proposed schemes demonstrate that APDA and its extended version can achieve stronger security with an optimised efficiency advantage over the state of the art in terms of both computational and communication overheads. In the MCC environment, the new LHE scheme is proposed with a significant difference: it allows arbitrary functions to be executed on ciphertext data. Such a scheme enables rich mobile applications provided by CSPs to be leveraged by resource-constrained devices in a privacy-preserving manner. The scheme works well as long as the noise (a random number attached to the plaintext for security reasons) is less than the encryption key, which makes it flexible. The flexibility of the key size enables the scheme to incorporate any computation function and still produce an accurate result. In addition, the scheme encrypts integers rather than individual bits so as to improve its efficiency. With a proposed process that allows three or more parties to communicate securely, the scheme is suited to the MCC environment due to its lightweight property and strong security. Furthermore, the efficacy and efficiency of this scheme are thoroughly evaluated and compared with other schemes. The results show that the scheme can achieve stronger security at a reasonable cost.
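The point that decryption works "as long as the noise is less than the key" can be illustrated with a toy DGHV-style symmetric scheme over the integers, sketched below. This is only an illustration of noise-based homomorphic encryption under assumed toy parameters; it is not the LHE construction proposed in the thesis.

```python
# Toy noise-based (DGHV-style) symmetric homomorphic encryption over integers.
import random

P = 1_000_003          # secret key; kept much larger than any noise we introduce
B = 1000               # message space: integers in [0, B)

def encrypt(m, q_range=10**6):
    q = random.randrange(1, q_range)
    r = random.randrange(0, 50)          # small noise term
    return P * q + B * r + m             # ciphertext hides m behind key multiples and noise

def decrypt(c):
    return (c % P) % B                   # correct while B*r + m stays below the key P

c1, c2 = encrypt(123), encrypt(456)
print(decrypt(c1 + c2))                  # 579: addition on ciphertexts carries over
print(decrypt(3 * c1))                   # 369: scaling by a public constant also works
```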
Style APA, Harvard, Vancouver, ISO itp.
42

Chang, Ouliang. "Numerical Simulation of Ion-Cyclotron Turbulence Generated by Artificial Plasma Cloud Release". Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/34018.

Pełny tekst źródła
Streszczenie:
Possibilities for generating plasma turbulence to provide control of space weather processes have been of particular interest in recent years. Such turbulence can be created by chemicals released into a magnetized background plasma. The released plasma clouds consist of heavy ions which have a ring velocity distribution and a large amount of free energy to drive the turbulence. An electromagnetic hybrid model (fluid electrons and particle ions) incorporating electron inertia is developed to study the generation and nonlinear evolution of this turbulence. Fourier pseudo-spectral methods are combined with finite difference methods to solve the electron momentum equations. Time integration is accomplished by a 4th-order Runge-Kutta scheme or a predictor-corrector method. The numerical results show good agreement with theoretical predictions and provide further insight into the nonlinear evolution of the turbulence. Initially the turbulence lies near harmonics of the ring plasma ion cyclotron frequency and propagates nearly perpendicular to the background magnetic field, as predicted by linear theory. If the amplitude of the turbulence is sufficiently large, the quasi-electrostatic, short-wavelength ion cyclotron waves evolve nonlinearly into electromagnetic, obliquely propagating shear Alfven waves with much longer wavelength. The results indicate that ring densities above a few percent of the background plasma density may produce wave amplitudes large enough for such an evolution to occur. The extraction of energy from the ring plasma may be in the range of 10-15%, with a generally slight decrease in magnitude as the ring density is increased from a few percent to several tens of percent of the background plasma density. Possibilities for modelling the effects of nonlinear processes on energy extraction by introducing electron anomalous resistivity are also addressed. The suitability of the nonlinearly generated shear Alfven waves for scattering radiation-belt particles is discussed.
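For reference, the classical 4th-order Runge-Kutta step used for time integration has the form sketched below; the actual solver couples such a step with pseudo-spectral and finite-difference spatial operators, and the harmonic-oscillator example here is only a stand-in test problem.

```python
# Minimal classical RK4 step for dy/dt = f(t, y).
import numpy as np

def rk4_step(f, t, y, dt):
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Test problem: simple harmonic oscillator, state y = (position, velocity).
f = lambda t, y: np.array([y[1], -y[0]])
y, t, dt = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(100):
    y = rk4_step(f, t, y, dt)
    t += dt
print(y)   # close to (cos(1), -sin(1))
```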
Master of Science
Style APA, Harvard, Vancouver, ISO itp.
43

Kothuru, Hemanth J. S. "Study of energy efficiency on portable devices using cloud computing: the case of office productivity applications". Thesis, Wichita State University, 2010. http://hdl.handle.net/10057/3730.

Pełny tekst źródła
Streszczenie:
Today, there is exponential growth in the use of laptops for computing and communication. However, the battery life of laptops is only a few hours at best. Furthermore, studies indicate that laptops contribute approximately 1% of overall global energy consumption. There are thus significant incentives to minimize the energy consumed by laptops. To achieve this goal, it is important to understand the energy expended by each component of a laptop. Initially in this work, the power consumed by each component of a modern laptop was systematically studied. Results indicate that wireless communication is a significant consumer of power, with the display, graphics card, and processor being particularly obvious power hogs. A subsequent study comparing the energy consumption of portable devices using a remote cloud application with local execution revealed some interesting facts.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science.
Style APA, Harvard, Vancouver, ISO itp.
44

Shetty, Rahul. "Gas kinematics and dynamics [electronic resource]: spiral structure and cloud formation in disk galaxies". College Park, Md.: University of Maryland, 2007. http://hdl.handle.net/1903/7603.

Pełny tekst źródła
Streszczenie:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Dept. of Astronomy. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Style APA, Harvard, Vancouver, ISO itp.
45

Gao, Bo. "Developing energy-aware workload offloading frameworks in mobile cloud computing". Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/78802/.

Pełny tekst źródła
Streszczenie:
Mobile cloud computing is an emerging field of research that aims to provide a platform on which intelligent, feature-rich applications are delivered to the user anytime and anywhere. Computation offloading between mobile and cloud plays a key role in this vision and ensures that the integration between mobile and cloud is both seamless and energy-efficient. In this thesis, we develop a suite of energy-aware workload offloading frameworks to accommodate the efficient execution of mobile workflows on a mobile cloud platform. We start by looking at two energy objectives of a mobile cloud platform. While the first objective aims at minimising the overall energy cost of the platform, the second aims at the longevity of the platform, taking into account the residual battery power of each device. We construct optimisation models for both objectives and develop two efficient algorithms to approximate the optimal solution. According to simulation results, our greedy autonomous offload (GAO) algorithm is able to efficiently produce allocation schemes that are close to optimal. Next, we look at the task allocation problem from a workflow's perspective and develop energy-aware offloading strategies for time-constrained mobile workflows. We demonstrate the effect that software and hardware characteristics have on the offload efficiency of mobile workflows with a workflow-oriented greedy autonomous offload (WGAO) algorithm, an extension of the GAO algorithm. Thirdly, we propose a novel network I/O model to describe the bandwidth dependencies and the allocation problem in mobile networks. This model lays the foundation for further developments such as the cost-based and adaptive bandwidth allocation schemes that we also present in this thesis. Lastly, we apply a game-theoretical approach to model the non-cooperative behaviour of mobile cloud applications that reside on the same device. A mixed-strategy Nash equilibrium is derived for the offload game, which further quantifies the price of anarchy of the system.
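The greedy flavour of such energy-aware offloading decisions can be sketched as follows: for each task, estimate the energy of running locally versus offloading (radio transfer plus idle waiting) and pick the cheaper option. All parameter names and values are illustrative assumptions; the thesis's GAO algorithm additionally accounts for platform-wide objectives and residual battery.

```python
# Hedged sketch of a per-task local-vs-offload energy comparison.
def offload_plan(tasks, cpu_power_w=2.0, radio_power_w=1.2, idle_power_w=0.3,
                 uplink_mbps=5.0, cloud_speedup=8.0):
    plan = {}
    for t in tasks:
        local_energy = cpu_power_w * t["cpu_seconds"]
        transfer_s = t["data_mb"] * 8 / uplink_mbps          # time to ship the input data
        remote_s = t["cpu_seconds"] / cloud_speedup          # time the device waits idle
        offload_energy = radio_power_w * transfer_s + idle_power_w * remote_s
        plan[t["name"]] = "offload" if offload_energy < local_energy else "local"
    return plan

tasks = [{"name": "face_detect", "cpu_seconds": 12.0, "data_mb": 2.0},
         {"name": "video_filter", "cpu_seconds": 4.0, "data_mb": 40.0}]
print(offload_plan(tasks))   # compute-heavy task is offloaded, data-heavy one stays local
```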
Style APA, Harvard, Vancouver, ISO itp.
46

Zhang, Linquan, i 张琳泉. "Move my data to the cloud: an online cost-minimizing approach". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48330140.

Pełny tekst źródła
Streszczenie:
Cloud computing has rapidly emerged as a new computation paradigm, providing agile and scalable resource access in a utility-like fashion. Processing of massive amounts of data has been a primary use of clouds in practice. While many efforts have been devoted to designing the computation models (e.g., MapReduce), one important issue has been largely neglected: how do we efficiently move data, generated in practice at different geographical locations over time, into a cloud for effective processing? The usual approach of shipping data using hard disks lacks flexibility and security. As the first dedicated effort, this work tackles this massive, dynamic data migration issue. Targeting a cloud encompassing disparate data centers with different resource charges, we model the cost-minimizing data migration problem and propose efficient offline and online algorithms, which optimize the routes of data into the cloud and the choice of the data center that aggregates the data for processing at any given time. Three online algorithms are proposed to practically guide data migration over time. With no need for any future information on the data generation pattern, an online lazy migration (OLM) algorithm achieves a competitive ratio as low as 2.55 under typical system settings, and a work function algorithm (WFA) has a linear 2K-1 competitive ratio (K is the number of data centers). A third algorithm, randomized fixed horizon control (RFHC), achieves a competitive ratio of 1 + (1/(l+1))(κ/λ) in theory with a lookahead window of l into the future, where κ and λ are protocol parameters. We conduct extensive experiments to evaluate our online algorithms, using real-world meteorological data generation traces, under realistic cloud settings. Comparisons among the online and offline algorithms show close-to-offline-optimum performance and demonstrate the effectiveness of our online algorithms in practice.
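The general principle behind lazy online migration can be pictured with the ski-rental-style rule below: keep aggregating at the current data centre and migrate only once the accumulated extra routing cost since the last move exceeds a multiple of the one-off migration cost. This is an illustration of the principle only, under assumed cost inputs, not the exact OLM algorithm of the thesis.

```python
# Ski-rental-style lazy migration rule (illustrative only).
def lazy_migration(per_step_costs, migration_cost, beta=1.0):
    """per_step_costs[t][dc] = routing cost of aggregating at data centre dc at step t."""
    current = min(per_step_costs[0], key=per_step_costs[0].get)
    extra_since_move, schedule = 0.0, [current]
    for costs in per_step_costs[1:]:
        best = min(costs, key=costs.get)
        extra_since_move += costs[current] - costs[best]     # regret of staying put
        if extra_since_move > beta * migration_cost:
            current, extra_since_move = best, 0.0            # pay once to migrate
        schedule.append(current)
    return schedule

steps = [{"dc1": 1.0, "dc2": 3.0}, {"dc1": 4.0, "dc2": 1.0},
         {"dc1": 4.0, "dc2": 1.0}, {"dc1": 4.0, "dc2": 1.0}]
print(lazy_migration(steps, migration_cost=5.0))   # stays on dc1, then moves to dc2
```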
published_or_final_version
Computer Science
Master
Master of Philosophy
Style APA, Harvard, Vancouver, ISO itp.
47

Fang, Daren. "A semantic framework for unified cloud service search, recommendation, retrieval and management". Thesis, Edinburgh Napier University, 2015. http://researchrepository.napier.ac.uk/Output/9836.

Pełny tekst źródła
Streszczenie:
Cloud computing (CC) is a revolutionary paradigm for consuming Information and Communication Technology (ICT) services. However, while trying to find the optimal services, many users feel confused due to inadequate descriptions of service information. Although some efforts have been made in the semantic modelling, retrieval and recommendation of cloud services, existing practices work effectively only in certain restricted scenarios, dealing, for example, with basic and non-interactive service specifications. In the meantime, various service management tasks are usually performed individually for diverse cloud resources of distinct service providers, which significantly decreases the effectiveness and efficiency of task implementation. Fundamentally, this is due to the lack of a generic service management interface that enables unified service access and manipulation regardless of provider or resource type. To address these issues, the thesis proposes a semantic-driven framework which integrates two main novel specification approaches, known as agility-oriented and fuzziness-embedded cloud service semantic specifications, and cloud service access and manipulation request operation specifications. These enable comprehensive service specification by capturing in-depth cloud concept details and their interactions, even across multiple service categories and abstraction levels. Using the specifications as a CC knowledge foundation, a unified service recommendation and management platform is implemented. Based on considerable experimental data collected on real-world cloud services, the approaches demonstrate notable effectiveness in service search, retrieval and recommendation tasks, while the platform shows outstanding performance for a wide range of service access, management and interaction tasks. Furthermore, the framework includes two sets of innovative specification processing algorithms specifically designed to serve advanced CC tasks: the fuzzy rating and ontology evolution algorithms establish a manner of collaborative cloud service specification, while the service orchestration reasoning algorithms reveal a promising means of dynamic service composition.
Style APA, Harvard, Vancouver, ISO itp.
48

Carlini, Emanuele. "Combining Peer-to-Peer and Cloud Computing for large scale on-line games". Thesis, IMT Alti Studi Lucca, 2012. http://e-theses.imtlucca.it/88/1/Carlini_phdthesis.pdf.

Pełny tekst źródła
Streszczenie:
This thesis investigates the combination of Peer-to-Peer (P2P) and Cloud Computing to support Massively Multiplayer Online Games (MMOGs). MMOGs are large-scale distributed applications where a large number of users concurrently share a real-time virtual environment. Commercial MMOG infrastructures are sized to support peak loads, incurring high economic costs. Cloud Computing represents an attractive solution, as it relieves MMOG operators of the burden of buying and maintaining hardware while offering the illusion of infinite machines; however, it requires balancing the trade-off between resource provisioning and operational costs. P2P-based solutions present several advantages, including inherent scalability, self-repair and natural load distribution, but they require additional mechanisms to suit the requirements of an MMOG, such as backup solutions to cope with peer unreliability and heterogeneity. We propose mechanisms that integrate P2P and Cloud Computing, combining their advantages. Our techniques allow operators to select the ideal trade-off between performance and economic cost. Using realistic workloads, we show that hybrid infrastructures can reduce the economic effort of the operator while offering a level of service comparable to centralized architectures.
Style APA, Harvard, Vancouver, ISO itp.
49

Josilo, Sladana. "Decentralized Algorithms for Resource Allocation in Mobile Cloud Computing Systems". Licentiate thesis, KTH, Nätverk och systemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-228084.

Pełny tekst źródła
Streszczenie:
The rapid increase in the number of mobile devices has been accompanied by an increase in their capabilities, such as computational power, memory and battery capacity. Yet the computational resources of individual mobile devices are still insufficient for various delay-sensitive and computationally intensive applications. These emerging applications could be supported by mobile cloud computing, which allows the use of external computational resources. Mobile cloud computing not only improves the users' perceived performance of mobile applications, but may also reduce the energy consumption of mobile devices and thus extend their battery life. However, the overall performance of mobile cloud computing systems is determined by the efficiency of allocating communication and computational resources. The work in this thesis proposes decentralized algorithms for allocating these two resources in mobile cloud computing systems. In the first part of the thesis, we consider the resource allocation problem in a mobile cloud computing system that allows mobile users to use cloud computational resources and the resources of each other. We consider that each mobile device aims at minimizing its perceived response time, and we develop a game-theoretical model of the problem. Based on this model, we propose an efficient decentralized algorithm that relies on average system parameters, and we show that the proposed algorithm could be a promising solution for coordinating multiple mobile devices. In the second part of the thesis, we consider the resource allocation problem in a mobile cloud computing system that consists of multiple wireless links and a cloud server. We model the problem as a strategic game in which each mobile device aims at minimizing a combination of its response time and its energy consumption for performing the computation. We prove the existence of equilibrium allocations of mobile cloud resources, and we use game-theoretical tools to design polynomial-time decentralized algorithms with a bounded approximation ratio. We then consider the problem of allocating communication and computational resources over time slots, and we show that equilibrium allocations still exist. Furthermore, we analyze the structure of equilibrium allocations, and we show that the proposed decentralized algorithm for computing equilibria achieves good system performance. By providing constructive equilibrium existence proofs, the results in this thesis provide low-complexity decentralized algorithms for allocating mobile cloud resources for various mobile cloud computing architectures.

Style APA, Harvard, Vancouver, ISO itp.
50

Hu, Yan. "Cloud Computing for Interoperability in Home-Based Healthcare". Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00605.

Pełny tekst źródła
Streszczenie:
The care of chronic disease has become a major challenge for healthcare institutions around the world. As the incidence and prevalence of chronic diseases continue to increase, traditional hospital-based healthcare is less and less able to meet the needs of every patient. Treating chronic disease depends heavily on the patient's daily behaviors, so patient-centered healthcare is encouraged. To improve patients' quality of life, moving the base of healthcare from the hospital to the home is imperative. Home-based chronic disease care involves many different healthcare organizations and healthcare providers; therefore, interoperability is a key approach to providing efficient and convenient home-based healthcare services. This thesis aims to reveal the interoperability issues in the current healthcare system and to point out an appropriate technical solution to overcome them. We start by collecting perspectives from both healthcare providers and healthcare recipients through interviews and online surveys to understand the situations and problems they face. In our research, we mainly use two current techniques, peer-to-peer (P2P) networks and cloud computing, to design prototypes for sharing healthcare data, developing both a P2P-based solution and a cloud-based solution. Comparing these two techniques, we found that the cloud-based solution addressed most of the deficiencies in healthcare interoperability. Although there are different types of interoperability, such as pragmatic, semantic and syntactic, we explored alternative solutions specifically for syntactic interoperability. To identify the state of the art and pinpoint the challenges and possible future directions for applying a cloud-based solution, we reviewed the literature on cloud-based eHealth solutions. We suggest that a hybrid cloud model, which contains access controls and techniques for securing data, would be a feasible solution for developing a citizen-centered, home-based healthcare system. Patients' healthcare records in hospitals and other healthcare centers could be kept in private clouds, while patients' daily self-management data could be published in a trusted public cloud. Patients, as the owners of their health data, should then decide who can access their data and under what conditions it is shared. Finally, we propose an online virtual community for home-based chronic disease healthcare: a cloud-based home healthcare platform. The requirements of the platform were mainly determined from the responses to an online questionnaire delivered to a target group of people. The platform integrates healthcare providers and recipients within the same platform. Through this shared platform, interoperability among different healthcare providers, as well as with healthcare recipients' self-management regimens, could be achieved.
Style APA, Harvard, Vancouver, ISO itp.